Pipeline transportation for hazardous liquids and natural gas is the safest form of freight transportation. By one measure, the annual number of accidents, the hazardous liquid pipeline industry’s safety record has greatly improved over the past 10 years. (See fig. 1.) From 1994 through 2003, accidents on interstate hazardous liquid pipelines decreased by almost 49 percent, from 245 in 1994 to 126 in 2003. However, the industry’s safety record for these pipelines has not improved for accidents with the greatest consequences—those resulting in a fatality, injury, or property damage totaling $50,000 or more—which we term serious accidents. The number of serious accidents stayed about the same over the 10-year period—about 88 every year. The overall accident rate for hazardous liquid pipelines—which considers both the amounts of products and the distances shipped—decreased from about 0.41 accidents per billion ton-miles shipped in 1994 to about 0.25 accidents per billion ton-miles shipped in 2002. The accident rate for serious interstate hazardous liquid pipeline accidents stayed the same, averaging about 0.15 accidents per billion ton-miles shipped from 1994 through 2002.

In contrast to the decreasing number of accidents overall for hazardous liquid pipelines, the annual number of accidents on interstate natural gas pipelines increased by almost 20 percent, from 81 in 1994 to 97 in 2003. (See fig. 2.) The number of serious accidents on interstate natural gas pipelines also increased, from 64 in 1994 to 84 in 2003, though they have fluctuated considerably over this time. Information on accident rates for natural gas pipelines is not available because of the lack of data on the amount of natural gas shipped through pipelines. For both hazardous liquid and natural gas pipelines, the lack of improvement in the number of serious accidents may be due in part to the relatively small number of these accidents.

OPS, within the Department of Transportation’s Research and Special Programs Administration (RSPA), administers the national regulatory program to ensure the safe transportation of natural gas and hazardous liquids by pipeline. The office attempts to ensure the safe operation of pipelines through regulation, national consensus standards, research, education (e.g., to prevent excavation-related damage), oversight of the industry through inspections, and enforcement when safety problems are found. The office uses a variety of enforcement tools, such as compliance orders and corrective action orders that require pipeline operators to correct safety violations, notices of amendment to remedy deficiencies in operators’ procedures, administrative actions to address minor safety problems, and civil penalties. OPS is a small federal agency. In fiscal year 2003, OPS employed about 150 people, about half of whom were pipeline inspectors.

Before imposing a civil penalty on a pipeline operator, OPS issues a notice of probable violation that documents the alleged violation and a notice of proposed penalty that identifies the proposed civil penalty amount. Failure by an operator to inspect the pipeline for leaks or unsafe conditions is an example of a violation that may lead to a civil penalty. OPS then allows the operator to present evidence either in writing or at an informal hearing. Attorneys from RSPA’s Office of Chief Counsel preside over these hearings. Following the operator’s presentation, the civil penalty may be affirmed, reduced, or withdrawn.
If the hearing officer determines that a violation did occur, OPS’s associate administrator issues a final order that requires the operator to correct the safety violation (if a correction is needed) and pay the penalty (called the “assessed penalty”). The operator has 20 days after the final order is issued to pay the penalty. The Federal Aviation Administration (FAA) collects civil penalties for OPS. From 1992 through 2002, federal law allowed OPS to assess up to $25,000 for each day a violation continued, not to exceed $500,000 for any related series of violations. In December 2002, the Pipeline Safety Improvement Act increased these amounts to $100,000 and $1 million, respectively.

The effectiveness of OPS’s enforcement strategy cannot be determined because OPS has not incorporated three key elements of effective program management—clear performance goals for the enforcement program, a fully defined strategy for achieving these goals, and performance measures linked to goals that would allow an assessment of the enforcement strategy’s impact on pipeline safety.

OPS’s enforcement strategy has undergone significant changes in the last 5 years. Before 2000, the agency emphasized partnering with the pipeline industry to improve pipeline safety rather than punishing noncompliance. In 2000, in response to concerns that its enforcement was weak and ineffective, the agency decided to institute a “tough but fair” enforcement approach and to make greater use of all its enforcement tools, including larger and more frequent civil penalties. In 2001, to further strengthen its enforcement, OPS began issuing more corrective action orders requiring operators to address safety problems that led or could lead to pipeline accidents. In 2002, OPS created a new Enforcement Office to focus more on enforcement and help ensure consistency in enforcement decisions. However, this new office is not yet fully staffed, and key positions remain vacant. Also in 2002, OPS began to enforce its new integrity management and operator qualification standards in addition to its minimum safety standards. Initially, while operators were gaining experience with the new, complex integrity management standards, OPS primarily used notices of amendment, which require improvements in procedures, rather than stronger enforcement actions. Now that operators have this experience, OPS has begun to make greater use of civil penalties in enforcing these standards.

OPS has also recently begun to reengineer its enforcement program. Efforts are under way to develop a new enforcement policy and guidelines, develop a streamlined process for handling enforcement cases, modernize and integrate the agency’s inspection and enforcement databases, and hire additional enforcement staff. However, as I will now discuss, OPS has not put in place key elements of effective management that would allow it to determine the impact of its evolving enforcement program on pipeline safety.

Although OPS has overall performance goals, it has not established specific goals for its enforcement program. According to OPS officials, the agency’s enforcement program is designed to help achieve the agency’s overall performance goals of (1) reducing the number of pipeline accidents by 5 percent annually and (2) reducing the amount of hazardous liquid spills by 6 percent annually.
Other agency efforts—including the development of a risk-based approach to finding and addressing significant threats to pipeline safety and of education to prevent excavation-related damage to pipelines—are also designed to help achieve these goals. OPS’s overall performance goals are useful because they identify the end outcomes, or ultimate results, that OPS seeks to achieve through all its efforts. However, OPS has not established performance goals that identify the intermediate outcomes, or direct results, that OPS seeks to achieve through its enforcement program. Intermediate outcomes show progress toward achieving end outcomes. For example, enforcement actions can result in improvements in pipeline operators’ safety performance—an intermediate outcome that can then result in the end outcome of fewer pipeline accidents and spills. OPS is considering establishing a goal to reduce the time it takes the agency to issue final enforcement actions. While such a goal could help OPS improve the management of the enforcement program, it does not reflect the various intermediate outcomes the agency hopes to achieve through enforcement. Without clear goals for the enforcement program that specify intended intermediate outcomes, agency staff and external stakeholders may not be aware of what direct results OPS is seeking to achieve or how enforcement efforts contribute to pipeline safety.

OPS has not fully defined its strategy for using enforcement to achieve its overall performance goals. According to OPS officials, the agency’s increased use of civil penalties and corrective action orders reflects a major change in its enforcement strategy. Although OPS began to implement these changes in 2000, it has not yet developed a policy that defines this new, more aggressive enforcement strategy or describes how it will contribute to the achievement of its performance goals. In addition, OPS does not have up-to-date, detailed internal guidelines on the use of its enforcement tools that reflect its current strategy. Furthermore, although OPS began enforcing its integrity management standards in 2002 and received greater enforcement authority under the 2002 pipeline safety act, it does not yet have guidelines in place for enforcing these standards or implementing the new authority provided by the act. According to agency officials, OPS management communicates enforcement priorities and ensures consistency in enforcement decisions through frequent internal meetings and detailed inspection protocols and guidance.

Agency officials recognize the need to develop an enforcement policy and up-to-date, detailed enforcement guidelines and have been working to do so. To date, the agency has completed an initial set of enforcement guidelines for its operator qualification standards and has developed other draft guidelines. However, because of the complexity of the task, agency officials do not expect that the new enforcement policy and remaining guidelines will be finalized until sometime in 2005. The development of an enforcement policy and guidelines should help define OPS’s enforcement strategy; however, it is not clear whether this effort will link OPS’s enforcement strategy with intermediate outcomes, since agency officials have not established performance goals specifically for their enforcement efforts. We have reported that such a link is important.
According to OPS officials, the agency currently uses three performance measures and is considering three additional measures to determine the effectiveness of its enforcement activities and other oversight efforts. (See table 1.) The three current measures provide useful information about the agency’s overall efforts to improve pipeline safety, but do not clearly indicate the effectiveness of OPS’s enforcement strategy because they do not measure the intermediate outcomes of enforcement actions that can contribute to pipeline safety, such as improved compliance. The three measures that OPS is considering could provide more information on the intermediate outcomes of the agency’s enforcement strategy, such as the frequency of repeat violations and the number of repairs made in response to corrective action orders, as well as other aspects of program performance, such as the timeliness of enforcement actions.

We have found that agencies that are successful in measuring performance strive to establish measures that demonstrate results, address important aspects of program performance, and provide useful information for decision-making. While OPS’s new measures may produce better information on the performance of its enforcement program than is currently available, OPS has not adopted key practices for achieving these characteristics of successful performance measurement systems:

- Measures should demonstrate results (outcomes) that are directly linked to program goals. Measures of program results can be used to hold agencies accountable for the performance of their programs and can facilitate congressional oversight. If OPS does not set clear goals that identify the desired results (intermediate outcomes) of enforcement, it may not choose the most appropriate performance measures. OPS officials acknowledge the importance of developing such goals and related measures but emphasize that the diversity of pipeline operations and the complexity of OPS’s regulations make this a challenging task.

- Measures should address important aspects of program performance and take priorities into account. An agency official told us that a key factor in choosing final measures would be the availability of supporting data. However, the most essential measures may require the development of new data. For example, OPS has developed databases that will track the status of safety issues identified in integrity management and operator qualification inspections, but it cannot centrally track the status of safety issues identified in enforcing its minimum safety standards. Agency officials told us that they are considering how to add this capability as part of an effort to modernize and integrate their inspection and enforcement databases.

- Measures should provide useful information for decision-making, including adjusting policies and priorities. OPS uses its current measures of enforcement performance in a number of ways, including monitoring pipeline operators’ safety performance and planning inspections. While these uses are important, they are of limited help to OPS in making decisions about its enforcement strategy. OPS has acknowledged that it has not used performance measurement information in making decisions about its enforcement strategy. OPS has made progress in this area by identifying possible new measures of enforcement results (outcomes) and other aspects of program performance, such as indicators of the timeliness of enforcement actions, that may prove more useful for managing the enforcement program.
In 2000, in response to criticism that its enforcement activities were weak and ineffective, OPS increased both the number and the size of the civil monetary penalties it assessed. Pipeline safety stakeholders expressed differing opinions about whether OPS’s civil penalties are effective in deterring noncompliance with pipeline safety regulations.

OPS assessed more civil penalties during the past 4 years under its current “tough but fair” enforcement approach than it did in the previous 5 years, when it took a more lenient enforcement approach. (See fig. 3.) From 2000 through 2003, OPS assessed 88 civil penalties (22 per year on average), compared with 70 civil penalties from 1995 through 1999 (about 14 per year on average). For the first 5 months of 2004, OPS proposed 38 civil penalties. While the recent increase in the number and the size of civil penalties may reflect OPS’s new “tough but fair” enforcement approach, other factors, such as more severe violations, may be contributing to the increase as well.

Overall, OPS does not use civil penalties extensively. Civil penalties represent about 14 percent (216 out of 1,530) of all enforcement actions taken over the past 10 years. OPS makes more extensive use of other types of enforcement actions that require pipeline operators to fix unsafe conditions and improve inadequate procedures, among other things. In contrast, civil penalties impose monetary sanctions for violating safety regulations but do not require safety improvements. OPS may increase its use of civil penalties as it begins to apply them to a greater degree for violations of its integrity management standards.

The average size of the civil penalties has also increased. From 1995 through 1999, the average assessed civil penalty was about $18,000. From 2000 through 2003, the average assessed civil penalty increased by 62 percent, to about $29,000. Assessed penalty amounts ranged from $500 to $400,000.

In some instances, OPS reduces proposed civil penalties when it issues its final order. We found that penalties were reduced 31 percent of the time during the 10-year period covered by our work (66 of 216 instances). These penalties were reduced by about 37 percent in total (from $2.8 million to $1.7 million). This analysis does not include the extraordinarily large penalty of $3.05 million that OPS proposed as a result of the Bellingham, Washington, accident, because this outlier would have skewed our results, making the typical reduction appear larger than it actually is. OPS has assessed the operator $250,000 as of July 2004. Had we included this penalty in our analysis, we would have found that over this period OPS reduced total proposed penalties by about two-thirds, from about $5.8 million to about $2 million.

OPS’s database does not provide summary information on why penalties are reduced. According to an OPS official, the agency reduces penalties when an operator presents evidence that the OPS inspector’s finding is weak or wrong or when the pipeline’s ownership changes during the period between the proposed and the assessed penalty. It was not practical for us to gather information on a large number of penalties that were reduced, but we did review several to determine the reasons for the reductions. OPS reduced one of the civil penalties we reviewed because the operator provided evidence that OPS inspectors had miscounted the number of pipeline valves that OPS said the operator had not inspected.
Since the violation was not as severe as OPS had stated, OPS reduced the proposed penalty from $177,000 to $67,000. Because we reviewed only a small number of instances in which penalties were reduced, we cannot say whether these examples are typical.

Of the 216 penalties that OPS assessed from 1994 through 2003, pipeline operators paid the full amount 93 percent of the time (200 instances) and reduced amounts 1 percent of the time (2 instances). (See fig. 4.) Fourteen penalties (6 percent) remain unpaid, totaling about $836,700 (or 18 percent of penalty amounts). We followed up on one of the two penalties paid in reduced amounts. In this case, the operator requested that OPS reconsider the assessed civil penalty, and OPS reduced it from $5,000 to $3,000 because the operator had a history of cooperation and OPS wanted to encourage future cooperation. Neither FAA’s nor OPS’s data show why the 14 unpaid penalties have not been collected. From the information provided by both agencies, we determined that OPS closed 2 of the penalty cases without collecting the penalties, operators are appealing 5 penalties, OPS recently assessed 3 penalties, and OPS acknowledged that 4 penalties (totaling $45,200) should have been collected.

Although OPS has increased both the number and the size of the civil penalties it has imposed, the effect of this change on deterring noncompliance with safety regulations, if any, is not clear. The stakeholders we spoke with expressed differing views on whether the civil penalties deter noncompliance. The pipeline industry officials we contacted believed that, to a certain extent, OPS’s civil penalties encourage pipeline operators to comply with pipeline safety regulations because they view all of OPS’s enforcement actions as deterrents to noncompliance. However, some industry officials said that OPS’s enforcement actions are not their primary motivation for safety. Instead, they said that pipeline operators are motivated to operate safely because they need to avoid any type of accident, incident, or OPS enforcement action that impedes the flow of products through the pipeline and hinders their ability to provide good service to their customers. Pipeline industry officials also said that they want to operate safely and avoid pipeline accidents because accidents generate negative publicity and may result in costly private litigation against the operator.

Most of the interstate agents, representatives of their associations, and insurance company officials expressed views similar to those of the pipeline industry officials, saying that they believe civil penalties deter operators’ noncompliance with regulations to a certain extent. However, a few disagreed with this point of view. For example, the state agency representatives and a local government official said that OPS’s civil penalties are too small to be deterrents. Pipeline safety advocacy groups that we talked to also said that the civil penalty amounts OPS imposes are too small to have any deterrent effect on pipeline operators. As discussed earlier, for 2000 through 2003, the average assessed penalty was about $29,000.

According to economic literature on deterrence, pipeline operators may be deterred if they expect a sanction, such as a civil penalty, to exceed any benefits of noncompliance. Such benefits could, in some cases, be lower operating costs.
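To state this condition explicitly, the expected-sanction model underlying this literature can be written as a simple inequality. The notation and the worked numbers below are our illustration, not figures drawn from the literature or from the operators we contacted:

```latex
% Illustrative deterrence condition (our notation):
%   p = operator's expected probability of being detected and penalized
%   F = expected civil penalty if sanctioned
%   B = expected benefit of noncompliance (e.g., avoided operating costs)
\[
  p \cdot F > B
  \quad \Longrightarrow \quad \text{noncompliance is deterred}
\]
```

Under this sketch, if an operator judged the chance of being penalized for a violation at one in two, the 2000 through 2003 average assessed penalty of about $29,000 would deter only violations whose expected benefit was below about $14,500. (The one-in-two probability is hypothetical.)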
The literature also recognizes that the negative consequences of noncompliance—such as those stemming from lawsuits, bad publicity, and the value of the product lost from accidents—can deter noncompliance along with regulatory agency oversight. Thus, for example, the expected costs of a legal settlement could overshadow the lower operating costs expected from noncompliance, and noncompliance might be deterred.

Mr. Chairman, this concludes my prepared statement. We will report more fully on these and other issues in a report that we expect to issue later this week. We also anticipate making recommendations to improve OPS’s ability to demonstrate the effectiveness of its enforcement strategy and to improve OPS’s and FAA’s management controls over the collection of civil penalties. I would be pleased to respond to any questions that you or Members of the Subcommittee might have.

For information on this testimony, please contact Katherine Siggerud at (202) 512-2834 or siggerudk@gao.gov. Individuals making key contributions to this testimony are Jennifer Clayborne, Judy Guilliams-Tapia, Bonnie Pignatiello Leer, Gail Marnik, James Ratzenberger, and Gregory Wilmoth.

This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Interstate pipelines carrying natural gas and hazardous liquids (such as petroleum products) are safer to the public than other modes of freight transportation. The Office of Pipeline Safety (OPS), the federal agency that administers the national regulatory program to ensure safe pipeline transportation, has been undertaking a broad range of activities to make pipeline transportation safer. However, the number of serious accidents--those involving deaths, injuries, and property damage of $50,000 or more--has not fallen. When safety problems are found, OPS can take enforcement action against pipeline operators, including requiring the correction of safety violations and assessing monetary sanctions (civil penalties). This testimony is based on ongoing work for the House Committee on Energy and Commerce and for other committees, as required by the Pipeline Safety Improvement Act of 2002. The testimony provides preliminary results on (1) the effectiveness of OPS's enforcement strategy and (2) OPS's assessment of civil penalties.

The effectiveness of OPS's enforcement strategy cannot be determined because the agency has not incorporated three key elements of effective program management--clear program goals, a well-defined strategy for achieving goals, and performance measures that are linked to program goals. Without these key elements, the agency cannot determine whether recent and planned changes in its strategy will have the desired effects on pipeline safety. Over the past several years, OPS has focused primarily on other efforts--such as developing a new risk-based regulatory approach--that it believes will change the safety culture of the industry. OPS has also become more aggressive in enforcing its regulations and now plans to further strengthen the management of its enforcement program. In particular, OPS is developing an enforcement policy that will help define its enforcement strategy and has taken initial steps toward identifying new performance measures. However, OPS does not plan to finalize the policy until 2005 and has not adopted key practices for achieving successful performance measurement systems, such as linking measures to goals.

OPS increased both the number and the size of the civil penalties it assessed against pipeline operators over the last 4 years (2000-2003), following a decision to be "tough but fair" in assessing penalties. OPS assessed an average of 22 penalties per year during this period, compared with an average of 14 per year for the previous 5 years (1995-1999), a period of more lenient "partnering" with industry. In addition, the average penalty increased from $18,000 to $29,000 over the two periods. About 94 percent of the 216 penalties levied from 1994 through 2003 have been paid. The civil penalty is one of several actions OPS can take when it finds a violation, and these penalties represent about 14 percent of all enforcement actions over the past 10 years. While OPS has increased the number and the size of its civil penalties, stakeholders--including industry, state, and insurance company officials and public advocacy groups--expressed differing views on whether these penalties deter noncompliance with safety regulations. Some, such as pipeline operators, thought that any penalty was a deterrent if it kept the pipeline operator in the public eye, while others, such as safety advocates, told us that the penalties were too small to be effective sanctions.
The legal framework for addressing and paying for maritime oil spills is established by OPA, which places the primary burden of liability and the costs of oil spills on the owner and operator of the vessel or onshore facility and the lessee or permittee of the area in which an offshore facility is located. This “polluter pays” framework requires that the responsible party or parties assume the burden of spill response, natural resource restoration, and compensation to those damaged by the spill, up to a specified limit of liability. In general, the level of potential exposure under OPA depends on the kind of vessel or facility from which a spill originates and is limited in amount unless, among other reasons, the oil discharge is the result of gross negligence or willful misconduct or a violation of federal operation, safety, and construction regulations, in which case liability under OPA is unlimited.

Subject to certain exceptions, such as removal cost claims by states, all nonfederal claims for OPA-compensable removal or damages must be submitted first to the responsible party or the responsible party’s guarantor. If the responsible party denies a claim or does not settle it within 90 days, a claimant may present the claim to the federal government to be considered for payment. OPA authorizes use of the Fund, subject to limitations on the amount and types of costs, to pay specified damage claims above a responsible party’s liability limit, to pay damage claims or removal costs when a responsible party does not pay or cannot be identified, and to pursue reimbursement from the responsible party for oil removal and damage claims paid by the Fund. Under OPA, the amount that may be paid from the Fund for one incident is limited to $1 billion. Further, within the $1 billion cap, the costs for conducting a natural resource damage assessment and damages paid in connection with any single incident cannot exceed $500 million.

OPA defines the costs for which responsible parties are liable and for which the Fund is made available for compensation in the event that the responsible party does not pay, cannot pay, or is not identified. “OPA-compensable” costs include two main types: damage claims and oil removal. OPA-compensable damages cover a wide range of both actual and potential adverse impacts from an oil spill. For example, damages from an oil spill include the loss of profits to the owner of a commercial charter boat if the boat was trapped in port because the Coast Guard closed the waterway in order to remove the oil, or personal property damage to the owner of a recreational boat or waterfront property that was damaged by oil from the spill, for which a claim may be made first to any of the responsible parties, then to the Fund. (See table 1.) Oil removal costs are incurred by the federal government or any other entity when responding to, containing, and cleaning up a spill. For example, removal costs include cleaning up adjoining shoreline affected by the oil spill and the equipment used in the response—skimmers to pull oil from the water, booms to contain the oil, planes for aerial observation—as well as salaries, travel, and lodging costs for responders.

Individual and business claimants may seek reimbursement from the Fund for damages caused by an oil spill by submitting a claim to NPFC. In general, if a responsible party is identified, the claimant must first submit the claim to the responsible party.
If the responsible party is unable or unwilling to pay the claim within 90 days of submission, the claimant may then elect to submit the claim to NPFC for adjudication or pursue a lawsuit against the responsible party. Certain circumstances exist where a claimant may submit a claim to NPFC without first submitting it to the responsible party. These include instances where (1) NPFC advertises that claimants may submit claims directly to the Fund, (2) NPFC notifies claimants in writing that they may submit claims directly to the Fund, (3) a responsible party submits a claim for costs incurred beyond its liability, (4) the governor of a state submits a claim for removal costs incurred by the state, and (5) a U.S. claimant submits a claim to the Fund when a foreign offshore unit has discharged oil causing damage for which the Fund is available.

Once NPFC receives a claim, NPFC staff conduct an initial review to reasonably assure basic regulatory compliance. The claim is assigned to a claims manager who reviews the claim to reasonably assure it is payable under OPA, has not been paid by the responsible party, and is under the $1 billion per-incident cap. As a part of the adjudication process, any claim payment over $100,000 is sent to the Coast Guard Judge Advocate General’s Office of Claims and Litigation for review. If NPFC agrees to pay the claim, the claimant is notified and has 60 days to accept the offer. After acceptance of the offer, NPFC forwards payment information to the Coast Guard’s Finance Center to be processed. If the claim is denied, NPFC sends the claimant the reason for denial and advises that within 60 days, the claimant can resubmit the claim for reconsideration. (See fig. 1 for an illustration of the claim process; a simplified timeline sketch appears below.)

Responding to oil spills involves a coordinated effort by various parties, including (1) the Coast Guard or the Environmental Protection Agency (EPA) as Federal On-Scene Coordinator (FOSC); (2) federal, state, local, and Indian tribal government agencies; (3) private companies that specialize in oil spill cleanup; and (4) the responsible parties, their guarantors, and qualified individuals designated by responsible parties to respond to oil spills. To fund government agencies’ oil spill removal costs, the FOSC issues authorizations to quickly obtain services and assistance from government agencies and private companies, verifies that the services or goods were received and are consistent with the National Oil and Hazardous Substances Pollution Contingency Plan, commonly called the National Contingency Plan, and certifies the supporting documentation. The FOSC then forwards the contractor invoice or Military Interdepartmental Purchase Request documentation to the contracting officer at the Shore Infrastructure Logistics Center (SILC) and the Pollution Removal Funding Authorization documentation to the case officer at NPFC for review and authorization to pay. Once payment is authorized, the Coast Guard’s Finance Center pays the government agencies and private companies. (See fig. 2 for an illustration of the payment process related to oil spill removal costs.)

In conducting a coastal oil spill response, the lead federal authority, or FOSC, is usually the nearest Coast Guard Sector and is headed by the Coast Guard captain of the port. When notice of an oil spill is received by the Coast Guard, and as soon as the source is identified, NPFC must notify the responsible party or parties of their designation.
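The deadlines in the claims process described above reduce to a few simple rules. The sketch below models them in Python as an illustration only; it is our rendering of the process as the report describes it, not NPFC software, and the names and structure are ours:

```python
from dataclasses import dataclass

# A simplified model of the OPA claims deadlines described above.
# This is our illustration, not NPFC software or official logic.

@dataclass
class Claim:
    presented_to_responsible_party: bool  # claims generally go to the RP first
    days_since_presentment: int           # days since the RP received the claim

def may_elect_npfc_adjudication(claim: Claim, denied_by_rp: bool) -> bool:
    """A claimant may submit the claim to NPFC once the responsible party
    denies it or fails to settle it within 90 days of submission (the
    direct-submission exceptions listed above are omitted here)."""
    return claim.presented_to_responsible_party and (
        denied_by_rp or claim.days_since_presentment >= 90
    )

# Other windows and thresholds described in the report:
OFFER_ACCEPTANCE_WINDOW_DAYS = 60         # claimant has 60 days to accept an offer
RECONSIDERATION_WINDOW_DAYS = 60          # denied claims may be resubmitted within 60 days
JAG_REVIEW_THRESHOLD_DOLLARS = 100_000    # payments over this amount get JAG claims review
PER_INCIDENT_CAP_DOLLARS = 1_000_000_000  # Fund payments per incident are capped at $1 billion

if __name__ == "__main__":
    claim = Claim(presented_to_responsible_party=True, days_since_presentment=95)
    print(may_elect_npfc_adjudication(claim, denied_by_rp=False))  # True: 90 days elapsed
```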
According to NPFC guidance, “the [responsible party] has primary responsibility for response to a spill incident, including setting up the [Incident Command System] and joining with the FOSC and state on-scene coordinator (SOSC) in the [Unified Command].” However, as reflected in the National Contingency Plan, NPFC guidance explains that “even when the responsible party leads a reasonable response effort, the FOSC is always in ultimate command and may decide to direct specific action or, for whatever reason it is deemed necessary, actually take the lead role in the response.” If there is a potential for claims activity, NPFC will issue a Notice of Designation to the responsible party requiring advertisement to potential injured parties to advise them of their rights to file claims. If the responsible party is unknown or fails to take action, NPFC will advertise and accept claims for adjudication.

The Fund is divided into two major components: the Emergency Fund and the Principal Fund. The Emergency Fund consists of $50 million that the President may make available each year to cover immediate expenses associated with mitigating the threat of an oil spill; costs of oil spill containment, countermeasures, and cleanup and disposal activities; and other costs to initiate natural resource damage assessments. The $50 million is transferred annually from the Principal Fund to the Emergency Fund, and amounts made available remain available until expended. The Principal Fund is used to provide funds for claims, such as natural resource damage claims, loss of profits and earning capacity claims, and loss of government revenues. Congress has appropriated money from the Principal Fund to certain agencies, such as the Coast Guard, EPA, and the Department of the Interior—each of which has received an annual appropriation from the Fund to cover administrative, operational, personnel, and enforcement costs. Congress appropriated the following from the Fund for fiscal year 2014:

- Department of Transportation, Pipeline and Hazardous Materials Safety Administration: $18.573 million
- Coast Guard: $45.0 million
- Department of the Interior, Bureau of Safety and Environmental Enforcement: $14.899 million
- EPA’s Inland Oil Spills Programs: $18.209 million
- Department of the Treasury, Bureau of the Fiscal Service: $165,000

The Fund is also required annually to provide funds to the Denali Commission and the Prince William Sound Oil Spill Recovery Institute. Specifically, section 8102 of OPA provided for the eventual transfer of the remainder of the balance in the Trans-Alaska Pipeline Liability Fund to the Fund. The Omnibus Consolidated and Emergency Supplemental Appropriations Act, 1999, provided that the interest produced from the investment of the Trans-Alaska Pipeline Liability Fund shall be transferred annually to the Denali Commission for a program to repair or replace bulk fuel storage tanks in Alaska. In fiscal year 2014, the Fund transferred $6.5 million to the Denali Commission. Similarly, OPA established the Prince William Sound Oil Spill Recovery Institute, in part, to identify and develop the best available techniques, equipment, and materials for dealing with oil spills in the Arctic and sub-Arctic marine environments. The institute’s annual funding is paid by the Fund based on interest earned on a $35.3 million trust, which is held by the Department of the Treasury. In fiscal year 2014, the institute received $854,833.
The Fund’s primary revenue source is an 8-cent per-barrel tax on both domestically produced and imported petroleum products. Another significant source of revenue has been transfers from other existing pollution funds. OPA consolidated into the Fund the liability and compensation requirements of certain prior federal oil pollution laws and their supporting funds, including the Federal Water Pollution Control Act, Deepwater Port Act, Trans-Alaska Pipeline System Authorization Act, and Outer Continental Shelf Lands Act. Total transfers into the Fund since 1990 have exceeded $550 million; however, no additional funds from these sources remain. Other revenue sources include recoveries from responsible parties for costs of removal and damages, fines and penalties paid pursuant to various statutes, and interest earned on the Fund’s U.S. Treasury investments. (See fig. 3.)

NPFC utilizes three policy and procedural guides as part of its internal control framework over the damage claim and oil removal process: (1) the 2011 Standard Operating Procedures of the Claims Adjudication Division, which contains the policies and procedures related to the damage claim process; (2) the 2007 Case Management Division Standard Operating Procedures, which contains policies and procedures related to the oil removal process and, to a lesser extent, the damage claim process; and (3) the NPFC User Reference Guide, which is a reference tool for Coast Guard and EPA FOSCs.

Standards for Internal Control in the Federal Government provides the overall framework for establishing and maintaining internal control across the federal government and for identifying and addressing major performance and management challenges and areas at greatest risk of fraud, waste, abuse, and mismanagement. It states that internal controls comprise the plans, methods, and procedures used to meet missions, goals, and objectives. To achieve this, management is responsible for developing the detailed policies, procedures, and practices to fit the agency’s operations and to reasonably assure that they are built into and are integral to operations.

NPFC has implemented a system of internal controls over damage claim and oil removal disbursements. For damage claim disbursements, we did not identify any deficiencies in the design and implementation of the controls we tested. However, our review of a statistical sample of oil removal disbursements identified internal control deficiencies that were caused by design deficiencies or by staff not adhering to certain key controls as designed. These deficiencies included missing invoice certifications, missing supporting cost documentation, and failures to identify high visibility spills. Our review also identified other deficiencies in the design of controls related to oil removal disbursements, including the lack of policies and procedures for taking advantage of vendor discounts, for ensuring that document retention policies are consistently followed, and for processing EPA disbursements.

Our testing of the 27 selected high dollar damage claim disbursements, which accounted for 93 percent of the total damage claim disbursements for fiscal years 2011 through 2013, found that the design and implementation of relevant key controls provide reasonable assurance that damage claim expenses are appropriately disbursed.
For example, our tests of 18 key controls included determining whether a claim was submitted within the 3-year statutory period and whether NPFC’s Legal Division and Office of Claims and Litigation reviewed the claim. We did not identify any deficiencies during our testing of the 27 damage claim disbursements.

In testing a stratified random sample of 200 oil removal disbursements, each valued at less than or equal to $500,000, we found that 9 of the 12 controls tested were effectively designed and implemented, while 1 control was effectively designed but was not effectively implemented and 2 controls were not effectively designed or implemented. Specifically, these internal controls were not effective for (1) certifying invoices, (2) maintaining supporting documentation, and (3) indicating whether an oil spill is classified as a high visibility oil spill. Based on the results of our stratified random sample, we estimate that $2.5 million of the population of $108 million in oil removal disbursements each valued at less than or equal to $500,000 made during fiscal years 2011 through 2013 could contain one or more of these control deficiencies, increasing the risk of improper payments from the Fund.

In responding to oil spills, the FOSC has available both private contractors and government agencies to provide an appropriate response. For example, if a cleanup contractor is required, the FOSC would place an order for the cleanup contractor with a delivery order under the Basic Ordering Agreement administered by SILC. The contracting officer issues the order and a contract for the necessary services. A copy of the FOSC’s documentation is provided to NPFC as documentation of these expenses. Government agencies can also be called upon to provide services during a spill response. The FOSC monitors the performance of the contractors and government agencies, reporting on progress via periodic pollution reports. When oil removal services are completed, the contractor or federal agency provides documentation to the FOSC. The FOSC reviews the documentation and certifies that services have been received. The FOSC then forwards the documentation to SILC’s contracting officer or NPFC’s case officer, as appropriate. The documentation is reviewed and payments are authorized.

Certification of invoices is an important internal control, as it reduces the risk of processing ineligible invoices. However, during testing of our stratified random sample of oil removal disbursements, we identified five oil removal disbursements that lacked FOSC certification. Two of these five disbursements were made to EPA, and the certifications were not requested by NPFC. The remaining three certifications could not be located. In addition, during our testing of the 61 high dollar oil removal disbursements over $500,000, we identified two additional disbursements that lacked FOSC certification.

Standards for Internal Control in the Federal Government states that control activities, such as verifications, should be effective and efficient in accomplishing the agency’s control objectives. The Coast Guard and EPA entered into a memorandum of understanding dated June 11, 2012, stating that the EPA FOSC shall review all costs incurred during the removal operation and certify that they are proper and consistent with the National Contingency Plan.
The National Contingency Plan states that during all phases of response, the lead agency (the Coast Guard or EPA) shall complete and maintain documentation to support all actions taken under the plan and to form the basis for cost recovery. It also designates the FOSC to coordinate and direct responses. In addition, NPFC’s 2007 Case Management Division Standard Operating Procedures states, “As the services are provided, the FOSC certifies that the services were received and are consistent with the National Contingency Plan then certifies eligibility for reimbursement.”

NPFC officials provided various explanations for the missing certifications. The officials stated that (1) for three of the five invoices from our stratified random sample of oil removal disbursements, NPFC could not produce certified invoices because the invoices were likely filed incorrectly; (2) for the remaining two disbursements from the stratified random sample, which were EPA disbursements, the invoices were already being processed when the memorandum of understanding between the Coast Guard and EPA was signed; and (3) for the two high dollar disbursements tested, the contracting officer had firsthand knowledge of the receipt of services, so it was not necessary to rely on the FOSC’s certification.

SILC officials further stated that the contracting officer is not bound by the 2007 Case Management Division Standard Operating Procedures because, under the Federal Acquisition Regulation (FAR), acceptance of services is the responsibility of the contracting officer. However, the FAR also states that acceptance generally constitutes acknowledgment that the supplies or services conform with applicable contract quality and quantity requirements, and when this responsibility is assigned to a cognizant contract administration office or to another agency, acceptance by that office or agency is binding on the government. Because the Case Management Standard Operating Procedures assign the responsibility to the FOSC to certify the receipt of goods, the FOSC should have certified the disbursements. In addition, the 2007 Case Management Division Standard Operating Procedures has guidance and related checklists that include obtaining FOSC certification.

Processing invoices that lack FOSC certification puts NPFC at risk of improper payments. For instance, a payment could be made for services or supplies that the FOSC did not authorize. Although NPFC has established policies and procedures, the documentation issues identified demonstrate that management has not reasonably assured that the policies and procedures are consistently followed.

We found three oil removal disbursements in our stratified random sample that did not include appropriate supporting documentation. Specifically, NPFC was unable to provide two travel orders and a contract invoice to support three oil removal disbursements. These were in addition to the three FOSC-certified invoices NPFC could not provide, as discussed earlier. According to NPFC staff, the documentation was not included in the case file because it was likely filed incorrectly. We found that NPFC did not have policies and procedures requiring supervisory review of the filing process. Having policies and procedures that require periodic checks of the files could provide reasonable assurance that documentation is properly maintained.
According to NPFC’s 2007 Case Management Division Standard Operating Procedures, cost documentation, including contract invoices and travel orders, should be maintained. In addition, Standards for Internal Control in the Federal Government provides that control activities should be effective and efficient in accomplishing the agency’s control objectives. Specifically, transactions should be clearly documented, the documentation should be readily available for examination, and supervisory activity should occur in the course of operations. The lack of documented transactions could lead to the payment of unauthorized transactions or payment for the wrong amounts.

Our testing of the stratified random sample of oil removal disbursements identified five oil spills, each of which had a total cost over $5 million, that were not identified in NPFC’s Case Information Management System (CIMS) as high visibility spills. Per NPFC’s 2007 Case Management Division Standard Operating Procedures, the identification of a spill as high visibility prompts NPFC to incorporate additional oversight procedures for the oil spill. The additional procedures are necessary to provide the careful consideration required while reviewing a high visibility case. CIMS was designed to allow NPFC case managers to identify any case over $5 million as high visibility. In addition to the five oil spills found in our stratified random sample that were not identified in CIMS as high visibility oil spills, we identified two additional oil spills during testing of the high dollar oil removal disbursements that were not identified in CIMS as high visibility oil spills.

NPFC’s 2007 Case Management Division Standard Operating Procedures states that, among other criteria, any case with a ceiling higher than $5 million should be identified as a high visibility case. NPFC’s practice has changed from using the high visibility identifier, which is specified in the current policy, to routinely discussing all high visibility cases of any value during weekly staff meetings with high-level officials, including the Director, Deputy Director, and the Legal Division Chief. The details of the weekly staff meetings are documented in the meeting minutes and are posted to NPFC’s internal website; notification of recent meeting minutes is sent to all staff via e-mail. In response to our inquiry, NPFC officials stated that they plan to eliminate the requirement to identify cases as high visibility in CIMS because the identification of high visibility oil spills takes place during the weekly staff meetings where these spills are discussed.

According to Standards for Internal Control in the Federal Government, control activities should be effective and efficient in accomplishing the agency’s control objectives. Specifically, internal controls need to be clearly documented and properly managed and maintained to reasonably assure relevance. Following a practice that differs from policy may cause NPFC to monitor high visibility spills inconsistently and to miss identifying or reacting to other important events.

We found that for three disbursements from the high dollar sample, SILC and the Finance Center did not take the discount for early payment offered by the vendors. Some vendors offer cash discounts on the amount owed when a customer pays within specified time frames as a means of encouraging faster payment.
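To illustrate how such terms work, consider hypothetical terms of "2/10 net 30": the payer may deduct 2 percent if it pays within 10 days, and the full amount is due within 30 days. The sketch below applies such terms; the terms and dollar amounts are hypothetical and are not drawn from the disbursements we tested:

```python
# Hypothetical early-payment discount, e.g. terms of "2/10 net 30":
# 2 percent off if paid within 10 days; the full amount is due in 30 days.
def amount_due(invoice_total: float, discount_rate: float,
               discount_window_days: int, days_to_payment: int) -> float:
    """Return the payment amount, applying the early-payment discount
    only when payment falls within the discount window."""
    if days_to_payment <= discount_window_days:
        return invoice_total * (1 - discount_rate)
    return invoice_total

# Paying a hypothetical $100,000 invoice on day 9 saves $2,000 under
# 2/10 net 30 terms; paying on day 25 forfeits the discount.
print(amount_due(100_000, 0.02, 10, 9))   # 98000.0
print(amount_due(100_000, 0.02, 10, 25))  # 100000.0
```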
However, we found that SILC did not consistently identify available discounts and the Finance Center did not consistently take available discounts. Specifically, in one instance SILC identified the discount terms and the Finance Center paid the vendor within the time frame required to receive the discount, but the Finance Center did not take advantage of the discount when processing the payment. Finance Center officials stated that the discount was not taken because of personnel oversight. We also found two other disbursements where SILC did not identify the discount terms offered by the vendor, so the discounts were not taken. These occurred for two reasons. First, SILC does not have documented policies and procedures for reasonably assuring that available discounts are identified. Second, although the Finance Center has established procedures for the appropriate processing of available vendor discounts, it does not have a mechanism to reasonably assure that its procedures are followed. SILC’s lack of policies and procedures for identifying discounts and the Finance Center’s failure to follow documented procedures increase the risk that they will not take advantage of opportunities to save the government money.

In addition, while testing the stratified random sample, we found that SILC made an overpayment to a vendor for one disbursement. Specifically, a vendor presented to a contracting officer in SILC an invoice charge for Pollution Control Services that contained a math error; the charge had a corresponding administrative fee of 10 percent. The contracting officer identified the math error and correctly reduced the charge but failed to also reduce the related administrative fee. We found that SILC does not have documented policies and procedures to reasonably assure that invoice amounts are correctly calculated. The lack of such policies and procedures increases the risk of overpayments.

According to Standards for Internal Control in the Federal Government, control activities should be effective and efficient in accomplishing the agency’s control objectives. Specifically, policies and procedures should be clearly documented to reasonably assure stewardship of government resources. According to the Commercial Payables Branch, Commercial Payments Section Contracts Desk Guide, the Department of Homeland Security advises that any component that has earned a discount must take the discount unless it is not advantageous to do so. In addition, according to Office of Management and Budget (OMB) Circular No. A-123, an improper payment is any payment that should not have been made or that was made in an incorrect amount; incorrect amounts are overpayments or underpayments made to eligible recipients. The failure to take the discount when the payment was made early and the overpayment to the vendor for the incorrect administrative fee meet the definition of improper payments.

In January 2013, the Coast Guard and EPA updated their existing June 2012 memorandum of understanding for use of the Fund in an appendix to the memorandum for the provision of cash advances. Per the appendix, EPA requests a cash advance based upon paid and pending invoices for oil removal activities, and NPFC validates the documentation supporting the requested amount. The advance is then approved and forwarded to the Finance Center for payment. Subsequently, the appendix requires EPA to submit oil removal cost documentation to an NPFC case officer for review and approval.
The information is then forwarded to the Finance Center, where the amount is taken against the advance. According to NPFC officials, EPA is the only agency to which NPFC advances funds, and EPA requested this process change because of cash flow constraints.

Our review found that NPFC does not maintain the cost documentation that supports the EPA cash advances. In addition, NPFC does not verify amounts supporting the requested cash advance. According to NPFC officials, the cost documentation, which is a summary of expenses, is not maintained because of the large size of EPA’s submitted files, and NPFC does not verify the amounts contained in the summary of expenses because NPFC does not obtain detailed support, such as invoices, for the EPA summary. NPFC has not developed policies and procedures for providing cash advances from the Fund that include tracking and maintaining supporting documentation for the amounts advanced, reconciling amounts advanced to amounts expensed, and providing approval to the Finance Center to liquidate the advances. Without such policies and procedures to reasonably assure that the key control activities over cash advances are performed, the risk of improperly processing transactions, such as overpaying EPA, is increased.

Standards for Internal Control in the Federal Government states that control activities should be effective and efficient in accomplishing the agency’s control objectives. Specifically, internal controls need to be clearly documented and properly managed and maintained to reasonably assure relevance. Further, all documentation should be properly managed and maintained, and the documentation should be readily available for examination.

NPFC has established a system of internal controls over designation and billing of responsible parties for damage claim and oil removal disbursements that are over $500,000. Through our testing of internal controls for the designation and billing of disbursements, we did not identify any deficiencies with the design and implementation of internal controls in this area.

We identified and tested the key internal controls for the 27 selected unreimbursed high dollar damage claim disbursements for fiscal years 2011 through 2013. We tested and confirmed that when a responsible party was found liable, NPFC case officers entered the responsible party’s identifying information, including a valid name and address, into NPFC’s case management system. We determined that the internal controls provided reasonable assurance that responsible parties were designated and billed, as appropriate, for damage claim disbursements that are over $500,000. We also identified and tested key internal controls for the 61 selected unreimbursed high dollar oil removal disbursements for fiscal years 2011 through 2013. We tested and confirmed that when a responsible party was found liable, the FOSC forwarded the responsible party’s identifying information to NPFC, and that NPFC case officers entered the responsible party’s identifying information, including a valid name and address, into NPFC’s case management system. We determined that the internal controls provided reasonable assurance that the responsible parties were designated and billed, as appropriate, for the oil removal disbursements that are over $500,000.

NPFC disbursed in total over $360 million from the Fund for damage claim and oil removal costs in fiscal years 2011 through 2013.
During this same period, NPFC billed in total $272 million to responsible parties and collected in total $39 million. For certain incidents, the Fund was not fully reimbursed, which was appropriate in the circumstances. We found that NPFC was unable to bill for a large percentage of high dollar claim disbursements because either the responsible parties had reached their limit of liability or the spills were classified as mystery spills. In addition to the collections from billed responsible parties, the Fund is primarily funded by an 8-cent per-barrel tax, which increases to 9 cents a barrel in 2017 and expires on December 31, 2017. Because the Fund may not be fully reimbursed for damage claim and oil removal costs, the per-barrel excise tax is the only consistent source of funding for the Fund, as discussed later in this report. Although the balance of the Fund was $4.6 billion as of September 30, 2014, the scheduled expiration of this funding source, the potential for future spills, and the costs of the associated cleanups contribute to uncertainty regarding the sufficiency of the Fund’s funding sources in the future.

For fiscal years 2011 through 2013, $146 million was disbursed from the Fund in total for damage claims. There were 409 damage claim disbursement transactions during this period. In addition, for fiscal years 2011 through 2013, NPFC disbursed approximately $214 million in total for oil removal. There were 11,188 oil removal disbursement transactions during this period. (See table 2.)

As shown in table 3, NPFC sent bills to responsible parties totaling $272 million and collected $39 million in total for fiscal years 2011 through 2013 for both damage claim and oil removal disbursements. It is important to note that collections are for the period indicated and are not necessarily tied to billings made in the same period. Because of the unique nature of each spill, cycle times vary for when amounts are disbursed, when bills are sent, and when payments are collected. For example, in March 2011 an oil spill occurred offshore of Louisiana that was determined to have been caused by a company plugging subsea wells. NPFC paid expenses associated with the spill from April 2011 through May 2012 and then sent bills to the responsible party for the expenses in May and June 2012. The responsible party made multiple payments from August through November 2012. As such, the cycle time for this oil spill was 20 months and spanned 3 different fiscal years. According to NPFC staff, typically the larger the spill, the longer the cycle. The high dollar damage claim disbursements we tested for fiscal years 2011 through 2013 included damage claim disbursements for spills that occurred as early as 2004.

We identified 95 oil removal disbursements and 27 damage claim disbursements during the period that exceeded $500,000. As discussed previously, of these, 61 oil removal disbursements and 27 damage claim disbursements were not fully reimbursed. These disbursements ranged from a damage claim of $505,084 for the F/V Milky Way sinking, which occurred in Washington State, to a damage claim of $20,257,121 for the T/V Athos I oil spill, which affected Delaware, New Jersey, and Pennsylvania. Certain spills had multiple disbursements over $500,000 during this period.
For instance, the M/V Jireh oil spill, which was located in Puerto Rico, resulted in 14 oil removal disbursements over $500,000 in fiscal years 2011 through 2013, and the M/V Selendang Ayu oil spill, which affected Alaska, resulted in 13 damage claim disbursements over $500,000. More information on the individual disbursements over $500,000 that were not fully reimbursed is presented in appendixes II and III. We analyzed unreimbursed high dollar disbursements for fiscal years 2011 through 2013, which totaled $201 million, and determined that 79 percent, or $158 million, will most likely not be reimbursed because the responsible party had reached its limit of liability or the spill source could not be determined. The following examples illustrate circumstances in which the Fund will not be reimbursed for all expenses it incurs because (1) the responsible party reached its legal liability limit on paying for damage claims or oil removal costs, (2) not all elements of liability were established, or (3) a responsible party could not be determined. M/V Selendang Ayu. On December 8, 2004, the M/V Selendang Ayu cargo ship ran aground off Unalaska Island in western Alaska’s Aleutian Islands after its engine failed, resulting in a large oil spill. The company (responsible party) that owned the ship assumed responsibility for the spill and worked with the Coast Guard and state of Alaska to address the spill, including directly paying the oil removal costs and damage claims associated with the spill. The company was not found to be grossly negligent, so its liability under OPA was capped at $24 million. The response costs and damage claims paid by the company totaled $149 million. The company filed a damage claim request with NPFC for approximately $125 million, which represented the $149 million in total costs and damages minus the $24 million liability cap. Of these damage claims, $88 million was found compensable and reimbursed from the Fund in fiscal years 2012 through 2013. T/V Athos I. The T/V Athos I departed Venezuela for the Citgo Asphalt Refinery in Paulsboro, New Jersey, on November 20, 2004, carrying approximately 13 million gallons of crude oil. On November 26, 2004, tug operators assisting the T/V Athos I with docking at the refinery notified the Coast Guard that the tanker was leaking oil into the Delaware River. The vessel had struck several submerged objects while maneuvering to its berth, including an 18,000-pound anchor. The Coast Guard determined that the anchor punctured the vessel’s number seven center cargo and port ballast tanks, allowing oil to spill into the river. The Coast Guard estimated that 263,371 gallons had spilled into the Delaware River. As of September 30, 2013, oil removal costs disbursed by the Fund were $47.6 million and damage claims disbursed by the Fund were $162.7 million. The Coast Guard determined in 2006 that the responsible party had reached its liability limit of $45.5 million under OPA, which means the responsible party had paid oil removal claims, damage claims, or both that equaled or exceeded its liability limit; therefore, the Fund will not be reimbursed for any further expenses. Evolution Petrol Corp. An oily sheen was discovered on a creek in Louisiana on August 1, 2007. The EPA FOSC determined on or about August 2, 2007, that the oily sheen was from Evolution Petrol Corp saltwater tanks.
EPA explained that if Evolution did not take responsibility, EPA would hire contractors for cleanup, which could be more expensive, and that Evolution could be subject to penalties of up to $32,500 per day. Evolution chose to accept responsibility and hired a contractor to handle the cleanup. Evolution received over $777,000 in reimbursement for the cleanup from its insurance company. The insurance company presented a claim of approximately $715,000 to the Fund for reimbursement of the oil removal costs based on its analysis that Evolution was not responsible for the spill. NPFC determined, based on its analysis and the evidence provided, that the oily sheen did not originate at the Evolution facility. The Fund paid the insurance company approximately $696,000 in fiscal year 2011, and because no liability was established, the Fund will not be reimbursed for its expenses. S.S. Montebello. On December 23, 1941, the S.S. Montebello was torpedoed by a Japanese submarine off the coast of Cambria, California, sinking the 8,272-ton tanker along with the 3 million gallons of crude oil it was carrying, some or all of which might still have been in its holds. On December 2, 2010, the FOSC determined that there was a substantial threat of a discharge of oil, in part because of the reported volume of oil carried by the vessel and the potential damage that a release might have on the marine ecosystem in the surrounding area. In September 2011, the Coast Guard awarded a contract to Global Diving and Salvage, Inc. to conduct a survey to determine the intensity and immediacy of the threat and to develop further courses of action for removal if necessary. The survey was conducted in October 2011, at which time a Unified Command led by the Coast Guard and the California Department of Fish and Game’s Office of Spill Prevention and Response assessed the cargo and fuel tanks of the sunken ship. The Unified Command determined that there was no substantial oil threat from the S.S. Montebello to California waters and shorelines. The cost of the project was $3.2 million. These removal costs were not billed because there was no responsible party. Mystery spills. The FOSC is unable to determine the source of these spills, so a responsible party cannot be identified. During fiscal years 2011 through 2013, NPFC made about $16 million ($2.3 million in damage claims and $13.9 million in oil removal) in disbursements for mystery spills. These disbursements are for mystery spills that occurred during fiscal years 2004 through 2013. Uncertainties exist regarding the primary revenue source of the Fund, an 8-cent per-barrel tax on petroleum products. This tax is set to expire in 2017. If this revenue source expires, future oil spill response could be affected and risk to the federal government could increase. As discussed above, the Fund at times is unable to bill and collect reimbursements from responsible parties. The Fund enables the Coast Guard and EPA to respond to oil spills because it can be used to cover expenses associated with mitigating the threat of an oil spill as well as the costs of containment, countermeasures, and cleanup and disposal activities. During fiscal year 2014, NPFC reported 408 oil spills; the Coast Guard and EPA responded to 324 of these. In the remaining 84 cases, the claimants sustained damages and directly submitted claims to NPFC. The per-barrel tax was increased and extended by a provision of the Energy Improvement and Extension Act of 2008, through December 31, 2017.
The act also eliminated a restriction on the growth of the balance of the Fund beyond $2.7 billion. As shown in figure 4, the Fund’s revenue sources include the per-barrel tax, interest earned on the Fund’s investments in Department of the Treasury securities, fines and penalties paid pursuant to various statutes, and cost reimbursements from responsible parties for costs of removal and damages. On average, the per-barrel tax accounted for 60 percent of the Fund’s total revenue for fiscal years 2011 through 2013. The Fund’s balance has increased over the years, as shown in figure 5. The significant increase in the balance from fiscal years 2012 through 2013 is primarily the result of two judgments assessing fines against BP PLC and Transocean Ltd. for the Deepwater Horizon oil spill that collectively totaled approximately $1.3 billion. Although the Fund’s balance was about $4.6 billion as of September 30, 2014, the potential for large spills exists, and if a responsible party is unwilling, unable, or not required to pay, the Fund will be needed to pay for the cleanup, including removal costs and damage claims. As previously discussed, the costs and claims from oil spills can continue for a number of years depending on the circumstances, and a significant amount of disbursements from the Fund are not fully reimbursed for various reasons. The President’s fiscal year 2016 budget request included a proposal to increase the excise tax on each barrel of oil produced domestically or imported by 1 cent, to a total of 9 cents per barrel for January 1, 2016, through December 31, 2016, and by another cent, to a total of 10 cents per barrel, starting January 1, 2017. The President’s budget request did not include an extension of the tax past December 31, 2017. Bills have been introduced in recent sessions of Congress that included provisions to extend the excise tax beyond 2017. Without such an extension, the primary source of revenue for the Fund will cease to exist after 2017. Hundreds of oil spills occur annually on U.S. land and in U.S. coastal waters. NPFC has an opportunity to improve its internal controls for processing oil removal disbursements by developing and updating policies and procedures. Improving its internal controls would contribute to reasonably assuring that the Fund is used efficiently and effectively to pay for oil spill cleanup costs and damage claims. Because the Fund has disbursed more funding than it has been able to recover, its primary source of funding has been the per-barrel oil tax. However, the per-barrel oil tax is set to expire in 2017, creating uncertainty with regard to future funding. Given this, it will be important for Congress to determine what mechanism it would like to rely on to provide sustained funding for the Fund. Congress should consider the options for sustaining the Oil Spill Liability Trust Fund as well as the optimal level of funding to be maintained in the Fund, in light of the expiration of the Fund’s per-barrel tax funding source in 2017. We recommend that the Secretary of Homeland Security direct the Commandant of the Coast Guard to take the following four actions to improve the design and implementation of NPFC’s internal controls over Fund disbursements: (1) develop and implement a plan to reasonably assure that NPFC staff comply with invoice certification policies and procedures;
(2) develop and implement policies and procedures for reasonably assuring consistent supervisory oversight of the filing process related to transaction documentation; (3) update NPFC’s high visibility oil spill policy to reasonably assure that it reflects management’s current practice of weekly meetings to identify and discuss high visibility oil spills; and (4) develop policies and procedures for the processing of cash advances from the Fund, covering processes for (a) tracking the amounts advanced, (b) reconciling amounts advanced to amounts spent, (c) providing approval to the Finance Center to liquidate an advance, and (d) maintaining supporting documentation. We also recommend that the Secretary of Homeland Security direct the Commandant of the Coast Guard to take the following two actions to improve the design and implementation of the Shore Infrastructure Logistics Center’s (SILC) internal controls over Fund disbursements: (1) develop and implement policies and procedures related to identifying available vendor discounts, and (2) develop and implement policies and procedures to reasonably assure that all amounts presented on an invoice are calculated correctly. In addition, we recommend that the Secretary of Homeland Security direct the Commandant of the Coast Guard to develop and implement a mechanism for the Finance Center to reasonably assure that its procedures for processing available discounts related to Fund disbursements are followed. We provided the Department of Homeland Security with a draft of this report for review and comment. In written comments, reprinted in appendix IV, the Department of Homeland Security concurred with our recommendations and described actions taken or planned to address each recommendation. The Department of Homeland Security also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees and the Secretary of Homeland Security. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2623 or davisbh@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. The Coast Guard Authorization Act of 2010 included a provision for GAO to conduct an audit of Oil Spill Liability Trust Fund (Fund) disbursements. This report examines the extent to which (1) the National Pollution Funds Center (NPFC) has designed and implemented internal controls over damage claim and oil removal disbursements to reasonably assure that amounts are appropriately disbursed from the Fund; (2) NPFC has designed and implemented internal controls to reasonably assure that responsible parties are designated and billed, as appropriate, for disbursements from the Fund that are over $500,000; and (3) the Fund was reimbursed for damage claim and oil removal costs in fiscal years 2011 through 2013. We also report on the Fund’s reliance on the per-barrel oil tax, which is its primary source of revenue. We excluded information about the Deepwater Horizon oil spill, as we had previously conducted work on that specific spill. In addition, it is the only spill of national significance to occur since the Oil Pollution Act of 1990 (OPA) was passed, and its size and cost would have skewed our analysis.
We obtained NPFC’s damage claim and oil removal disbursement data for fiscal years 2011 through 2013 and performed procedures to determine whether the data were reliable enough for our purposes. Specifically, we interviewed knowledgeable agency officials about the quality control procedures the agency had in place when collecting and creating the data and electronically tested the data for unusual items. Based on the results of these procedures, we determined that the data were reliable enough for our purposes. We used these data to identify 27 damage claim disbursements and 95 oil removal disbursements for fiscal years 2011 through 2013 that exceeded $500,000. Of these 122 disbursements, we selected and tested all disbursements that were not fully reimbursed, resulting in a total of 88 disbursements for both damage claim and oil removal disbursements, as further explained below. To determine whether the disbursements were reimbursed, we interviewed NPFC staff about the quality control procedures the agency had in place when collecting and creating the data and reviewed the billings to the responsible parties and the subsequent collections. Based on our interviews with NPFC staff and our analysis of the billing and collection data, we determined that these data were reliable enough for our purposes. NPFC sends the responsible party an itemized bill containing direct and indirect costs. The itemized bill typically contains multiple individual disbursements. The responsible party may not always pay 100 percent of the amount billed, but any collection is applied to the total amount billed. Because amounts collected are not applied to individual items on the bill, such as specific disbursements, we were unable to determine whether individual disbursements were fully reimbursed if a bill was not paid in full. Our analysis found that 32 of the 122 disbursements were fully reimbursed by the responsible party. Additionally, 2 of the 122 disbursements were not fully reimbursed because the responsible party filed for bankruptcy and settled with NPFC for a lesser amount; these were excluded from our review, as NPFC recognized these 2 disbursements as paid in full. The remaining 88 items consisted of 27 damage claim disbursements and 61 oil removal disbursements that were over $500,000 (high dollar disbursements) and not fully reimbursed by the responsible party, guarantor(s), or both for fiscal years 2011 through 2013. These 88 disbursements were either fully uncollected or partially reimbursed (see app. II and app. III). The selected 27 damage claim disbursements totaled $136.1 million, which represented 93 percent of total damage claim dollars disbursed during fiscal years 2011 through 2013. The selected 61 oil removal disbursements totaled $65.4 million, which accounted for 30 percent of total oil removal dollars disbursed during fiscal years 2011 through 2013 (see tables 4 and 5). Because of the low dollar coverage for oil removal disbursements, we also selected a stratified random sample of disbursements less than or equal to $500,000. This generalizable sample of 200 oil removal disbursements was drawn from the population of 11,093 oil removal disbursements for fiscal years 2011 through 2013. (See table 6.)
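To illustrate the mechanics of this selection step, the following sketch shows one way a stratified random sample could be drawn from a disbursement population. The stratum boundaries, per-stratum sample sizes, and dollar amounts below are invented for illustration (the actual sample design appears in table 6, which is not reproduced here); only the population size of 11,093 and the total sample size of 200 reflect the methodology described above.

```python
import random

random.seed(42)  # fixed seed so the illustration is repeatable

# Simulated population of 11,093 oil removal disbursements of
# $500,000 or less; the dollar amounts are fabricated.
population = [
    {"id": i, "amount": round(random.uniform(100, 500_000), 2)}
    for i in range(11_093)
]

# Hypothetical stratum boundaries and per-stratum sample sizes
# (60 + 70 + 70 = 200); the actual design differs.
design = [
    (0, 50_000, 60),          # (lower bound, upper bound, sample size)
    (50_000, 250_000, 70),
    (250_000, 500_000, 70),
]

sample = []
for lower, upper, size in design:
    stratum = [d for d in population if lower < d["amount"] <= upper]
    # Simple random selection without replacement within each stratum;
    # stratifying by dollar range is what allows results from the
    # sample to be generalized to the full population.
    sample.extend(random.sample(stratum, min(size, len(stratum))))

print(f"Selected {len(sample)} of {len(population):,} disbursements.")
```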
To determine whether the design of existing internal controls over the damage claim and oil removal processes reasonably assures that amounts are appropriately disbursed from the Fund, we (1) reviewed OPA and other federal laws and regulations to obtain an understanding of allowed costs, (2) reviewed Standards for Internal Control in the Federal Government and evaluated the policies and procedures NPFC has in place for damage claim and oil removal disbursements, (3) evaluated potential risks and the effectiveness of NPFC’s controls to mitigate those risks, (4) interviewed NPFC officials and staff, and (5) performed walk-throughs of the damage claim and oil removal processes. Based on our review of potential risks and NPFC’s documented controls, we identified key controls for the damage claim and oil removal disbursement processes and tested the implementation of those controls for the damage claim and oil removal disbursements described above and for the statistical sample of oil removal disbursements less than or equal to $500,000. The key controls include verifying that transactions are properly authorized, processed for payment, and recorded. For damage claims, the testing of key controls included reviewing controls to reasonably assure that claimants presented their claims to the responsible party before submitting them to NPFC; reviewing controls related to the processing of claim reconsiderations and reviewing controls related to NPFC’s coordination efforts with the Federal On-Scene Coordinators (FOSC) in adjudicating the claims and identifying responsible parties; reviewing controls to reasonably assure that, where applicable, responsible parties were identified and recorded; and reviewing controls to reasonably assure that claim determinations were appropriately reviewed and approved by an appropriate individual, payments were authorized, and each claimant signed a release letter accepting the payment as full and final within the allowable time frame. For oil removal costs, the testing of key controls included reviewing NPFC’s internal controls related to the review and authorization of oil removal activities; reviewing controls around the FOSC’s certification of the appropriateness of oil removal activities; reviewing NPFC’s coordination efforts with the FOSCs in identifying responsible parties; and reviewing controls to reasonably assure that, where applicable, responsible parties were identified and recorded and that payments for removal cost activities were appropriately authorized and recorded in the agency’s accounting records. We analyzed the results of these tests to determine whether the internal controls in place were effective. To assess the design of internal controls for reasonably assuring that responsible parties were designated and billed, as appropriate, for all disbursements, we (1) reviewed NPFC’s policies and procedures for designating and billing the responsible parties, (2) evaluated potential risks and the effectiveness of NPFC’s controls to mitigate those risks, (3) interviewed NPFC officials and staff, and (4) obtained billings and receipt data for fiscal years 2011 through 2014. To determine whether these data were reliable enough for our purposes, we interviewed knowledgeable agency officials about the quality controls associated with the collection of these data. Based on the results of these procedures, we concluded that these data were reliable enough for our purposes.
Based on our review of potential risks and NPFC’s documented controls, we identified key controls for the designation and billing processes and tested the implementation of these controls for the selected 27 high dollar damage claim disbursements and 61 high dollar oil removal disbursements. We analyzed the results of these tests to determine whether the controls in place were effective. To understand the extent to which the Fund is reimbursed for damage claim and oil removal costs both under and over $500,000, we interviewed NPFC officials about the billing and reimbursement processes. We analyzed disbursement, billing, and collection data obtained from NPFC. We also identified examples of circumstances in which disbursements from the Fund are not eligible for reimbursement. We conducted this performance audit from February 2014 to September 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Tables 7, 8, and 9 show damage claim disbursements over $500,000 for fiscal years 2011 through 2013 that were not fully reimbursed to the Oil Spill Liability Trust Fund as of March 31, 2015. As of that date, two items had been partially collected, as noted in tables 7 and 8. Tables 10, 11, and 12 show oil removal disbursements over $500,000 for fiscal years 2011 through 2013 that were not fully reimbursed to the Oil Spill Liability Trust Fund as of March 31, 2015. In addition to the contact named above, Kim McGatlin (Assistant Director), Teressa Gardner (Analyst-in-Charge), Justin Fisher, Sarah Florino, Wilfred B. Holloway (Assistant Director), Pierre Kamga, Elizabeth Sodipo, and Andrew Stephens made key contributions to this report.
The Coast Guard Authorization Act of 2010 included a provision for GAO to examine NPFC. This report addresses the extent to which (1) NPFC has designed and implemented internal controls over damage claim and oil removal disbursements to reasonably assure that amounts are appropriately disbursed from the Fund; (2) NPFC has designed and implemented internal controls to reasonably assure that responsible parties are designated and billed, as appropriate, for disbursements from the Fund that are over $500,000; and (3) the Fund was reimbursed for damage claim and oil removal costs in fiscal years 2011 through 2013. GAO also reviewed the Fund's primary source of revenue. GAO obtained and analyzed data on damage claim and oil removal disbursements from fiscal years 2011 through 2013. GAO also obtained and analyzed data on billings and collections for fiscal years 2011 through August 2014 in order to determine which disbursements had been billed and paid. GAO reviewed relevant policies and procedures and interviewed officials and staff at the Coast Guard and EPA. The U.S. Coast Guard's National Pollution Funds Center (NPFC) has responsibility for disbursements from the Oil Spill Liability Trust Fund (Fund). The Fund enables the Coast Guard and the Environmental Protection Agency (EPA) to respond to oil spills. The Oil Pollution Act of 1990 (OPA) authorizes the Fund to pay for certain damage claims and oil removal costs. The federal government may subsequently seek reimbursement of these costs from responsible parties. Damage claims. GAO found that for fiscal years 2011 through 2013, internal controls were designed and implemented to reasonably assure that damage claim expenses were appropriately disbursed from the Fund. Oil removal. GAO identified several internal control deficiencies, which demonstrated that NPFC was unable to reasonably assure that oil removal disbursements were appropriately disbursed from the Fund. GAO's statistical tests of oil removal disbursements less than or equal to $500,000 identified design and implementation control deficiencies involving invoices that lacked required certifications, high visibility spills that were not identified, and missing supporting documentation for some costs. GAO also identified other issues, including that NPFC lacked policies and procedures for tracking and reconciling cash advances to EPA. NPFC has established a system of internal controls for the designation and billing, as appropriate, of responsible parties. For fiscal years 2011 through 2013, GAO determined that, for amounts over $500,000, NPFC designed and implemented internal controls to provide reasonable assurance that responsible parties were designated and billed, as appropriate, for damage claim and oil removal disbursements. For fiscal years 2011 through 2013, the Fund disbursed over $360 million, not including disbursements related to the Deepwater Horizon oil spill. During the period, not including the Deepwater Horizon oil spill, NPFC billed $272 million to responsible parties and collected $39 million. GAO found that NPFC was unable to bill for a large percentage of the damage claim and oil removal disbursements over $500,000 because the responsible party had reached its limit of liability, not all elements of liability were established, or the source of the spill could not be identified. OPA authorizes use of the Fund for immediate response costs and for costs when responsible parties cannot be identified or cannot pay.
GAO analyzed the Fund's sources of income and found that the 8-cent per-barrel tax on petroleum products is relied on as the primary consistent source of funding because the Fund has disbursed more funding than it has been able to recover. This is because the Fund is not reimbursed for certain damage claim and oil removal costs, as noted above. On average, the per-barrel tax accounted for 60 percent of the Fund's total revenue for fiscal years 2011 through 2013. The per-barrel tax is set to expire at the end of 2017, creating uncertainty with regard to future revenue sources for the Fund. As of September 30, 2014, the Fund's balance was about $4.6 billion, which reflects approximately $1.3 billion in fines from the Deepwater Horizon oil spill. However, these fines are not a consistent funding source. Congress should consider options for sustaining the Fund, as well as the optimal level of funding, to address uncertainty regarding future funding. In addition, GAO is making several recommendations to improve the U.S. Coast Guard's internal controls for oil removal disbursements from the Fund. The Department of Homeland Security concurred with the recommendations and described actions taken or planned for each recommendation.
Our proactive testing found ineffective HUBZone program eligibility controls, exposing the federal government to fraud and abuse. In a related report and testimony released concurrently with this testimony, we reported that SBA generally did not verify the data entered by firms in its online application system. We found that SBA was therefore vulnerable to certifying firms based on fraudulent application information. Our use of bogus firms, fictitious employees, and fabricated explanations and documents to obtain HUBZone certification demonstrated the ease with which HUBZone certification could be obtained by providing fraudulent information to SBA’s online application system. In all four instances, we successfully obtained HUBZone certification from SBA for the bogus firms represented by our applications. See figure 1 for an example of one of the acceptance letters we received. Although SBA requested documentation to support one of our applications, the agency failed to recognize that the information we provided in all four applications represented bogus firms that did not actually meet HUBZone requirements. For instance, the principal office addresses we used included a virtual office suite from which we leased part-time access to office space and mail delivery services for $250 a month, two different retail postal service centers from which we leased mailboxes for less than $24 a month, and a Starbucks coffee store. An Internet search on any of the addresses we provided would have raised “red flags” and should have led to further investigation by SBA, such as a site visit, to determine whether the principal office address met program eligibility requirements. Because HUBZone certification provides an opening to billions of dollars in federal contracts, approval of ineligible firms for participation in the program exposes the federal government to contracting fraud and abuse and, moreover, can result in the exclusion of legitimate HUBZone firms from obtaining government contracts. We provide specific details regarding each application below. Fictitious Application One: Our investigators submitted this fictitious application and received HUBZone certification 3 weeks later. To support the application, we leased, at a cost of $250 a month, virtual office services from an office suite located in a HUBZone and gave this address as our principal office location. Specifically, the terms of the lease allowed us to schedule use of an office space up to 16 hours per month and to have mail delivered to the suite. Our HUBZone application also indicated that our bogus firm employed two individuals, one of whom resided in a HUBZone. Two business days after submitting the application, an SBA official emailed us requesting a copy of the lease for our principal office location and proof of residency for our employee. We created the documentation using publicly available hardware and software and faxed copies to SBA to comply with the request. SBA then requested additional supporting documentation related to utilities and cancelled checks. After we fabricated this documentation and provided it to SBA, no further documentation was requested before SBA certified our bogus firm. Fictitious Application Two: Four weeks after our investigators submitted this fictitious application, SBA certified the bogus firm to participate in the HUBZone program.
For this bogus firm, our “principal office” was a mailbox located in a HUBZone that our investigators leased from a retail postal service provider for less than $24 a month. The application noted that our bogus firm had nine employees, four of whom lived in a HUBZone area. SBA requested a clarification regarding a discrepancy in the application information, but no further contact was made before we received our HUBZone certification. Fictitious Application Three: Our investigators completed this fictitious application and received HUBZone certification 2 weeks later. For the principal office address, our investigators used a Starbucks coffee store located in a HUBZone. In addition, our investigators indicated that our bogus firm employed two individuals, one of whom resided in a HUBZone area. SBA did not request any supporting documentation or explanations for this bogus firm prior to granting HUBZone certification. Fictitious Application Four: Within 5 weeks of submitting this fictitious application, SBA certified our bogus firm. As with fictitious application two, our investigators used the address for a mailbox leased from a retail postal service provider located in a HUBZone for the principal office. Our monthly rental cost for the “principal office” was less than $10 per month. Our application indicated that two of the three employees who worked for the bogus firm lived in a HUBZone. SBA requested a clarification regarding a small discrepancy in the application information, but no further contact was made before we received the HUBZone certification. We were also able to identify 10 firms from the Washington, D.C., metro area that were participating in the HUBZone program even though they clearly did not meet eligibility requirements. Since 2006, federal agencies have obligated a total of more than $105 million to these firms for performance as the prime contractor on federal contracts. Of the 10 firms, 6 met neither the principal office nor the employee residency requirement, while 4 met the principal office requirement but significantly failed the employee residency requirement. We also found other HUBZone firms that use virtual office suites to fulfill SBA’s principal office requirement. We investigated two of these virtual office suites and identified examples of firms that could not possibly meet principal office requirements given the nature of their leases. According to HUBZone regulations, persons or firms are subject to criminal penalties for knowingly making false statements or misrepresentations in connection with the HUBZone program, including failure to correct “continuing representations” that are no longer true. During the application process, applicants are not only reminded of the program requirements but are required to agree to the statement that anyone failing to correct “continuing representations” shall be subject to fines, imprisonment, and penalties. Further, the Federal Acquisition Regulation (FAR) requires all prospective contractors to update ORCA—the government’s Online Representations and Certifications Application—which includes certifying whether the firm is currently a HUBZone firm and that there have been “no material changes in ownership and control, principal office, or HUBZone employee percentage since it was certified by the SBA.” However, we found that all 10 of these case-study firms continued to represent themselves to SBA, ORCA, GAO, and the general public as eligible to participate in the HUBZone program.
Because the 10 case study examples clearly are not eligible, we consider each firm’s continued representation indicative of fraud. We referred the 10 firms to SBA OIG for further investigation. We determined that 10 case study examples from the Washington, D.C., metropolitan area failed to meet the program’s requirements. Specifically, we found that 6 of the 10 failed both HUBZone requirements: to operate a principal office in a HUBZone and to ensure that 35 percent or more of employees resided in a HUBZone. Our review of payroll records also found that the remaining four firms failed to meet the 35 percent HUBZone employee residency requirement by at least 15 percent. In addition, all 10 of the case study examples continued to represent themselves to SBA, ORCA, GAO, and the general public as HUBZone program–eligible. In March 2008, one HUBZone firm self-certified in ORCA that it met HUBZone requirements even though we had spoken with its owner about 3 weeks earlier regarding her firm’s noncompliance with both the principal office and HUBZone residency requirements. Table 1 highlights the 10 case-study firms we investigated. Case 1: Our investigation clearly showed that this firm represented itself as HUBZone-eligible even though it did not meet HUBZone requirements at the time of our investigation. This firm, which provided business management, engineering, information technology, logistics, and technical support services, self-certified in July 2007 in ORCA that it was a HUBZone firm and that there had been “no material changes in ownership and control, principal office, or HUBZone employee percentage since it was certified by the SBA.” We also interviewed the president in March 2008, and she claimed that her firm met the HUBZone requirements. However, the firm failed the principal office requirement. Our site visits to the address identified by the firm as its principal office found that it was a small rented room on the upper floor of a dentist’s office where no more than two people could work comfortably. No employees were present, and the only business equipment in the rented room was a computer and filing cabinet. The building owner stated that the president of the firm used to conduct some business from the office, but that nobody had worked there “for some time.” Moreover, the president indicated that instead of paying rent at the HUBZone location, she provided accounting services to the owner in exchange for use of the space at no cost. See figure 2 for a picture of the building the firm claimed as its principal office (arrow indicates where the office is located). Further investigation revealed that the firm listed its real principal office (called the firm’s “headquarters” on its Web site) at an address in McLean, Virginia. Besides not being in a HUBZone, McLean, Virginia, is in one of the wealthiest jurisdictions in the United States. Our site visit to this second location revealed that the majority of the firm’s officers, as well as about half of the qualifying employees, worked there, indicating that this location was the firm’s actual principal office. When we interviewed the president, she claimed that the McLean, Virginia, office was maintained “only for appearance.” See figure 3 for a picture of the McLean, Virginia, building where the firm rented office space. Based on our review of payroll documents we received directly from the firm, we also determined that the firm failed the 35 percent HUBZone residency requirement.
The payroll documents indicated that only 15 of the firm’s 72 employees (21 percent) lived in a HUBZone as of December 2007. We also found that in January 2007, during SBA’s HUBZone recertification process, the president self-certified that 38 percent of the firm’s employees lived in a HUBZone. However, the payroll documents received directly from the firm showed that only 24 percent of the firm’s employees lived in a HUBZone at that time. In 2006, the Department of the Army, National Guard Bureau, awarded a HUBZone set-aside contract with a $40 million ceiling to this firm based on its HUBZone status. Although only $3.9 million has been obligated on the contract to date, because the firm remains HUBZone-certified, it can continue to receive payments up to the $40 million ceiling based on its HUBZone status until 2011. We referred this firm to SBA OIG for further investigation. Case 2: Our investigation determined that this firm, a general contractor specializing in roofing and sheet metal, continued to represent itself as HUBZone-eligible even though it did not meet HUBZone requirements. Although the vice president self-certified to the firm’s HUBZone status in ORCA in September 2007, he admitted during our interview in April 2008 that the firm did not meet HUBZone requirements. Nonetheless, after our interview, the firm continued to actively represent itself as a HUBZone firm—including a message in large letters on its Web site and business cards declaring that the firm was “HUBZone certified.” The firm’s vice president self-certified during SBA’s HUBZone certification process in March 2007 that, as shown in figure 4, the firm’s principal office was one-half of a residential duplex in Landover, Maryland. We visited this location during normal business hours and found no employees present. Our investigative work also found that the vice president owned another firm, which did not participate in the HUBZone program. A visit to this firm, which was located in Capitol Heights, Maryland—not in a HUBZone—revealed that both it and the HUBZone firm operated out of the same location. Further, payroll documents we received from the HUBZone firm indicated that it had 34 employees but that only 4 employees (or 12 percent) lived in a HUBZone as of December 2007. Based on our analysis of FPDS-NG data, between fiscal years 2006 and 2007 federal agencies obligated about $12.2 million for payment to the firm. Of this, about $4 million in HUBZone contracts were obligated by the Department of the Air Force. Because this firm clearly did not meet either the principal office or the employee HUBZone requirements at the time of our investigation but continued to represent itself as HUBZone-certified, we referred it to SBA OIG for further investigation. Case 3: Our investigation demonstrated that this firm continued to represent itself as HUBZone-eligible while failing to meet HUBZone requirements. This firm, which specializes in the design and installation of fire alarm systems, self-certified in May 2007 in ORCA that it was a HUBZone firm and that there had been “no material changes in ownership and control, principal office, or HUBZone employee percentage since it was certified by the SBA.” However, when we interviewed the president in April 2008, he acknowledged that the firm “technically” did not meet the principal office requirement. For its HUBZone certification in April 2006, the firm identified an address in Rockville, Maryland, located in a HUBZone, as its principal office.
We visited this location during normal business hours and found the address was for an office suite that provided virtual office services. According to the lease between the HUBZone firm and the office suite’s management, the firm did not rent office space but paid $325 a month to use a conference room on a scheduled basis for up to 4 hours each month. Absent additional services provided by the virtual office suite, it would be impossible for this firm to meet the principal office requirement under this lease arrangement. Moreover, the president of the firm told us that no employees typically worked at the virtual office. Additional investigative work revealed that the firm’s Web site listed a second address for the firm in McLean, Virginia, which, as noted above, is not in a HUBZone. Our site visit determined that this location was where the firm’s president and all qualifying employees worked. In addition, the payroll documents we received from the firm revealed that the percentage of employees living in a HUBZone during calendar year 2007 ranged from a low of 6 percent to a high of 15 percent—far below the required 35 percent. Based on our analysis of FPDS-NG data, between fiscal years 2006 and 2007 federal agencies obligated about $3.3 million for payment to the firm. Of this, over $460,000 in HUBZone contracts were obligated by the Department of Veterans Affairs. Further, in addition to admitting that the firm did not meet the principal office requirement, the president was candid about having received subcontracting opportunities from large prime contracting firms based solely on the firm’s HUBZone certification. According to the president, the prime contractors listed the HUBZone firm as part of their “team” to satisfy their HUBZone subcontracting goals. However, he contended that these teaming arrangements only occasionally resulted in the prime contractor purchasing equipment from his firm. Because the firm continued to represent itself as HUBZone-eligible, we referred it to SBA OIG for further investigation. Virtual offices are located nationwide and provide a range of services for individuals and firms, including part-time use of office space or conference rooms, telephone answering services, and mail forwarding. During our proactive testing discussed above, we leased virtual office services from an office suite located in a HUBZone and fraudulently submitted this address to SBA as our principal office location. The terms of the lease allowed us to schedule use of an office space for up to 16 hours per month but did not provide permanent office space. Even though we never used the virtual office space we rented, we still obtained HUBZone certification from SBA. Our subsequent investigation of two virtual office suites located in HUBZones—one of which we used to obtain our certification—found that other firms had retained HUBZone certification using virtual office services. Based on our review of lease agreements, we found that, absent additional services provided by the virtual office suites, some of these firms could not possibly meet principal office requirements. For example: One HUBZone firm that claimed its principal office was a virtual office address had a lease agreement providing only mail-forwarding services. The mail was forwarded to a different address not located in a HUBZone. Absent additional services provided by the virtual office suite, it would be impossible for this firm to perform any work at the virtual office location with only a mail-forwarding agreement.
Five HUBZone firms that claimed their principal office was a virtual office address leased less than 10 hours of conference room usage per month while maintaining at least one other office outside of a HUBZone. Absent additional services provided by the virtual office suite, it would be impossible for these firms to meet principal office requirements with only 10 hours of conference room time per month, leading us to conclude that the majority of work at these companies was performed in the other office locations. Five other firms claimed their principal office was a virtual office address but leased office space for less than 20 hours a month. These firms simultaneously maintained at least one other office outside of a HUBZone. Absent additional services provided by the virtual office suite, it would be impossible for these firms to meet principal office requirements with only 20 hours of rented office time per month, leading us to conclude that the majority of work at these companies was performed in the other office locations. The virtual office arrangements we investigated clearly violate the requirements of the HUBZone program and, in some cases, exemplify fraudulent representations. We briefed SBA officials on the results of our investigation on July 9, 2008. They were concerned about the vulnerabilities to fraud and abuse we identified. SBA officials expressed interest in pursuing action, including suspension or debarment, against our 10 case study firms and any firm that may falsely represent its eligibility for the HUBZone program. They were also open to suggestions to improve fraud prevention controls over the HUBZone application process, such as performing steps to identify addresses of virtual office suites and mailboxes rented from postal retail centers. Madam Chairwoman and Members of the Committee, this concludes my statement. I would be pleased to answer any questions that you or other Members of the Committee may have at this time. For further information about this testimony, please contact Gregory D. Kutz at (202) 512-6722 or kutzg@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. To proactively test whether the Small Business Administration’s (SBA) controls over the Historically Underutilized Business Zone (HUBZone) application process were operating effectively, we applied for HUBZone certification using bogus firms, fictitious employees, fabricated explanations, and counterfeit documents to determine whether SBA would certify firms based on fraudulent information. We used publicly available guidance provided by SBA to create four applications. We did the minimal work required to establish the bogus small business firms represented by our applications, such as obtaining a Data Universal Numbering System (DUNS) number from Dun & Bradstreet and registering with the Central Contractor Registration database. We then applied for HUBZone certification with our four firms using SBA’s online HUBZone application system. Importantly, the principal office addresses we provided to SBA, although technically located in HUBZones, were locations that would appear suspicious if investigated by SBA. When necessary (e.g., at the request of SBA application reviewers), we supplemented our applications with fabricated explanations and counterfeit supporting documentation created with publicly available computer software and hardware and other material.
To identify examples of firms that participate in the HUBZone program even though they do not meet eligibility requirements, we first obtained and analyzed a listing of HUBZone firms from SBA’s Certification Tracking System as of January 2008 and federal procurement data from the Federal Procurement Data System–Next Generation (FPDS-NG) for fiscal years 2006 and 2007. We then performed various steps, including corresponding with SBA officials and electronically testing the data elements used for our work, to assess the reliability of the data. We concluded that the data were sufficiently reliable for the purposes of our investigation. To develop our case studies, we limited our investigation to certified HUBZone firms with a principal office located in the Washington, D.C., metropolitan area and for which federal agencies reported obligations on HUBZone preference contracts—HUBZone sole source, HUBZone set-aside, and HUBZone price preference—totaling more than $450,000 for fiscal years 2006 and 2007. We selected 16 firms for further investigation based on indications that they failed to operate a principal office in a HUBZone, failed to ensure that at least 35 percent of their employees resided in a HUBZone, or both. We also investigated one firm referred through GAO’s FraudNet Hotline. For the selected 17 firms, we then used investigative methods, such as interviewing firm managers and reviewing firm payroll documents, to gather information about the firms and to determine whether the firms met HUBZone requirements. We also reviewed information about each firm in the Online Representations and Certifications Application system (ORCA). During our investigation, we also identified a couple of addresses for virtual office suites in the Washington, D.C., metropolitan area where several different HUBZone firms claimed to have their principal office. We investigated two of these virtual office suites to determine whether HUBZone firms at these locations met program eligibility requirements. For the selected virtual office suites, we obtained and reviewed the lease agreements between the HUBZone firms and the virtual office suite management and verified any of the HUBZone firms’ other business addresses.
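As a rough illustration of the case-study selection step described above, the sketch below filters a combined dataset of certified firms and their reported obligations. The field names and records are invented; only the selection criteria (a Washington, D.C., metropolitan-area principal office and more than $450,000 in HUBZone preference obligations for fiscal years 2006 and 2007) come from the methodology described in this appendix.

```python
# Hypothetical records combining the SBA certification listing with
# FPDS-NG obligations on HUBZone preference contracts (sole source,
# set-aside, and price preference) for fiscal years 2006 and 2007.
firms = [
    {"name": "Firm A", "metro_area": "Washington, D.C.",
     "fy06_07_hubzone_obligations": 3_900_000},
    {"name": "Firm B", "metro_area": "Washington, D.C.",
     "fy06_07_hubzone_obligations": 120_000},
    {"name": "Firm C", "metro_area": "Denver",
     "fy06_07_hubzone_obligations": 900_000},
]

# Apply the two selection criteria from the methodology.
candidates = [
    f for f in firms
    if f["metro_area"] == "Washington, D.C."
    and f["fy06_07_hubzone_obligations"] > 450_000
]

print([f["name"] for f in candidates])  # ['Firm A']
```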
The Historically Underutilized Business Zone (HUBZone) program is intended to provide federal contracting opportunities to qualified small business firms in order to stimulate development in economically distressed areas. As manager of the HUBZone program, the Small Business Administration (SBA) is responsible for certifying whether firms meet HUBZone program requirements. To participate in the HUBZone program, small business firms must certify that their principal office (i.e., the location where the greatest number of employees work) is located in a HUBZone and that at least 35 percent of the firm's employees live in HUBZones. Given the Committee's concern over fraud and abuse in the HUBZone program, GAO was asked to (1) proactively test whether SBA's controls over the HUBZone application process were operating effectively to limit program certification to eligible firms and (2) identify examples of selected firms that participate in the HUBZone program even though they do not meet eligibility requirements. To perform its proactive testing, GAO created four bogus businesses with fictitious owners and employees and applied for HUBZone certification. GAO also selected 17 HUBZone firms based on certain criteria, such as receipt of HUBZone contracts, and investigated whether they met key program eligibility requirements. GAO identified substantial vulnerabilities in SBA's application and monitoring process, clearly demonstrating that the HUBZone program is vulnerable to fraud and abuse. Considering the findings of a related report and testimony issued today, GAO's work shows that these vulnerabilities exist because SBA does not have an effective fraud-prevention program in place. Using fictitious employee information and fabricated documentation, GAO easily obtained HUBZone certification for four bogus firms. For example, to support one HUBZone application, GAO claimed that its principal office was the same address as a Starbucks coffee store that happened to be located in a HUBZone. If SBA had performed a simple Internet search on the address, it would have been alerted to this fact. Further, two of GAO's applications used leased mailboxes from retail postal service centers. A post office box clearly does not meet SBA's principal office requirement. GAO also identified 10 firms from the Washington, D.C., metro area that were participating in the HUBZone program even though they clearly did not meet eligibility requirements. Since 2006, federal agencies have obligated a total of more than $105 million to these 10 firms for performance as the prime contractor on federal contracts. Of the 10 firms, 6 met neither the principal office nor the employee residency requirement, while 4 met the principal office requirement but significantly failed the employee residency requirement. For example, one firm that failed both principal office and employee residency requirements had initially qualified for the HUBZone program using the address of a small room above a dentist's office. GAO's site visit to this room found only a computer and filing cabinet. No employees were present, and the building owner told GAO investigators that nobody had worked there "for some time." During its investigation, GAO also found that some HUBZone firms used virtual office suites to fulfill SBA's principal office requirement. GAO investigated two of these virtual office suites and identified examples of firms that could not possibly meet principal office requirements given the nature of their leases.
For example, one firm continued to certify that it was a HUBZone firm even though its lease provided only mail-forwarding services at the virtual office suite.
The use of information technology (IT) to electronically collect, store, retrieve, and transfer clinical, administrative, and financial health information has great potential to help improve the quality and efficiency of health care. Historically, patient health information has been scattered across paper records kept by many different caregivers in many different locations, making it difficult for a clinician to access all of a patient’s health information at the time of care. Lacking access to these critical data, a clinician may be challenged to make the most informed decisions on treatment options, potentially putting the patient’s health at greater risk. The use of electronic health records can help provide this access and improve clinical decisions. Electronic health records are particularly crucial for optimizing the health care provided to military personnel and veterans. While in military status and later as veterans, many VA and DOD patients tend to be highly mobile and may have health records residing at multiple medical facilities within and outside the United States. Making such records electronic can help ensure that complete health care information is available for most military service members and veterans at the time and place of care, no matter where it originates. Although they have identified many common health care business needs, both departments have spent large sums of money to develop and operate separate electronic health record systems that they rely on to create and manage patient health information. VA uses its integrated medical information system—the Veterans Health Information Systems and Technology Architecture (VistA)—which was developed in-house by VA clinicians and IT personnel. The system consists of 104 separate computer applications, including 56 health provider applications; 19 management and financial applications; 8 registration, enrollment, and eligibility applications; 5 health data applications; and 3 information and education applications. Besides being numerous, these applications have been customized at all 128 VA sites. According to the department, this customization increases the cost of maintaining the system, as it requires that maintenance also be customized. In 2001, the Veterans Health Administration undertook an initiative to modernize VistA by standardizing patient data and modernizing the health information software applications. In doing so, its goal was to move from the hospital-centric environment that had long characterized the department’s health care operations to a veteran-centric environment built on an open, robust systems architecture that would more efficiently provide the same functions and benefits as the existing system, plus enhanced functions based on computable data. VA planned to take an incremental approach to the initiative, based on six phases that were to be completed in 2018. The department reported spending almost $600 million from 2001 to 2007 on eight projects, including an effort that resulted in a repository containing selected standardized health data, as part of the effort to modernize VistA. In April 2008, the department estimated an $11 billion total cost to complete, by 2018, the modernization that was planned at that time. However, according to VA officials, the modernization effort was terminated in August 2010.
For its part, DOD relies on its Armed Forces Health Longitudinal Technology Application (AHLTA), which comprises multiple legacy medical information systems that the department developed from commercial software products that were customized for specific uses. For example, the Composite Health Care System (CHCS), which was formerly DOD’s primary health information system, is still in use to capture information related to pharmacy, radiology, and laboratory order management. In addition, the department uses Essentris (also called the Clinical Information System), a commercial health information system customized to support inpatient treatment at military medical facilities. DOD obligated approximately $2 billion for AHLTA between 1997 and 2010. The department initiated efforts to improve system performance and enhance functionality and planned to continue its efforts to stabilize the AHLTA system through 2015 as a “bridge” to the new electronic health record system it intended to acquire. According to DOD, the planned new electronic health record system—known as the EHR Way Ahead—was to be the department’s comprehensive, real-time health record for service members and their families and beneficiaries. In January 2010, the department initiated an analysis of alternatives for meeting system capability requirements it had identified. A key goal for sharing health information among providers, such as between VA’s and DOD’s health care systems, is achieving interoperability. Interoperability enables different information systems or components to exchange information and to use the information that has been exchanged. Interoperability can be achieved at different levels. At the highest level, electronic data are computable (that is, in a format that a computer can understand and act on to, for example, provide alerts to clinicians on drug allergies). At a lower level, electronic data are structured and viewable, but not computable. The value of data at this level is that they are structured so that data of interest to users are easier to find. At a still lower level, electronic data are unstructured and viewable, but not computable. With unstructured electronic data, a user would have to find needed or relevant information by searching uncategorized data. Beyond these, paper records can also be considered interoperable (at the lowest level) because they allow data to be shared, read, and interpreted by human beings. However, they do not provide decision support capabilities, such as automatic alerts about a particular patient’s health, or other reported advantages of automation. We have previously reported that all data may not require the same level of interoperability, nor is interoperability at the highest level achievable in all cases. For example, unstructured, viewable data may be sufficient for such narrative information as clinical notes. Interoperability allows patients’ electronic health information to move with them from provider to provider, regardless of where the information originated. If electronic health records conform to interoperability standards, they can be created, managed, and consulted by authorized clinicians and staff across more than one health care organization, thus providing patients and their caregivers with the information required for optimal care. Interoperability depends on the use of agreed-upon standards to ensure that information can be shared and used.
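To make the distinction among these levels concrete, the following sketch is our own illustration in Python; the level names, record fields, and drug codes are invented and do not come from VA or DOD systems. It shows why decision support such as a drug-allergy alert is only possible when data are computable:

```python
# Invented, illustrative model of the interoperability levels described
# above; the level names, record fields, and drug codes are ours, not VA's
# or DOD's.

from enum import IntEnum

class Interop(IntEnum):
    PAPER = 0                  # shareable, but readable only by people
    UNSTRUCTURED_VIEWABLE = 1  # electronic free text; a person must search it
    STRUCTURED_VIEWABLE = 2    # categorized fields, easier to find, not coded
    COMPUTABLE = 3             # standardized coded data a system can act on

def allergy_alert(record, new_drug_code):
    """Alert automatically only when the allergy data are computable."""
    if record["level"] < Interop.COMPUTABLE:
        return False  # lower levels can be viewed but not machine-checked
    return new_drug_code in record["allergy_codes"]

computable = {"level": Interop.COMPUTABLE, "allergy_codes": {"RX001"}}
viewable = {"level": Interop.STRUCTURED_VIEWABLE,
            "allergy_text": "patient reports penicillin allergy"}

print(allergy_alert(computable, "RX001"))  # True: the system can alert
print(allergy_alert(viewable, "RX001"))    # False: a clinician must read it
```

The point of the sketch is simply that structured-but-viewable data still require a human reader, whereas coded, computable data can trigger the alert automatically.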
In the health IT field, standards may govern areas ranging from technical issues, such as file types and interchange systems, to content issues, such as medical terminology. Since 1998, VA and DOD have relied on a patchwork of initiatives involving their health information systems to achieve electronic health record interoperability. These have included efforts to share viewable data in existing (legacy) systems; link and share computable data between the departments’ modernized health data repositories; establish and address interoperability objectives to meet specific data-sharing needs; develop a virtual lifetime electronic health record to track patients through active service and veteran status; and implement IT capabilities for the first joint federal health care center. While these initiatives have collectively yielded increased data sharing in various capacities, a number of them have nonetheless been plagued by persistent management challenges, which have created barriers to achieving the fully interoperable electronic health record capabilities long sought. Among the departments’ earliest efforts to achieve interoperability was the Government Computer-Based Patient Record (GCPR) initiative, which was begun in 1998 with the intent of providing an electronic interface that would allow physicians and other authorized users of VA’s and DOD’s health facilities to access data from the other agency’s health facilities. The interface was expected to compile requested patient health information in a temporary, “virtual” record that could be displayed on a user’s computer screen. However, in reporting on this initiative in April 2001, we found that accountability for GCPR was blurred across several management entities and that basic principles of sound IT project planning, development, and oversight had not been followed, creating barriers to progress. For example, clear goals and objectives had not been set; detailed plans for the design, implementation, and testing of the interface had not been developed; and critical decisions were not binding on all partners. While both departments concurred with our recommendations that they, among other things, create comprehensive and coordinated plans for the effort, progress on the initiative continued to be disappointing. The departments subsequently revised the strategy for GCPR and, in May 2002, narrowed the scope of the initiative to focus on enabling DOD to electronically transfer service members’ health information to VA upon their separation from active duty. The initiative—renamed the Federal Health Information Exchange (FHIE)—was completed in 2004. Building on FHIE, VA and DOD also established the Bidirectional Health Information Exchange (BHIE) in 2004, which was aimed at allowing clinicians at both departments viewable access to records on shared patients (that is, those who receive care from both departments, such as veterans who receive outpatient care from VA clinicians and then are hospitalized at a military treatment facility). The interface also enabled DOD sites to see previously inaccessible data at other DOD sites. Further, in March 2004, the departments began an effort to develop an interface linking VA’s Health Data Repository and DOD’s Clinical Data Repository, as part of a long-term initiative to achieve the two-way exchange of health information between the departments’ modernized systems—known as the Clinical Data Repository/Health Data Repository initiative, or CHDR.
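The technical heart of a CHDR-style exchange of computable data is terminology mapping: each department translates its local codes into an agreed-upon standard vocabulary before sharing. The sketch below is a minimal illustration with invented codes; it does not reflect the actual CHDR terminology mappings:

```python
# Minimal sketch of standards-based exchange between two repositories. The
# local and shared codes are invented; they do not reflect the actual CHDR
# terminology mappings.

VA_TO_STANDARD = {"va-med-117": "STD-AMOXICILLIN"}
DOD_TO_STANDARD = {"dod-rx-042": "STD-AMOXICILLIN"}

def to_standard(local_code, mapping):
    """Translate a department-local code to the agreed-upon standard code."""
    if local_code not in mapping:
        raise ValueError("no standard mapping for " + local_code)
    return mapping[local_code]

# Once both sides normalize to the shared vocabulary, data originating in
# one department's system become computable in the other's.
va_code = to_standard("va-med-117", VA_TO_STANDARD)
dod_code = to_standard("dod-rx-042", DOD_TO_STANDARD)
assert va_code == dod_code == "STD-AMOXICILLIN"
```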
The departments had planned to be able to exchange selected health information through CHDR by October 2005. However, in June 2004, we reported that the efforts of VA and DOD in this area demonstrated a number of management weaknesses. Among these were the lack of a well-defined architecture for describing the interface for a common health information exchange, an established project management lead entity and structure to guide the investment in the interface and its implementation, and a project management plan defining the technical and managerial processes necessary to satisfy project requirements. Accordingly, we recommended that the departments address these weaknesses, and they agreed to do so. In September 2005, we testified that the departments had improved the management of the CHDR program, but that this program continued to face significant challenges—in particular, with developing a project management plan of sufficient specificity to be an effective guide for the program. In a June 2006 testimony we noted that the project did not meet a previously established milestone: to be able to exchange outpatient pharmacy data, laboratory results, allergy information, and patient demographic information on a limited basis by October 2005. By September 2006, the departments had taken actions which ensured that the CHDR interface linked the departments’ separate repositories of standardized data to enable a two-way exchange of computable outpatient pharmacy and medication allergy information. Nonetheless, we noted that the success of CHDR would depend on the departments instituting a highly disciplined approach to the project’s management. To accelerate the exchange of electronic health information between the two departments, the National Defense Authorization Act (NDAA) for Fiscal Year 2008 included provisions directing VA and DOD to jointly develop and implement, by September 30, 2009, fully interoperable electronic health record systems or capabilities. To facilitate compliance with the act, the departments’ Interagency Clinical Informatics Board, made up of senior clinical leaders who represent the user community, began establishing priorities for interoperable health data between VA and DOD. In this regard, the board was responsible for determining priorities for electronic data sharing between the departments, as well as what data should be viewable and what data should be computable. Based on its work, the board established six interoperability objectives for meeting the departments’ data-sharing needs: Refine social history data: DOD was to begin sharing with VA the social history data that are captured in the DOD electronic health record. Such data describe, for example, patients’ involvement in hazardous activities and tobacco and alcohol use. Share physical exam data: DOD was to provide an initial capability to share with VA its electronic health record information that supports the physical exam process when a service member separates from active military duty. Demonstrate initial network gateway operation: VA and DOD were to demonstrate the operation of secure network gateways to support joint VA-DOD health information sharing. Expand questionnaires and self-assessment tools: DOD was to provide all periodic health assessment data stored in its electronic health record to VA such that questionnaire responses would be viewable with the questions that elicited them. 
Expand Essentris in DOD: DOD was to expand its inpatient medical records system (CliniComp’s Essentris product suite) to at least one additional site in each military medical department (one Army, one Air Force, and one Navy, for a total of three sites). Demonstrate initial document scanning: DOD was to demonstrate an initial capability for scanning service members’ medical documents into its electronic health record and sharing the documents electronically with VA. The departments asserted that they took actions that met the six objectives and, in conjunction with capabilities previously achieved (e.g., FHIE, BHIE, and CHDR), had met the September 30, 2009, deadline for achieving full interoperability as required by the act. Nonetheless, the departments planned additional work to further increase their interoperable capabilities, stating that these actions reflected the departments’ recognition that clinicians’ needs for interoperable electronic health records are not static. In this regard, the departments focused on additional efforts to meet clinicians’ evolving needs for interoperable capabilities in the areas of social history and physical exam data, expanding implementation of Essentris, and additional testing of document scanning capabilities. Even with these actions, however, we identified a number of challenges the departments faced in managing their efforts in response to the 2008 NDAA. Specifically, we identified challenges with respect to performance measurement, project scheduling, and planning. For example, in a January 2009 report, we noted that the departments’ key plans did not identify results-oriented (i.e., objective, quantifiable, and measurable) performance goals and measures that are characteristic of effective planning and can be used as a basis to track and assess progress toward the delivery of new interoperable capabilities. We pointed out that without establishing results-oriented goals and reporting progress using measures relative to the established goals, the departments and their stakeholders would not have the comprehensive picture that they needed to effectively manage their progress toward achieving increased interoperability. Accordingly, we recommended that DOD and VA take action to develop such goals and performance measures to be used as a basis for providing meaningful information on the status of the departments’ interoperability initiatives. In response, the departments stated that such goals and measures would be included in the next version of the VA/DOD Joint Executive Council Joint Strategic Plan. However, that plan was not approved until April 2010—7 months after the departments asserted they had met the deadline for achieving full interoperability. In addition to its provisions directing VA and DOD to jointly develop fully interoperable electronic health record systems or capabilities, the 2008 NDAA called for the departments to set up an interagency program office (IPO) to be a single point of accountability for their efforts to implement these systems or capabilities by the September 30, 2009, deadline. Accordingly, in January 2009, the office completed its charter, articulating, among other things, its mission and functions with respect to attaining interoperable electronic health data. The charter further identified the office’s responsibilities for carrying out its mission in areas such as oversight and management, stakeholder communication, and decision making. 
Among the specific responsibilities identified in the charter was the development of a plan, schedule, and performance measures to guide the departments’ electronic health record interoperability efforts. In July 2009, we reported that the IPO had not fulfilled key management responsibilities identified in its charter, such as the development of an integrated master schedule and a project plan for the department’s efforts to achieve full interoperability. Without these important tools, the office was limited in its ability to effectively manage and meaningfully report progress on the delivery of interoperable capabilities. We recommended that the IPO establish a project plan and a complete and detailed integrated master schedule. In response to our recommendation, the office began to develop an integrated master schedule and project plan that included information about its ongoing interoperability activities. In another attempt at furthering efforts to increase electronic health record interoperability, in April 2009, the President announced that VA and DOD would work together to define and build the Virtual Lifetime Electronic Record (VLER) to streamline the transition of electronic medical, benefits, and administrative information between the two departments. VLER was intended to enable access to electronic records for service members as they transition from military to veteran status, and throughout their lives. Further, the initiative was to expand the departments’ health information-sharing capabilities by enabling access to private-sector health data. Shortly after the April 2009 announcement, VA, DOD, and the IPO began working to define and plan for the initiative’s health data-sharing activities, which they refer to as VLER Health. In June 2009, the departments adopted a phased implementation strategy consisting of a series of 6-month pilot projects to deploy a set of health data exchange capabilities between existing electronic health record systems at sites around the country. Each pilot project was intended to build upon the technical capabilities of its predecessor, resulting in a set of baseline capabilities to inform project planning and guide the implementation of VLER nationwide. In June 2010, the departments announced their goal to deploy VLER Health nationwide by the end of 2012. The first pilot, which started in August 2009, in San Diego, California, resulted in VA, DOD, and Kaiser Permanente being able to share a limited set of test patient data. Subsequently, between March 2010 and January 2011, VA and DOD conducted another pilot in the Tidewater area of southeastern Virginia, which focused on sharing the same data as the San Diego pilot plus additional laboratory data. Further, during 2011, the departments implemented two additional pilots in Washington state. In a February 2011 report on the departments’ efforts to address their common health IT needs, we noted that VA and DOD had identified a high-level approach for implementing VLER and had designated the IPO as the single point of accountability for the effort. However, the departments had not developed a comprehensive plan identifying the target set of capabilities that they intended to demonstrate in the pilot projects and then implement on a nationwide basis at all domestic VA and DOD sites by the end of 2012. Moreover, the departments conducted pilot projects without attending to key planning activities that are necessary to guide the initiative.
For example, as of February 2011, the IPO had not developed an approved integrated master schedule, master program plan, or performance metrics for the VLER Health initiative, as outlined in the office’s charter. We noted that if the departments did not address these issues, their ability to effectively deliver capabilities to support their joint health IT needs would be uncertain. We recommended that the Secretaries of VA and DOD strengthen their efforts to establish VLER by developing plans that would include scope definition, cost and schedule estimation, and project plan documentation and approval. Officials from both departments agreed with the recommendation, and we have continued to monitor their actions toward its implementation. Nevertheless, the departments were not successful in meeting their original goal of implementing VLER nationwide by the end of 2012. (See GAO, Electronic Health Records: DOD and VA Should Remove Barriers and Improve Efforts to Meet Their Common System Needs, GAO-11-265 (Washington, D.C.: Feb. 2, 2011).) The departments also collaborated on establishing a new joint medical facility, known as the Captain James A. Lovell Federal Health Care Center (FHCC). The FHCC is unique in that it is to be the first fully integrated federal health care center for use by both VA and DOD beneficiaries, with an integrated workforce, a joint funding source, and a single line of governance. In April 2010, the Secretaries of VA and DOD signed an executive agreement that established the FHCC and, in accordance with the fiscal year 2010 NDAA, defined the relationship between the two departments for operating the new, integrated facility. Among other things, the executive agreement specified three key IT capabilities that VA and DOD were required to have in place by the FHCC’s opening day, in October 2010, to facilitate interoperability of their electronic health record systems: medical single sign-on, which would allow staff to use one screen to access both the VA and DOD electronic health record systems; single patient registration, which would allow staff to register patients in both systems simultaneously; and orders portability, which would allow VA and DOD clinicians to place, manage, and update orders from either department’s electronic health record system for radiology, laboratory, consults (specialty referrals), and pharmacy services. However, in our February 2011 report, we identified improvements the departments could make to the FHCC effort, noting that project planning for the center’s IT capabilities was incomplete. We specifically noted that the departments had not defined the project scope in a manner that identified all detailed activities. Consequently, they were not positioned to reliably estimate the project cost or establish a baseline schedule that could be used to track project performance. Based on these findings, we expressed concern that VA and DOD had jeopardized their ability to fully and expeditiously provide the FHCC’s needed IT system capabilities. We recommended that the Secretaries of VA and DOD strengthen their efforts to establish the joint IT system capabilities for the FHCC by developing plans that included scope definition, cost and schedule estimation, and project plan documentation and approval. Although officials from both departments agreed with our recommendation, the departments’ actions were not sufficient to preclude delays in delivering the FHCC’s IT system capabilities, as we subsequently described in July 2011 and June 2012.
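As a rough illustration of what the second of these capabilities entails, the sketch below is a deliberate simplification, not the FHCC design; the class and field names are invented. It registers a patient in two backing record systems with a single operation:

```python
# A deliberately simplified sketch (not the FHCC design) of single patient
# registration: one call updates both departments' systems so that staff do
# not have to enter the same patient twice.

class RecordSystem:
    """Stands in for a department's electronic health record system."""
    def __init__(self, name):
        self.name = name
        self.patients = {}

    def register(self, patient_id, demographics):
        self.patients[patient_id] = demographics

def register_once(patient_id, demographics, *systems):
    """Register the patient in every backing system in one operation."""
    for system in systems:
        system.register(patient_id, demographics)

vista = RecordSystem("VistA")   # VA's system
ahlta = RecordSystem("AHLTA")   # DOD's system
register_once("P-1001", {"name": "Jane Doe", "dob": "1980-01-01"}, vista, ahlta)
assert "P-1001" in vista.patients and "P-1001" in ahlta.patients
```

Before those delays are described, note that medical single sign-on and orders portability follow the same pattern: a single front-end action fans out to both departments' systems.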
In a July 2011 report, we specifically noted that none of the three IT capabilities had been implemented by the time of the FHCC’s opening in October 2010, as required by the executive agreement. However, FHCC officials reported that the medical single sign-on and single patient registration capabilities had become operational in December 2010. In June 2012, we again reported on the departments’ efforts to implement the FHCC’s required IT capabilities and found that portions of the orders portability capability—related to the pharmacy and consults components—remained delayed, although orders portability for radiology had become operational in June 2011 and for laboratory in March 2012. (See GAO, VA/DOD Federal Health Care Center: Costly Information Technology Delays Continue and Evaluation Plan Lacking, GAO-12-669 (Washington, D.C.: June 26, 2012).) VA and DOD officials described workarounds that the departments had implemented as a result of the delays, but could not provide a time line for completion of the pharmacy component, and estimated completion of the consults component by March 2013. The officials reported that, as of March 2012, the departments had spent about $122 million on developing and implementing IT capabilities at the FHCC. However, they were unable to quantify the total cost for all of the workarounds resulting from delayed IT capabilities. Beyond these initiatives, the departments weighed three possible approaches for modernizing their electronic health record capabilities: (1) develop a new, joint electronic health record system; (2) upgrade either the existing VistA or AHLTA legacy system to meet the needs of the other organization; or (3) continue to pursue separate systems while coordinating on a common infrastructure with data interoperability. In March 2011, the secretaries committed the two departments to the first approach—that is, the development of a new common integrated electronic health record (iEHR) system. In May 2012, they announced their goal of implementing the integrated health record across the departments by 2017. According to the departments, pursuing iEHR was expected to enable VA and DOD to align resources and investments with common business needs and programs, resulting in a platform that would replace the two departments’ separate electronic health record systems with a common system. In addition, because it would involve both departments using the same system, this approach was expected to largely sidestep the challenges they had historically encountered in trying to achieve interoperability between separate systems. The departments developed an iEHR business case in August 2012 to justify this approach, which stated that the use of a common integrated system would support increased collaboration between both departments and would lead to joint investment opportunities. Further, this approach was consistent with a previous study conducted by the departments showing that over 97 percent of inpatient functional requirements were common to both VA and DOD. According to the iEHR business case, the use of a common integrated system would address their similar health information system needs. Toward this end, initial development plans called for the single, joint iEHR system to consist of 54 clinical capabilities that would be delivered in six increments between 2014 and 2017, with all existing applications in VistA and AHLTA continuing uninterrupted until full delivery of the new capabilities. The program had planned to send out requests for proposals (RFP) for initial iEHR capabilities in the first quarter of fiscal year 2013.
Among the agreed-upon capabilities to be delivered were those supporting laboratory, anatomic pathology, pharmacy, and immunizations. In addition, the initiative was to deliver several common infrastructure components—an enterprise architecture, presentation layer or graphical user interface, data centers, and interface and exchange standards. The system was to be primarily built by purchasing commercially available solutions for joint use, with noncommercial solutions developed or adopted only when a commercial alternative was unavailable. According to the departments’ plans, initial operating capability, which was to be achieved in 2014, was intended to establish the architecture and include deployment of new immunization and laboratory capabilities to VA and DOD facilities in San Antonio, Texas, and Hampton Roads, Virginia. Full operating capability, planned for 2017, was intended to deploy all iEHR capabilities to all VA and DOD medical facilities. In October 2011, VA and DOD re-chartered the IPO with increased authority and expanded responsibilities for leading the integrated system effort. The charter gave the IPO responsibility for program planning and budgeting, acquisition and development, and implementation of clinical capabilities. In particular, the IPO Director was given authority to acquire, develop, and implement IT systems for iEHR, as well as to develop interagency budget and acquisition strategies that would meet VA’s and DOD’s respective requirements in these areas. Further, as program executive for iEHR, the director of this office was given the authority to use DOD and VA staff to support the program. An estimate developed by the IPO in August 2012 put the cost of the integrated system at $29 billion (adjusted for inflation) from fiscal year 2013 through fiscal year 2029. According to the office’s director, this estimate included $9 billion for the acquisition of the system and $20 billion to sustain its operations. The office reported actually spending about $564 million on iEHR between October 2011 and June 2013. According to the June 2013 IPO expenditure plan, these expenditures included deployment of a new graphical user interface for viewing patient data to selected locations; creation of a development and test center/environment for iEHR; planning efforts required for acquisition of the initial capabilities—laboratory, immunization, and pharmacy with orders services; and acquisition of program management, systems integration, and engineering and testing services required to ensure completion of required planning activities. About 2 years after taking actions toward the development of iEHR, VA and DOD announced changes to their plan—essentially abandoning their effort to develop a single, integrated electronic health record system for both departments. In place of this initiative, the departments stated that VA would modernize its existing VistA health information system, DOD would buy a commercially available system to replace its existing AHLTA system, and the departments would ensure interoperability between the two new systems. However, the decision to change the iEHR program strategy was not justified on the basis of analyses that considered the estimated cost and schedule for the new approach of using separate systems. In addition, while the departments have begun planning for their separate modernization efforts, they have not completed plans describing how and in what time frame they intend to achieve an interoperable electronic health record. 
In February 2013, the Secretaries of Defense and Veterans Affairs announced that they would not continue with their joint development of a single electronic health record system that was intended to result in an integrated electronic health record. This decision resulted from an assessment of the iEHR program that the secretaries requested in December 2012 because of their concerns about the program facing challenges in meeting deadlines, costing too much, and taking too long to deliver capabilities. Based on this assessment, the departments announced that they would rely on separate systems to achieve an interoperable electronic health record, departing from their originally planned solution of using a single system to meet their similar health information system needs. Specifically, this new approach would involve each department either developing or acquiring a new core set of electronic health record capabilities (e.g., workflow and order management), with additional applications or capabilities to be added as needed. According to senior VA and DOD officials, the development or acquisition of similar core sets of electronic health record capabilities would be achieved by VA modernizing its existing VistA health information system and DOD buying a commercially available system to replace its existing AHLTA health information system. In this regard, VA has stated that it intends to enhance and modernize its existing VistA system under a new program, called VistA Evolution. For its part, in May 2013, DOD announced that it would competitively award a contract to acquire a limited set of core capabilities that might include VistA-based commercial solutions. However, DOD then determined that, because of the need to integrate future capabilities, it would cost more to acquire and add to a limited core set of capabilities than to acquire a full suite of capabilities. Thus, the department subsequently expanded its effort and has stated that it is now pursuing the acquisition of a replacement system for its multiple legacy electronic health record systems under a new program—the DOD Healthcare Management System Modernization (DHMSM) program—that is being managed by DOD’s Under Secretary of Defense for Acquisition, Technology, and Logistics. In addition, the departments have said they intend to focus on existing projects aimed at increasing the interoperability of health data between their legacy systems. These included expanding the use of a graphical user interface for viewing patient information; agreeing upon an approach for jointly identifying patients; developing a secure network infrastructure for VA and DOD clinicians to access patient information; and correlating, or mapping, department data to seven clinical domains and organizing them in a standardized patient record. According to the IPO’s December 18, 2013, report to Congress, the departments completed the initial activities for these projects in December 2013 and outlined further actions the departments plan to take on these efforts. Although VA and DOD based their decision to no longer pursue a single system on the assertion that their new approach to pursue separate systems would be less expensive and faster, the departments have not demonstrated the credibility of this assertion. Best practices have identified the development and use of cost and schedule estimates as essential elements for informed decision making when selecting potential IT investments.
In particular, major investment decisions (which can include, for example, terminating or significantly restructuring an ongoing program) should be justified using analyses that compare relative costs and schedules for proposed investments. When effectively implemented, these practices help ensure that agencies have a sound rationale for their investment decisions. However, VA and DOD have proceeded with their current plan without developing cost and schedule analyses to support the assertion that the current plan to pursue separate modernized systems while enabling interoperability between them would be less expensive and could be achieved faster than developing a single system. Consistent with best practices, such analyses would require, for example, development and documentation of revised cost and schedule estimates that include DOD’s commercial acquisition, VA’s modernization of VistA, and the joint interoperability effort, as well as a comparison of these with the estimates for the original single-system approach. Instead of developing such a joint analysis to consider their common health care business needs, however, each department made its own individual determination on what the best course of action would be. These determinations reflect VA’s and DOD’s divergent philosophies for pursuing IT systems development: VA strongly supports in-house development and modernization of its homegrown system, and DOD supports acquiring commercial solutions. Specifically, according to the VA Under Secretary for Health, pursuing a modernization of VistA instead of another solution was an obvious choice for VA because the department already owns the system and has in-house technical expertise to modernize and maintain it. Similarly, DOD considered alternatives to replace its legacy electronic health record system and concluded that pursuing a competitively based commercial system would be best for the department. The Under Secretary of Defense for Acquisition, Technology, and Logistics (AT&L) stated that acquiring a commercial system was the right business decision for DOD because the department is not in the business of developing IT systems, particularly when more advanced electronic health record solutions are available commercially. He added that VA’s reasons for modernizing VistA were logical for that department but did not apply to DOD. However, neither of the determinations made by VA and DOD considered cost and schedule estimates for modernizing or acquiring the departments’ new systems and achieving interoperability between them. Further, VA and DOD lack a process for identifying joint IT investments; such a process could help reconcile the departments’ divergent approaches, and its absence is one of the barriers to jointly addressing their health care system needs that we identified in February 2011 and recommended they address. Because their new approach is based on the courses of action that VA and DOD have independently determined to be best for them, and because they lack cost and schedule analyses to guide their decision making, the departments have not demonstrated that their new approach will provide service members, veterans, and their health care providers with an interoperable electronic health record at lower cost and in less time than the original plan. While VA and DOD have begun to pursue separate systems, they have not developed plans at either a strategic or program level that describe how they intend to achieve an interoperable electronic health record.
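For illustration only, the comparative analysis that best practices call for might look like the following sketch. The figures for the separate-systems alternative are invented; the single-system figures echo the IPO's August 2012 iEHR estimate ($9 billion acquisition, $20 billion sustainment) cited earlier:

```python
# All figures for the separate-systems alternative are invented; the single
# joint system figures echo the IPO's August 2012 estimate ($9B acquisition,
# $20B sustainment). The point is the form of the analysis, not the numbers.

alternatives = {
    "single joint system (iEHR)": {"acquisition": 9.0, "sustainment": 20.0, "years": 5},
    "separate systems + interop": {"acquisition": 7.5, "sustainment": 22.0, "years": 4},
}

for name, est in alternatives.items():
    lifecycle = est["acquisition"] + est["sustainment"]
    print(f"{name}: ${lifecycle:.1f}B lifecycle cost, {est['years']} years to deliver")

# A justified decision would rest on documented comparisons like this one,
# extended with risk and benefit estimates, rather than on assertion alone.
```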
Industry best practices and IT project management principles stress the importance of sound planning for any project. Inherent in such planning is the development and use of a project management plan that includes the project’s scope, lines of responsibility for all stakeholders, resource requirements, an estimated schedule for development and implementation, and performance measures. Additionally, plans should identify and prioritize program risks so that potential problems can be avoided before they become actual cost, schedule, and performance shortfalls. In addition, the National Defense Authorization Act (NDAA) for Fiscal Year 2014 required the departments to provide a detailed program plan for the oversight and execution of an interoperable electronic health record between the departments no later than January 31, 2014. Since VA and DOD announced their new approach in February 2013, the departments have been focused on planning for their separate modernization efforts: In December 2013, VA developed a VistA Evolution program plan for initial operating capability that is focused on system enhancements for VistA intended to provide at least two enhanced clinical capabilities to be deployed at two VA sites by the end of fiscal year 2014. The department is in the process of developing a separate program plan for VistA Evolution that is intended to provide an overview of VA’s efforts to achieve full operating capability by September 30, 2017. DOD released an initial draft RFP to industry on January 29, 2014, with a goal to release the final RFP for the system’s acquisition in July 2014. According to the DOD Healthcare Management Systems (DHMS) Program Executive Officer, following the release of the RFP, the department plans to award a contract for the replacement system in the third quarter of fiscal year 2015, with a goal of achieving initial operating capability for the program in the fourth quarter of fiscal year 2016. According to a DOD Acquisition Decision Memorandum in January 2014, the DHMS Program Executive Officer is to develop a health data-sharing and interoperability road map that is to address interoperability with VA, private health care providers, and patients. The road map is to be provided to DOD management by March 2014 for review. Additionally, in response to the fiscal year 2014 NDAA, VA and DOD briefed congressional staff in late January 2014 on their plans for VistA Evolution, plans for the DHMSM program, and their intention to achieve an interoperable electronic health record. Despite this briefing and initial steps toward their separate modernization efforts, the departments have not developed a plan that describes how they intend to achieve an interoperable electronic health record under their new approach of pursuing separate system modernizations. Specifically, the departments have not identified which clinical domains of health data will comprise the interoperable electronic health record, the estimated cost and schedule for the effort, or the lines of responsibility for all stakeholders involved. In addition, risks have not been identified and prioritized in order to help avoid potential problems before they become actual cost, schedule, and performance problems. Without having plans in place to provide key information on their effort to create an interoperable electronic health record, the departments are increasing the risk that the new approach will not be more cost efficient and timely than if they had continued with the single-system approach.
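As an illustration of the risk prioritization element of such planning, a project risk register typically scores each risk by likelihood and impact and ranks the results. The risks and numbers below are invented:

```python
# Invented risks and scores: a minimal sketch of the risk-register technique
# that project management plans use to prioritize potential problems before
# they become cost, schedule, or performance shortfalls.

risks = [
    {"risk": "interoperability standards not finalized", "likelihood": 0.7, "impact": 9},
    {"risk": "contract award slips a quarter",           "likelihood": 0.5, "impact": 6},
    {"risk": "key clinical staff unavailable for tests", "likelihood": 0.3, "impact": 4},
]

# Rank by expected severity (likelihood x impact), highest first.
for r in sorted(risks, key=lambda r: r["likelihood"] * r["impact"], reverse=True):
    print(f"{r['likelihood'] * r['impact']:.1f}  {r['risk']}")
```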
Moreover, in 2011, we reported that VA’s and DOD’s joint strategic plan did not discuss how or when they proposed to identify and develop joint solutions to address their common health IT needs. Accordingly, we recommended that they revise the joint strategic plan to include information discussing their electronic health record system modernization efforts and how those efforts will address the departments’ common health care business needs. However, the departments’ most recent joint strategic plan, which was released in March 2013 and covers fiscal years 2013 through 2015, does not reflect their current approach. In July 2013, the VA/DOD Joint Executive Council tasked the IPO with preparing an addendum to the joint strategic plan that would reflect the departments’ revised joint activities, milestones, metrics, and time lines for creating an interoperable health record. However, while the departments have begun planning to separately modernize their electronic health record systems and have identified the need to make these systems interoperable, they have not revised their plan for doing so. According to VA and DOD officials, as of January 2014, a draft addendum to the joint strategic plan was being reviewed by the departments’ senior leaders, but the officials could not say when the addendum is to be finalized. Until VA and DOD provide a plan that reflects their current approach, the departments and their stakeholders may not have a shared understanding of how they intend to address their common health care business needs, including an interoperable electronic health record, going forward. We have previously reported on IT management barriers that prevented the departments from effectively collaborating to address their common health care system needs in the areas of enterprise architecture and IT investment management. We have followed the departments’ efforts to address these barriers and have found that important work still remains. In addition, the Interagency Program Office, established by the fiscal year 2008 NDAA to act as a single point of accountability for the departments’ development and implementation of interoperable health records, was to better position the departments to collaborate. Our work on interagency collaboration has shown that successful collaboration depends on a number of factors, including identifying resources, establishing compatible policies and procedures, and agreeing on clear lines of responsibility and accountability. We have also identified a variety of mechanisms that federal agencies use to implement interagency collaborative efforts, including interagency offices, to carry out joint activities on behalf of the participating departments. However, despite the direction given in the fiscal year 2008 NDAA, and the departments’ repeated efforts to re-charter the office, VA and DOD did not implement the IPO as an effective mechanism for interagency collaboration. Specifically, the departments did not provide the IPO with authority over essential resources or with the autonomy to establish key interagency processes for managing joint activities. Additionally, VA and DOD established a complex governance structure for the office, which weakened its ability to serve as the single point of accountability for the departments’ development and implementation of fully interoperable electronic health record systems or capabilities.
Moreover, the departments’ December 2013 re-chartering of the IPO significantly reduces the office’s role, responsibilities, and authority over VA and DOD’s joint health IT efforts, and raises concerns about the office’s ability to serve as an effective mechanism for interagency collaboration and the single point of accountability for the departments’ joint health IT efforts. In February 2011, we highlighted barriers that VA and DOD faced in addressing their common health IT needs. For example, although VA and DOD had taken steps toward developing and maintaining artifacts related to a joint health architecture (i.e., a description of business processes and supporting technologies), the architecture was not sufficiently mature to guide the departments’ joint health IT modernization efforts. Further, the departments had not established a joint process for selecting IT investments based on criteria that consider cost, benefit, schedule, and risk elements, limiting their ability to pursue joint health IT solutions that both meet their needs and provide better value and benefits to the government as a whole. We noted that without having these key IT management capabilities in place, the departments would continue to face barriers to identifying and implementing IT solutions that addressed their common needs. Accordingly, we identified several actions that the Secretaries of Defense and Veterans Affairs could take to overcome these barriers, including the following: Further develop the departments’ joint health architecture to include the planned future state and plan for transitioning from their current state to the next generation of electronic health record capabilities. Define and implement a process, including criteria that consider costs, benefits, schedule, and risks, for identifying and selecting joint IT investments to meet the departments’ common health care business needs. Officials from both VA and DOD agreed with these recommendations, and we have continued to monitor their actions toward implementing them. Nonetheless, the actions taken by VA and DOD have not been sufficient to overcome the departments’ long-standing barriers to collaborating on their joint health IT efforts, and important work remains. For example, VA and DOD have not further developed a joint health architecture that could guide their efforts to address their common health care business needs, as we recommended. The departments had undertaken certain actions, but these have been overtaken by events or are tangential to developing the architecture. For example, in January 2013 the IPO developed an Enterprise Architecture Management Plan to provide guidance for developing joint architecture products, identify architecture governance bodies and stakeholder responsibilities, and propose high-level time lines for architecture-related activities. However, according to VA and DOD officials, this plan is no longer operative because it does not reflect the departments’ decision to pursue separate electronic health record system modernization efforts. In addition, in December 2013 the departments revised the charter for the IPO, which describes the importance of identifying and adopting health IT standards to seamlessly integrate VA and DOD health care record data. The charter also specifies that the IPO is responsible for working with the departments’ Health Architecture Review Board to ensure that both departments are appropriately synchronized and coordinated. 
While these recent activities are peripherally related to development of the joint health architecture, VA and DOD have not yet developed architecture artifacts that describe their planned future state and how they intend to transition to that future state. Until the departments have an understanding of the common business processes and technologies that a joint health architecture can provide, they will continue to lack an essential tool for jointly addressing their common health IT needs. Further, VA and DOD initiated, but did not sustain, two courses of action that were potentially responsive to our recommendation to establish a joint IT investment management process. First, the departments established the IPO Advisory Board in October 2011 to monitor the iEHR program’s progress toward meeting cost, schedule, and performance milestones. However, the advisory board did not meet after June 2013 and was disbanded as a result of the departments’ decision to pursue separate modernizations of their electronic health record systems. Second, in August 2012 the departments established a working group under the Interagency Clinical Informatics Board to identify potential health IT investments for the departments to consider for joint adoption. However, the group has not met since June 2013 and, according to VA and DOD officials, its activities have been suspended while the departments continue to define their separate modernization efforts and their electronic health data interoperability needs. Moreover, the group was not involved in helping the departments identify and select the separate electronic health record investments VA and DOD now plan to undertake to meet their common health care business needs. Because VA and DOD have not implemented a process for identifying and selecting joint IT investments, the departments have not demonstrated that their approach to meeting their common health care business needs has considered the costs, benefits, schedule, and risks of planned investments. Best practices recognize that an office such as the IPO has the potential to serve as a mechanism for interagency collaboration, provided that the collaborating departments adopt a number of practices to sustain it. These include identifying resources, establishing compatible policies and procedures, and agreeing on clear lines of responsibility and accountability, including how the collaborative effort will be led. Best practices have also found that without this, the collaborating departments may not be willing to fully commit to the joint effort, and may also be unable to overcome other barriers, such as concerns about protecting jurisdiction over missions and control over resources. Despite VA and DOD’s pledge to work together to address their common health IT needs, the departments did not implement the IPO consistent with best practices for interagency collaboration and, in some cases, with the office’s charter. Specifically, the departments did not follow through with commitments made in the IPO’s 2011 charter related to its authority over the iEHR program’s budget, staffing, and interagency processes. In addition, the departments implemented the office with multiple layers of governance and oversight, which has resulted in unclear lines of authority and accountability for the departments’ collaborative health IT efforts. The departments have issued four charters since the IPO was established in law in 2008.
The IPO’s first charter was signed by the Under Secretary of Defense for Personnel and Readiness and Deputy Secretary of VA in January 2009. Both the second and third charters were signed by the Deputy Secretary of Defense and Deputy Secretary of VA in September 2009 and October 2011, respectively. Finally, the IPO’s fourth charter was signed in December 2013 by the Under Secretary of Defense for Acquisition, Technology, and Logistics and the VA Executive in Charge, Office of Information and Technology and Chief Information Officer. Budget: The IPO was not given full control over the budget for the departments’ joint health IT efforts. For example, in July 2011 a former director of the office testified that the IPO’s 2009 charter had established a modest role for the office, and thus, the office did not have control over the budget for those initiatives for which it was responsible; rather, this control remained with VA and DOD. When the departments re-chartered the IPO in 2011, they included language related to the office having budgetary control over the iEHR program. For example, this charter gave the IPO Director the authority to manage budgeting and finances related to the planning and acquisition of the iEHR capabilities. In addition, the charter provided the director with the authority to develop and propose interagency budget submissions for iEHR to the departments. Nevertheless, even with these revisions to its charter, the IPO was not fully empowered to execute funds related to iEHR because the departments have different processes for budgeting IT programs and, in VA’s case, for releasing funds for IT development. According to the Deputy Chief Management Officer, DOD had a dedicated fund for the iEHR program, which the IPO Director had authority to execute. However, VA funded the iEHR program through several funds, including IT appropriations that VA officials asserted could only be executed by the Chief Information Officer (CIO). As a result, the IPO Director was required to request funding for iEHR-related activities from VA on a project-by-project basis. According to one of the iEHR program managers, although this process did not necessarily cause delays to iEHR projects, it was a source of continuous frustration for the IPO Director because it did not provide the expected level of control over the program’s budget, as described in the office’s charter. Staffing: When VA and DOD designated the IPO to lead the iEHR program in 2011, they recognized that the office would need to be expanded to accommodate its new responsibilities. To this end, the departments and the IPO determined that the office would require a significant increase in personnel—more than 7 times the number of staff originally allotted to the office by VA and DOD—to complete hiring under the office’s 2011 charter. However, while each of the departments provided personnel to the IPO through reassignments and short-term details of personnel, the departments did not fully staff the office as planned. For example, a staffing report from early November 2012 showed that, at that time, the IPO was staffed at about 60 percent. Specifically, while the office consisted of 101 reassigned VA and DOD staff and 43 detailed staff, 95 positions remained vacant. Further, in January 2013, the IPO Director stated that the office was staffed at approximately 62 percent and that hiring additional staff remained one of its biggest challenges, partly due to a hiring freeze within the TRICARE Management Activity.
In addition, VA’s iEHR program manager noted that recruiting staff for the IPO was a persistent challenge because the departments required health IT professionals with specialized technical expertise. Further, the official noted that VA faced a disadvantage in hiring qualified candidates because it had to compete with private-sector companies and also had decided to generally limit the hiring pool to candidates in the Washington, D.C., area. Interagency processes: Within their respective departments, VA and DOD have established their own processes for managing acquisitions and contracting. Although the IPO had a contracting officer on staff at the time of our review, all of the contracts for work conducted for the iEHR program had been issued and managed through existing VA and DOD contracting offices, including VA’s Technology Acquisition Center, the Space and Naval Warfare Systems Command, and the United States Army Medical Research Acquisition Activity. According to VA’s Assistant Secretary for Information and Technology, this was an inefficient approach: the decision created an undue burden on the iEHR program office because it had to meet the requirements of two different contracting and acquisition processes. For example, according to iEHR program documentation, the office would have had to develop over 1,300 documents for one of the planned iEHR increments composed of 14 projects in order to comply with both departments’ acquisition requirements. Although the iEHR program was redirected before the IPO made significant progress toward acquiring joint EHR capabilities, this provides an example of one area where the departments were unable to compromise on their own processes in order to further their common health IT goals. Governance: The IPO’s 2011 charter provided DOD’s Deputy Chief Management Officer and VA’s Assistant Secretary for Information and Technology with operational oversight of the IPO. In addition, the charter cited the Assistant Secretary of Defense for Health Affairs and the Under Secretary of Defense for Personnel and Readiness as having authority, direction, and control over the IPO, due to the office’s organizational placement within DOD for the purposes of administrative management and supervision. (On October 1, 2013, DOD established the Defense Health Agency to manage the activities of the Military Health System, including the TRICARE Management Activity.) Under this arrangement, the IPO Director lacked final decision-making authority under the office’s charter, and was expected to seek consensus from VA and DOD supervising officials or the IPO’s governance organizations before proceeding. Conversely, one of the IPO’s governing bodies raised concerns about the office’s willingness to appropriately involve it in the iEHR program. Specifically, the co-chairs of the Health Architecture Review Board raised concerns to the Health Executive Committee that the IPO had not been receptive to involving the board throughout the design and acquisition process for the iEHR program. According to these officials, the board’s inability to participate throughout the process resulted in unnecessary delays to the IT acquisition process. In a December 2012 assessment prepared to help define the iEHR program’s new direction, VA and DOD officials cited governance and oversight as challenges to the program, including group decision making. In an effort to mitigate this problem, the departments chose to shift decision-making authority away from the IPO Director and in January 2013 established an executive committee of two VA and two DOD executive officials to oversee the IPO and make decisions for the iEHR program.
Given the changes that VA and DOD have made to their approach for developing an interoperable electronic health record, it remains to be seen how the departments will proceed with implementing the IPO and to what extent the office will be leveraged as a mechanism for effective interagency collaboration. Nevertheless, until VA and DOD address these long-standing issues, their ability to effectively collaborate through the IPO on their joint health IT efforts will be limited. As stated earlier, the fiscal year 2008 NDAA established the IPO under the direction, supervision, and control of both the Secretaries of VA and Defense to serve as the single point of accountability for the departments’ development and implementation of interoperable electronic health records. The IPO was to better position the departments to collaborate on joint health IT initiatives. However, the departments recently made decisions that reduced the IPO’s role, responsibilities, and authority over the departments’ joint health IT efforts, jeopardizing its ability to serve as the single point of accountability for the development and implementation of interoperable electronic health records. In December 2013, VA and DOD revised the IPO’s charter, thus reducing the office’s responsibilities from leading and managing all aspects of the iEHR program to overseeing the departments’ adoption of health data standards for ensuring integration of health data between their modernized health IT systems. For example, the IPO’s 2011 charter authorized the office to lead and manage all interagency planning, programming and budgeting, contracting, acquisition, data strategy and management (including identifying standards for interoperability), testing, and implementation for the iEHR program. In contrast, under the revised charter, the IPO is to engage with national and international health standards-setting organizations to ensure their resulting standards meet the needs of VA and DOD; identify data and messaging standards for VA and DOD health IT solutions; and monitor and report on the departments’ use of and compliance with the adopted standards. Moreover, the revised charter does not acknowledge or address the office’s long-standing weaknesses related to budgetary control, staffing, developing interagency processes, and governance. Specifically: Although the 2013 charter describes how the departments generally intend to share the costs of their planned interoperability work, VA and DOD have not explicitly addressed whether or not the IPO Director has budgetary control over the office’s initiatives. As written, the charter suggests that this authority will remain with the departments. Similar to the 2011 charter, the 2013 charter states that the departments will rely on a combination of reassigned VA and DOD personnel and detailees to fill the IPO’s positions. As of early January 2014, VA and DOD officials stated that they were in the process of transitioning IPO personnel back to their respective departments, and were identifying individuals to serve as leads within each department for their joint interoperability projects. However, although these officials stated that they anticipate the office will require significantly fewer personnel than expected under the iEHR program, staffing for the IPO remains uncertain. Moreover, the departments have not yet addressed how to competitively recruit and retain personnel with the required technical expertise to develop and implement an interoperable electronic health record. 
- The 2013 charter does not explicitly address the extent to which the IPO has the authority to develop interagency processes to fulfill its mission, although such authority is implied in the office's responsibilities. For example, the charter states that the IPO will work with the Health Architecture Review Board "to ensure that both departments are appropriately synchronized and coordinated"; yet, according to the co-chairs of this board, the details of this process have not been discussed or defined.

- In addition, despite the IPO's reduced role and responsibilities, the 2013 charter maintains a complex governance structure. For example, the charter states that the IPO Director reports through the DHMS Program Executive Officer to the Under Secretary of Defense (AT&L), while the IPO Deputy Director reports through the IPO Director to the VA Assistant Secretary for Information and Technology and CIO. However, the charter does not describe whether or how the IPO Director reports to VA leadership. Further, the charter identifies numerous executive-level individuals and organizations to provide direction, oversight, and guidance to the IPO, including the Joint Executive Committee, the Under Secretary of Defense (AT&L), the VA CIO, and a DOD/VA Senior Stakeholder Group that will include functional, technical, acquisition, and resource leadership from both departments. Given this extensive level of management and oversight, it is unclear to what extent the IPO leadership will have decision-making authority over the office's interoperability efforts.

Further, the IPO's 2013 charter maintains that the office will remain the single point of accountability for the development and implementation of interoperable electronic health records between VA and DOD. However, in addition to reducing the IPO's role, responsibilities, and authority over these efforts in its 2013 charter, the departments have identified other offices to execute health data interoperability initiatives formerly managed by the IPO. For example, in January 2014, the Under Secretary of Defense (AT&L) decided to consolidate the execution of all DOD IT health data-sharing projects formerly managed by the IPO and the Defense Health Agency within a new program office under the DHMS Program Executive Officer. These projects include VLER Health, ongoing data federation efforts, and longtime data-sharing initiatives with VA, including the Federal Health Information Exchange, the Bidirectional Health Information Exchange, and the Clinical Data Repository/Health Data Repository. According to the decision memo, resources associated with these health data interoperability efforts will be reassigned from the IPO and the Defense Health Agency to the DHMSM program. Similarly, in January 2014, the Veterans Health Administration's Chief Medical Informatics Officer stated that interoperability programs are in the process of being consolidated under the Office of Health Informatics and Analytics and will be managed along with VA's Office of Information and Technology. Overall, a disconnect exists between the IPO's responsibility to serve as VA and DOD's single point of accountability for their health data interoperability efforts and the role described in the office's December 2013 charter.
When asked how the IPO will be able to serve as the single point of accountability for the departments' joint health IT efforts given these changes, the DHMS Program Executive Officer stated that he did not think the changes affected the IPO's role at all, because the office is responsible for ensuring that the departments adopt a sound technical approach for interoperability. Nevertheless, VA's and DOD's decisions to diminish the IPO's role and move responsibilities for interoperability elsewhere within their respective departments jeopardize the office's ability to serve as the departments' single point of accountability for the development and implementation of interoperable electronic health records. Moreover, the departments' recent actions raise concerns about their intention to use the IPO as a mechanism for collaboration going forward. VA and DOD lost valuable time toward providing service members, veterans, and their health care providers with a long-awaited interoperable electronic health record by agreeing to initiate joint development of a single system in March 2011, and then deciding in February 2013 that the endeavor was too expensive and that the planned system would take too long to develop. The departments are now in the process of planning to use separate systems—VA intends to modernize its existing VistA system and DOD plans to acquire a commercially available system—while they are also to jointly develop capabilities to provide interoperability between the systems. In abandoning the single-system approach, the departments asserted that their new, multiple-system approach will be less expensive and faster. However, the departments' assertion is questionable because they have not developed cost and schedule estimates to substantiate their claim or justify their decision. In the absence of credible analyses to guide decisions about how to cost-effectively and expeditiously develop the interoperable electronic health record needed to provide service members and veterans with the best possible care, VA and DOD have fallen back on the divergent approaches that each department has determined to be best for it—VA intends to modernize VistA, and DOD expects to acquire a new commercially available system. While the departments have begun planning for these separate systems, they have yet to develop plans describing what a future interoperable health record will consist of or how, when, and at what cost it will be achieved. Further, even though VA and DOD have determined that their electronic health record system needs overlap, the departments have neither removed long-standing barriers to working together to address their common needs nor positioned the Interagency Program Office for effective collaboration going forward. Their slow pace in addressing recommendations we made to address these barriers has hindered their efforts to identify and implement IT solutions that meet their common needs. Further, the departments' failure to implement the IPO consistent with effective collaboration practices may hamper its efforts to serve as a focal point for future collaboration. Moreover, the departments' recent decisions to move certain interoperability responsibilities to other offices within VA and DOD may further undermine the IPO's effectiveness.
Because the IPO is expected to play a key role—establishing interoperability between VA's modernized VistA and DOD's to-be-acquired system—it is important that the departments take steps to better implement the office as an effective mechanism for collaboration and the single point of accountability for their joint health IT efforts. To bring transparency and credibility to the assertion by the Secretaries of Veterans Affairs and Defense that VA and DOD's current approach to achieving an interoperable electronic health record will cost less and take less time than the previous single-system approach, we recommend that the secretaries:

- develop a cost and schedule estimate for their current approach, from the perspective of both departments, that includes the estimated cost and schedule of VA's VistA Evolution program, DOD's DHMSM program, and the departments' joint efforts to achieve interoperability between the two systems;

- compare the cost and schedule estimates of the departments' current and previous (i.e., single-system) approaches;

- if the results of the comparison indicate that the departments' current approach is estimated to cost more and/or take longer than the single-system approach, provide a rationale for pursuing the current approach despite its higher cost and/or longer schedule; and

- report the cost and schedule estimates of the current and previous approaches, the results of the comparison of the estimates, and the reasons (if applicable) for pursuing a more costly or time-consuming approach to VA's and DOD's congressional authorizing and appropriations committees.

To better position VA and DOD to achieve an interoperable electronic health record, we recommend that the Secretaries of Veterans Affairs and Defense develop a plan that, at a minimum, describes:

- the clinical domains that the interoperable electronic health record will address;

- a schedule for implementing the interoperable record at each VA and DOD location;

- the estimated cost of each major component (i.e., VistA Evolution, DHMSM, etc.) and the total cost of the departments' interoperability efforts;

- the organizations within VA and DOD that are involved in acquiring, developing, and implementing the record, as well as the roles and responsibilities of these organizations;

- major risks to the departments' interoperability efforts and mitigation plans for those risks; and

- the departments' approach to defining, measuring, tracking, and reporting progress toward achieving expected performance (i.e., benefits and results) of the interoperable record.

To better position the Interagency Program Office for effective collaboration between VA and DOD, and to efficiently and effectively fulfill the office's stated purpose of functioning as the single point of accountability for achieving interoperability between the departments' electronic health record systems, we recommend that the Secretaries of Veterans Affairs and Defense ensure that the IPO has authority over dedicated resources (e.g., budget and staff), as well as the authority to develop interagency processes and to make decisions about the departments' interoperability efforts. We received written comments on a draft of this report (reprinted in appendix II), signed by the VA Chief of Staff and the Acting Under Secretary of Defense for Personnel and Readiness. In their comments, the departments concurred with our recommendations and noted actions that were being taken.
In particular, with regard to our recommendation that VA and DOD develop cost and schedule estimates for their current approach to creating an interoperable electronic health record, and then compare them with the estimated cost and schedule for the iEHR approach, both departments said they have these actions under way and that initial comparisons have indicated that their current approach will be more cost-effective. Further, with regard to our recommendation calling for a detailed interoperability plan, the departments stated that they are developing such a plan. Lastly, with respect to our recommendation to strengthen the IPO for effective collaboration, the departments stated that the IPO will remain the single point of accountability for achieving interoperability between VA's and DOD's electronic health record systems. If the departments fully implement our recommendations, they should be better positioned to economically and efficiently achieve the interoperable electronic health record they have long pursued. VA and DOD also provided technical comments on the draft report, which we incorporated as appropriate. We are sending copies of this report to appropriate congressional committees, the Secretary of Veterans Affairs, the Secretary of Defense, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have questions about this report, please contact me at (202) 512-6304 or melvinv@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. The objectives of this study were to (1) describe changes the Department of Defense (DOD) and Department of Veterans Affairs (VA) have made to the Integrated Electronic Health Record (iEHR) program since its inception, and evaluate the departments' current plans for the program and (2) determine whether the departments, including the DOD/VA Interagency Program Office (IPO), are effectively collaborating on management of the iEHR program. To describe the changes to the iEHR program since its inception, we obtained and reviewed minutes and briefing slides from meetings held between the VA and DOD Secretaries between February 2011 and February 2013. In addition, we obtained and reviewed DOD acquisition decision memorandums issued between 2011 and 2013, as well as minutes and briefing slides from meetings of the IPO Advisory Board between April 2012 and April 2013. We also reviewed iEHR program documentation, including the business case, program management plan, integrated program-level requirements document, the June 2013 iEHR expenditure plan, and program management review briefings. To evaluate the current plans for the program, we reviewed documentation and plans supporting efforts to complete four iEHR near-term projects, including iEHR project briefing slides and iEHR program management review briefings. We obtained information on the departments' new health IT modernization efforts, VA's VistA Evolution program and DOD's Healthcare Management System Modernization program, through interviews with relevant officials. We also attended three iEHR and health information exchange summits in Washington, D.C., and Alexandria, Virginia. In addition, we compared statements made and documentation the departments provided to support the shift in the program strategy for iEHR against effective management practices.
To determine the effectiveness of collaboration by VA, DOD, and the IPO, we identified and analyzed the departments' actions in response to recommendations we previously made to address barriers VA and DOD faced in addressing their common health IT needs. We also analyzed the 2011 and 2013 IPO charters and compared them to the requirements that were established for the IPO in the National Defense Authorization Act for Fiscal Year 2008. We focused our analysis in the areas of funding, staffing, and interagency processes and compared written and verbal information on the departments' implementation of the IPO against best practices for facilitating interagency collaboration. We also analyzed the governance structure for the IPO and the iEHR program, including organizational charts and charters that established the reporting structure between the IPO, VA and DOD, and several interagency organizations designated to provide oversight to the iEHR program. To better understand the decision making for the program, we analyzed briefing slides and minutes from the secretaries' quarterly meetings and the IPO Advisory Board's biweekly meetings, as well as iEHR-related decision memorandums issued by the departments. We supplemented our analyses with interviews of VA, DOD, and IPO officials with knowledge of the iEHR program, including VA's Under Secretary for Health, VA's Assistant Secretary for Information and Technology and Chief Information Officer, DOD's Assistant Secretary of Defense for Health Affairs, DOD's Deputy Chief Management Officer, and the IPO Director. We conducted this performance audit from September 2012 to February 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Mark T. Bird (Assistant Director), Heather A. Collins, Kelly R. Dodson, Lee McCracken, Brandon S. Pettis, Umesh Thakkar, and Eric Trout made key contributions to this report.
VA and DOD operate two of the nation's largest health care systems, serving approximately 16 million veterans and active duty service members, and their beneficiaries, at total annual costs of over $100 billion. The departments have recognized the importance of developing capabilities for sharing electronic patient health information and have worked since 1998 to develop such capabilities. In February 2011, VA and DOD initiated a program to develop a single, common electronic health record system—iEHR—to replace their existing health record systems. This program was to be managed by the IPO and implemented by 2017. However, the departments made significant changes to the program in 2013. GAO was asked to review the iEHR program. This report (1) describes changes to the program and evaluates the departments' current plans and (2) determines whether the departments are effectively collaborating on management of the program. GAO reviewed relevant program documents and interviewed agency officials. The Departments of Veterans Affairs (VA) and Defense (DOD) abandoned their plans to develop an integrated electronic health record (iEHR) system and are instead pursuing separate efforts to modernize or replace their existing systems in an attempt to create an interoperable electronic health record. Specifically, in February 2013, the secretaries cited challenges in the cost and schedule for developing the single, integrated system and announced that each department would focus instead on either building or acquiring similar core sets of electronic health record capabilities, then ensuring interoperability between them. However, VA and DOD have not substantiated their claims that the current approach will be less expensive and more timely than the single-system approach. Major investment decisions—including terminating or significantly restructuring an ongoing program—should be justified using analyses that compare the costs and schedules of alternative proposals. Yet, the departments have not developed revised cost and schedule estimates for their new modernization efforts and any additional efforts needed to achieve interoperability between the new systems, and compared them with the relevant estimates for their former approach. In the absence of such a comparison, VA and DOD lack assurance that they are pursuing the most cost-effective and timely course of action for delivering the fully interoperable electronic health record the departments have long pursued to provide the best possible care for service members and veterans. The departments have initiated their separate system efforts. VA intends to deploy clinical capabilities of its new system at two locations by September 2014, and DOD has set a goal of beginning deployment of its new system by the end of fiscal year 2016. However, the departments have yet to update their joint strategic plan to reflect the new approach or to disclose what the interoperable electronic health record will consist of, as well as how, when, and at what cost it will be achieved. Without plans that include the scope, lines of responsibility, resource requirements, and an estimated schedule for achieving an interoperable health record, VA, DOD, and their stakeholders may not have a shared understanding of how the departments intend to address their common health care business needs. VA and DOD have not addressed management barriers to effective collaboration on their joint health information technology (IT) efforts. 
As GAO previously reported, the departments faced barriers to effective collaboration in the areas of enterprise architecture and IT investment management, among others. However, they have yet to address these barriers by, for example, developing a joint health care architecture or a joint IT investment management process to guide their collaboration. Further, the Interagency Program Office (IPO), established by law to act as a single point of accountability for the departments' development of interoperable health records, was to better position the departments to collaborate; but the departments have not implemented the IPO in a manner consistent with effective collaboration. For example, the IPO lacks effective control over essential resources such as funding and staffing. In addition, recent decisions by the departments have diffused responsibility for achieving integrated health records, potentially undermining the IPO's intended role as the point of accountability. Providing the IPO with control over essential resources and clearer lines of authority would better position it for effective collaboration. GAO recommends that VA and DOD develop and compare the estimated cost and schedule of their current and previous approaches to creating an interoperable electronic health record and, if applicable, provide a rationale for pursuing a more costly or time-consuming approach. GAO also recommends that the departments develop plans for interoperability and ensure the IPO has control over needed resources and clearer lines of authority. VA and DOD concurred with GAO's recommendations.
Historically, the census has focused on counting people stateside, although various overseas population groups have been included in the census at different times. For example, as shown in table 1, over the last century the Bureau has generally included federally affiliated individuals and their dependents but, except for the 1960 and 1970 Censuses, has excluded private citizens such as retirees, students, and business people. In addition, only the 1970, 1990, and 2000 Censuses used counts of federally affiliated personnel for purposes of apportioning Congress. As a result, although estimates exceed four million people, the precise number of Americans residing abroad is unknown. The Constitution and federal statutes give the Bureau discretion over whether to count Americans overseas. Thus, Congress would need to enact legislation if it wanted to require the Bureau to include overseas Americans in the 2010 Census. Nevertheless, in recent years, the Bureau's policy of excluding private citizens from the census has been questioned. For example, advocates of an overseas census claim that better data on this population group would be useful for a variety of policy-making and other purposes. Moreover, the overseas population could, in some instances, affect congressional apportionment. More generally, the rights and obligations of overseas Americans under various federal programs vary from activity to activity. For example, U.S. citizens residing overseas are taxed on their worldwide income, can vote in federal elections, and can receive Social Security benefits, but they are generally not entitled to Medicare benefits or, if they reside outside of the United States for more than 30 days, Supplemental Security Income. The initial results of the overseas census test suggest that counting Americans abroad on a global basis would require enormous resources and still not yield data that are comparable in quality to the stateside count. Indeed, participation in the test was low and relatively costly to obtain, and on-site supervision of field activities proved difficult. The test made clear that the current approach to counting Americans abroad—a voluntary survey that relies largely on marketing to get people to participate—by itself cannot secure a successful head count. To promote the overseas census test, the Bureau relied on third parties—American organizations and businesses in the three countries—to communicate to their members and/or customers that an overseas enumeration of Americans was taking place and to make available to U.S. citizens either the paper questionnaire or the Web site address where Americans could complete their forms via the Internet. Still, the response to the overseas census test was disappointing. The 5,390 responses the Bureau received from the three test countries were far below what the Bureau planned for when it printed the questionnaires. While the Bureau ordered 520,000 paper forms for the three test sites, only 1,783 census forms were completed and returned; of these, 35 were Spanish-language forms that were made available in Mexico. The remaining 3,607 responses were completed via the Internet. Table 2 shows the number of census questionnaires that the Bureau printed for each country and the number of responses it actually received, both in the paper format and via the Internet. In May, to help boost the lagging participation, the Bureau initiated a paid advertising campaign that included print and Internet ads in France, and print and radio ads in Mexico. (See fig. 1 for examples of the ads used in the paid advertising campaign.)
According to a Bureau official, the ads had only a slight impact on response levels. Moreover, the Bureau's experience during the 2000 Census suggests that securing a higher return rate on an overseas census would be an enormous challenge and may not be feasible. The Bureau spent $374 million on a comprehensive marketing, communications, and partnership effort for the 2000 Census. The campaign began in the fall of 1999 and continued past Census Day (April 1, 2000). Specific elements included television, radio, and other mass media advertising; promotions and special events; and a census-in-schools program. Thus, over a period of several months, the American public was on the receiving end of a steady drumbeat of advertising aimed at publicizing the census and motivating people to respond. This endeavor, in concert with an ambitious partnership effort with governmental, private, social service, and other organizations, helped produce a return rate of 72 percent. Replicating this level of effort on a worldwide basis would be impractical, and still would not produce a complete count. Indeed, even after the Bureau's aggressive marketing effort in 2000, it still had to follow up with about 42 million households that did not return their census forms. Because the overseas test had such low participation levels, the unit cost of each response was high—roughly $1,450 for each returned questionnaire, based on the $7.8 million the Bureau spent preparing for, implementing, and evaluating the 2004 overseas test. Although the two surveys are not directly comparable because the 2000 Census costs covered operations not used in the overseas test, the unit cost of the 2000 Census, which was the most expensive in our nation's history, was about $56 per household. Not surprisingly, as with any operation as complex as the overseas enumeration test, various unforeseen problems arose. The difficulties included grappling with country-specific issues and overseeing the contractor responsible for raising public awareness of the census at the three test sites. While the Bureau was able to address them, it is doubtful that the Bureau would have the ability to do so in 2010 should there be a full overseas enumeration. The Bureau encountered a variety of implementation problems at each of the test sites. Although such difficulties are to be expected given the magnitude of the Bureau's task, they underscore the fact that there would be no economy of scale in ramping up to a full enumeration of Americans abroad. In fact, just the opposite would be true. Because of the inevitability of country-specific problems, rather than conducting a single overseas count based on a standard set of rules and procedures (as is the case with the stateside census), the Bureau might end up administering what amounts to dozens of separate censuses—one for each of the countries it enumerates—each with its own set of procedures adapted to each country's unique requirements. The time and resources required to do this would likely be overwhelming and detract from the Bureau's stateside efforts. For example, addressing French privacy laws that restrict the collection of personal data such as race and ethnic information took a considerable amount of negotiation between the two countries, and was ultimately resolved only after a formal agreement was developed.
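As a rough check on the unit-cost figure cited above (a back-of-the-envelope calculation, assuming the full $7.8 million in test costs is spread evenly across all 5,390 responses received):

\[
\text{cost per response} \approx \frac{\$7{,}800{,}000}{5{,}390\ \text{responses}} \approx \$1{,}447 \approx \$1{,}450
\]

By this measure, the overseas test cost on the order of 25 times the roughly $56 per household of the 2000 Census, although, as noted above, the two figures are not directly comparable.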
Country-specific problems arose in Kuwait as well, where delivery of the census materials was delayed by several weeks because the materials were accidentally addressed to the wrong contractor. The Bureau hired a public relations firm to help market participation in the test. Its responsibilities included identifying private companies, religious institutions, service organizations, and other entities that have contact with Americans abroad and could thus help publicize the census test. Although the public relations firm appeared to go to great lengths to enlist the participation of these various entities—soliciting the support of hundreds of organizations in the three countries—the test revealed the difficulties of adequately overseeing a contractor operating in multiple sites overseas. For example, the public relations firm's tracking system indicated that around 440 entities had agreed to perform one or more types of promotional activities. However, our on-site inspections of several of the organizations in Paris, France, and Guadalajara, Mexico, that had agreed to display the census materials and/or distribute the questionnaires uncovered several glitches. Of the 36 organizations we visited that were supposed to be displaying promotional literature, we found the information was available at only 15. In those cases, as shown in figure 2, the materials were generally displayed in prominent locations, typically on a table with posters on a nearby wall. However, at the other 21 sites we visited, we found various discrepancies between what the public relations firm indicated had occurred and what actually took place. For example, while the firm's tracking system indicated that questionnaires would be available at a restaurant and an English-language bookstore in Guadalajara, none were present. Likewise, in Paris, we went to several locations where the tracking system indicated that census information would be available. None was. In fact, at some of these sites, not only was there no information about the census, but there was no indication that the organization we were looking for resided at the address we had from the database. The Bureau's longstanding experience in counting the nation's stateside population has shown that specific operations and procedures together form the building blocks of a successful census. The design of the overseas test—a voluntary survey that relies heavily on marketing to secure a complete count—lacks these building blocks, largely because they are impractical to perform in other countries. Thus, the disappointing test results are not surprising. What's more, refining this basic design or adding more resources would probably not produce substantially better outcomes. The building blocks include the following:

- Mandatory participation: Under federal law, all persons residing in the United States regardless of citizenship status are required to respond to the stateside decennial census. By contrast, participation in the overseas test was optional. The Bureau has found that response rates to mandatory surveys are higher than the response rates to voluntary surveys. This in turn yields more complete data and helps hold down costs.

- Early agreement on design: Both Congress and the Bureau need to agree on the fundamental design of the overseas census to help ensure adequate planning, testing, and funding levels. The design of the census is driven in large part by the purposes for which the data will be used.
Currently, no decisions have been made on whether the overseas data will be used for purposes of congressional apportionment, redistricting, allocating federal funds, or other applications. Some applications, such as apportionment, would require precise population counts and a very rigorous design that parallels the stateside count. Other applications, however, could get by with less precision and thus a less stringent approach.

- A complete and accurate address list: The cornerstone of a successful census is a quality address list. For the stateside census, the Bureau goes to great lengths to develop what is essentially an inventory of all known living quarters in the United States, including sending census workers to canvass every street in the nation to verify addresses. The Bureau uses this information to deliver questionnaires, follow up with nonrespondents, determine vacancies, and identify households the Bureau may have missed or counted more than once. Because it would be impractical to develop an accurate address list for overseas Americans, these operations would be impossible and the quality of the data would suffer as a result.

- Ability to detect invalid returns: Ensuring the integrity of the census data requires the Bureau to have a mechanism to screen out invalid responses. Stateside, the Bureau does this by associating an identification number on the questionnaire to a specific address in the Bureau's address list, as well as by field verification. However, the Bureau's current approach to counting overseas Americans is unable to determine whether or not a respondent does in fact reside abroad. So long as a respondent provides certain pieces of information on the census questionnaire, it will be eligible for further processing. The Bureau is unable to confirm the point of origin for questionnaires completed on the Internet, and postmarks on a paper questionnaire only tell the location from which a form was mailed, not the place of residence of the respondent. The Bureau has acknowledged that ensuring such validity might be all but impossible for any reasonable level of effort and funding.

- Ability to follow up with nonrespondents: Because participation in the decennial census is mandatory, the Bureau sends enumerators to those households that do not return their questionnaires. In cases where household members cannot be contacted or refuse to answer all or part of a census questionnaire, enumerators are to obtain data from neighbors, a building manager, or other nonhousehold member presumed to know about its residents. The Bureau also employs statistical techniques to impute data when it lacks complete information on a household. As noted above, because the Bureau lacks an address list of overseas Americans, it is unable to follow up with nonrespondents or impute information on missing households, and thus would never be able to obtain a complete count of overseas Americans.

- Cost model for estimating needed resources: The Bureau uses a cost model and other baseline data to help it estimate the resources it needs to conduct the stateside census. Key assumptions such as response levels and workload are developed based on the Bureau's experience in counting people decade after decade. However, the Bureau has only a handful of data points with which to gauge the resources necessary for an overseas census, and the tests it plans to conduct will be of only limited value in modeling the costs of conducting a worldwide enumeration in 2010.
The lack of baseline data could cause the Bureau to over- or underestimate the staffing, budget, and other requirements of an overseas count.

- Targeted and aggressive marketing campaign: The key to raising public awareness of the census is an intensive outreach and promotion campaign. As noted previously, the Bureau's marketing efforts for the 2000 Census were far-reaching and consisted of more than 250 ads in 17 languages that were part of an effort to reach every household, including those in historically undercounted populations. Replicating this level of effort on a global scale would be both difficult and expensive, and the Bureau has no plans to do so.

- Field infrastructure to execute the census and deal with problems: The Bureau had a vast network of 12 regional offices and 511 local census offices to implement various operations for the 2000 Census. This decentralized structure enabled the Bureau to carry out a number of activities to help ensure a more complete and accurate count, as well as deal with problems when they arose. Moreover, local census offices are an important source of intelligence on the various enumeration obstacles the Bureau faces on the ground. The absence of a field infrastructure for an overseas census means that the Bureau would have to rely heavily on contractors to conduct the enumeration and manage the entire enterprise from its headquarters in Suitland, Maryland.

- Ability to measure coverage and accuracy: Since 1980, the Bureau has measured the quality of the decennial census using statistical methods to estimate the magnitude of any errors. The Bureau reports these estimates by specific ethnic, racial, and other groups. For methodological reasons, similar estimates cannot be generated for an overseas census. As a result, the quality of the overseas count, and thus whether the numbers should be used for specific purposes, could not be accurately determined.

So far I've described the logistical hurdles to counting overseas citizens as part of the census. However, there are a series of policy and conceptual questions that need to be addressed as well. They include:

- Who should be counted? U.S. citizens only? Foreign-born spouses? Children born overseas? Dual citizens? American citizens who have no intention of ever returning to the United States? Naturalized citizens?

- What determines residency in another country? To determine who should be included in the stateside census, the Bureau applies its "usual residence rule," which it defines as the place where a person lives and sleeps most of the time. People who are temporarily absent from that place are still counted as residing there. One's usual residence is not necessarily the same as one's voting residence or legal residence. The Bureau has developed guidelines, which it prints on the stateside census form, to help people determine who should and should not be included. The Bureau has not yet developed similar guidance for American citizens overseas. Thus, what should determine residency in another country? Duration of stay? Legal status? Should students spending a semester abroad but who maintain a permanent residence stateside be counted overseas? What about people on business or personal trips who maintain stateside homes? Quality data will require residence rules that are transparent, clearly defined, and consistently applied.

- How should overseas Americans be assigned to individual states?
For certain purposes, such as apportioning Congress, the Bureau would need to assign overseas Americans to a particular state. Should one's state be determined by the state claimed for income tax purposes? Where one is registered to vote? The last state of residence before going overseas? These and other options all have limitations that would need to be addressed.

- How should the population data be used? To apportion Congress? To redistrict Congress? To allocate federal funds? To provide a count of overseas Americans only for general informational purposes? The answers to these questions have significant implications for the level of precision needed for the data and, ultimately, the enumeration methodology.

Congress will need to decide whether or not to count overseas Americans, and how the results should be used. These decisions, in turn, will drive the methodology for counting this population group. As I've already mentioned, no decisions have been made on whether the overseas data will be used for purposes of congressional apportionment, redistricting, allocating federal funds, or other applications. Some uses, such as apportionment, would require precise population counts and a very rigorous design that parallels the stateside count. Other applications do not need as much precision, and thus a less rigorous approach would suffice. The basis for these determinations needs to be sound research on the cost, quality of data, and logistical feasibility of the various options. Possibilities include counting Americans via a separate survey; administrative records, such as passport and voter registration forms; and/or records maintained by other countries, such as published census records and work permits. The Bureau's initial research has shown that each of these options has coverage, accuracy, and accessibility issues, and some might introduce systemic biases into the data. Far more extensive research would be needed to determine the feasibility of these or other potential approaches. In summary, the 2004 overseas census test was an extremely valuable exercise in that it showed how counting Americans abroad as an integral part of the decennial census would not be cost-effective. Indeed, the tools and resources available to the Bureau cannot successfully overcome the inherent barriers to counting this population group and produce data comparable to the stateside enumeration. Further, an overseas census would introduce new resource demands, risks, and uncertainties to a stateside endeavor that is already costly, complex, and controversial. Securing a successful count of Americans in Vienna, Virginia, is challenging enough; a complete count of Americans in Vienna, Austria, and in scores of other countries around the globe would only add to the difficulties facing the Bureau as it looks toward the next national head count. Consequently, the report we released today suggests that Congress should continue to fund the evaluation of the 2004 test as planned, but eliminate funding for any additional tests related to counting Americans abroad as part of the decennial census. However, this is not to say that overseas citizens should not be counted.
Indeed, to the extent that Congress desires better data on the number and characteristics of Americans abroad for various policy-making and other nonapportionment purposes that do not need as much precision, such information does not necessarily need to be collected as part of the decennial census and could, in fact, be acquired through a separate survey or other means. To facilitate congressional decision making on this issue, our report recommends that the Bureau, in consultation with Congress, research such options as counting people via a separate survey; administrative records, such as passport data; and/or data exchanges with other countries' statistical agencies, subject to applicable confidentiality considerations. Once Congress knows the tradeoffs of these various alternatives, it will be better positioned to provide the Bureau with the direction it needs, so that the Bureau could then develop and test an approach that meets congressional requirements at reasonable resource levels. The Bureau agreed with our conclusions and recommendations. Successfully counting the nation's population is a near-daunting task. As the countdown to the next census approaches the 5-year mark, the question of enumerating Americans overseas is just one of a number of issues the Bureau needs to resolve. On behalf of the Subcommittee, we will continue to assess the Bureau's progress in planning and implementing the 2010 Census and identify opportunities to increase its cost-effectiveness. Mr. Chairman, this concludes my prepared statement. I would be pleased to respond to any questions that you or other Members of the Subcommittee might have. For further information regarding this testimony, please contact Patricia A. Dalton at (202) 512-6806 or by e-mail at daltonp@gao.gov. Individuals making contributions to this testimony included Jennifer Cook, Robert Goldenkoff, Ellen Grady, Andrea Levine, Lisa Pearson, and Timothy Wexler. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The U.S. Census Bureau (Bureau) has typically excluded private citizens residing abroad from the census, but has included overseas members of the military, federal civilian employees, and their dependents (in the 1990 and 2000 Censuses, these individuals were included in the numbers used for apportioning Congress). The Bureau recently tested the practicality of counting all overseas Americans. GAO was asked to testify on the test's initial results. Our statement is based on our published reports, one of which is being released at today's hearing. The test results suggest that counting all American citizens overseas as part of the census would require enormous resources but still not yield data at the level of quality needed for purposes of congressional apportionment. Participation in the test was poor, with just 5,390 questionnaires returned from the three test sites. Moreover, as the Bureau's experience during the 2000 Census shows, securing better participation in a global count might not be practical. The Bureau spent $374 million on a months-long publicity campaign, consisting of television and other advertising, that helped produce a 72-percent return rate. Replicating the same level of effort on a worldwide basis would be difficult, and still would not produce a complete count. Further, the low participation levels in the test made the unit cost of each response relatively high, at around $1,450. The test results highlighted other obstacles to a cost-effective count, including the resources needed to address country-specific problems and the difficulties associated with managing a complex operation from thousands of miles away. The approach used to count the overseas population in the 2004 test, a voluntary survey that relies largely on marketing to secure a complete count, lacks the basic building blocks of a successful census, such as a complete and accurate address list and the ability to follow up with nonrespondents. As the Bureau already faces the near-daunting task of securing a successful stateside count in 2010, having to simultaneously count Americans abroad would only add to the challenges it faces.
FDA regulates the content of all prescription drug advertising, whether directed to consumers or medical professionals. Advertising that is targeted to consumers includes both DTC and "consumer-directed" materials. DTC advertising includes, for example, broadcast advertisements (such as those on television and radio), print advertisements (such as those in magazines and newspapers), and Internet advertisements (such as consumer advertising on drug companies' Web sites). In contrast, consumer-directed advertisements are designed to be given by medical professionals to consumers and include, for example, patient brochures provided in doctors' offices. Advertising materials must contain a "true statement" of information, including a brief summary of side effects, contraindications, and the effectiveness of the drug. To meet this requirement, advertising materials must not be false or misleading, must present a fair balance of the risks and benefits of the drug, and must present any facts that are material to the use of the drug or claims made in the advertising. With the exception of broadcast advertisements, materials must present all of the risks described in the drug's approved labeling. Broadcast materials may present only the major side effects and contraindications, provided the materials make "adequate provision" to give consumers access to the information in the drug's approved or permitted package labeling. Within FDA, the Division of Drug Marketing, Advertising, and Communications (DDMAC) is responsible for implementing the laws and regulations that apply to prescription drug advertising. In March 2002, DDMAC created a DTC Review Group, which is responsible for oversight of advertising materials that are directed to consumers. As of May 2008, the group had a total of two group leaders, seven reviewers, and two social scientists. This group's responsibilities include reviewing final DTC materials and reviewing and providing advisory comments on draft DTC materials. The group also monitors television, magazines, and consumer advertising on drug companies' Web sites to identify advertising materials that were not submitted to FDA at the time they were first disseminated, and it reviews advertising materials cited in complaints submitted by competitors, consumers, and others. Once submitted to FDA, final and draft DTC advertising materials are distributed to a DTC reviewer. For final materials, if the reviewer identifies a concern, the agency determines whether it represents a violation and merits a regulatory letter. For draft materials submitted by drug companies, FDA may provide the drug company with advisory comments to consider before the materials are disseminated to consumers if, for example, the reviewers identify claims in materials that could violate applicable laws and regulations. If FDA identifies violations in disseminated DTC materials, the agency may issue two types of regulatory letters—either a "warning letter" or an "untitled letter." Warning letters are typically issued for violations that may lead FDA to pursue additional enforcement actions if not corrected; untitled letters are issued for violations that do not meet this threshold. Both types of letters cite the type of violation identified in the company's advertising material, request that the company submit a written response to FDA within 14 days, and request that the company take specific actions. Untitled letters request that companies stop disseminating the cited advertising materials and other advertising materials with the same or similar claims.
Warning letters further request that the company issue advertising materials to correct the misleading impressions left by the violative advertising materials. The draft regulatory letters are subsequently reviewed by officials in DDMAC, FDA's Office of Medical Policy (which oversees DDMAC), and the Office of the Chief Counsel (OCC). FDA has stated that it instituted OCC review for the purpose of promoting voluntary compliance by ensuring that drug companies that receive a regulatory letter understand that the letter has undergone legal review and the agency is prepared to go to court if necessary. As of 2006, FDA reviewed a small portion of the increasingly large number of DTC materials it received. FDA attempted to target available resources by focusing its reviews on the DTC advertising materials that had the greatest potential to negatively affect public health, but the agency did not document criteria for prioritizing the materials it received for review. Agency reviewers considered several informal criteria when prioritizing the materials, but these were not systematically applied and the agency did not document whether a particular DTC material was reviewed. As a result, the agency could not ensure that it was identifying or reviewing the materials that were the highest priority. FDA officials told us at the time of our 2006 report that the agency received substantially more final and draft materials than the DTC Review Group could review. In 2005, FDA received 4,600 final DTC materials (excluding Internet materials) and 6,168 final Internet materials. FDA also received 4,690 final consumer-directed materials, such as brochures given to consumers by medical professionals. FDA received a steadily increasing number of final materials from 1999 through 2005. We found that, in 2006 and 2007, the total number of final DTC, Internet, and consumer-directed materials FDA received continued to increase. (See fig. 1.) FDA officials estimated that reviewers spent the majority of their time reviewing and commenting on draft materials. However, we were unable to determine the number of final or draft materials FDA reviewed, because FDA did not track this information. In the case of final and draft broadcast materials, FDA officials told us that the DTC group reviewed all of the materials it received; in 2005, it received 337 final and 146 draft broadcast materials. However, FDA did not document whether these or other materials it received had been reviewed. As a result, FDA could not determine how many materials it reviewed in a given year. We recommended in our 2006 report that the agency track which DTC materials had been reviewed. FDA officials indicated to us in May 2008 that the agency still did not track this information. At the time of our 2006 report, FDA officials identified informal criteria that the agency used to prioritize its reviews. FDA officials told us that, to target available resources, the agency prioritized the review of the DTC advertising materials that had the greatest potential to negatively affect public health. We recommended that FDA document its criteria for prioritizing its reviews of DTC advertising materials. FDA informed us in May 2008 that it now has documented criteria to prioritize reviews. For example, its first priority is to review materials with "egregious" violations, such as those identified through complaints. In addition, FDA places a high priority on reviewing television advertising materials.
FDA officials also told us that the agency places a high priority on reviewing draft materials because they provide the agency with an opportunity to identify problems and ask drug companies to correct them before the materials are disseminated to consumers. We reported in 2006 that FDA did not systematically apply its criteria for prioritizing reviews to all of the materials that it received. Specifically, we found in 2006 that, at the time FDA received the materials, it recorded information about the drug being advertised and the type of material being submitted but did not screen the DTC materials to identify those that met its various informal criteria. FDA officials told us that the agency did identify all final and draft broadcast materials that it received, but it did not have a system for identifying any other high-priority materials. Absent such a system for all materials, FDA relied on each of the reviewers—in consultation with other DDMAC officials—to be aware of the materials that had been submitted and to accurately apply the criteria to determine the specific materials to review. This created the potential for reviewers to miss materials that the agency would consider to be a high priority for review. Furthermore, because FDA did not track information on its reviews, the agency could not determine whether a particular material had been reviewed. As a result, the agency could not ensure that it identified and reviewed the highest-priority materials. We recommended that the agency systematically screen the DTC materials it received against its criteria to identify those that are the highest priority for review. As of May 2008, FDA still did not have such a process. In 2006 we reported that, after the 2002 policy change requiring legal review by OCC of all draft regulatory letters, the agency's process for drafting and issuing letters citing violative DTC materials had stretched to several months and FDA had issued fewer regulatory letters per year. As a result of the policy change, draft regulatory letters received additional levels of review, and the DTC reviewers who drafted the letters did substantially more work to prepare for and respond to comments resulting from review by OCC. FDA officials told us that the agency issued letters for only the violative DTC materials that it considered the most serious and most likely to negatively affect consumers' health. Once FDA identified a violation in a DTC advertising material and determined that it merited a regulatory letter, FDA took several months to draft and issue the letter. For letters issued from 2002 through 2005, once DDMAC began drafting the letter for violative DTC materials, it took an average of about 4 months to issue the letter. The length of this process varied substantially across these regulatory letters—one letter took around 3 weeks from drafting to issuance, while another took almost 19 months. In comparison, for regulatory letters issued from 1997 through 2001, it took an average of 2 weeks from drafting to issuance. We recommended in 2002 that the agency reduce the amount of time to draft and issue letters, and the agency agreed. We found in 2006, however, that the review time had increased, and we again urged the agency to issue the letters more quickly. In 2006 and 2007, it took an average of more than 5 months from drafting to issuance. One letter took less than 2 months to issue, while another took about 11 months. (See fig. 2 for the average months from 1997 through 2007.)
The primary factor that contributed to the increase in the length of FDA's process for issuing regulatory letters was the additional work that resulted from the 2002 policy change. All DDMAC regulatory letters were reviewed by both OCC staff and OCC's Chief Counsel. In addition to the time required of OCC, DDMAC officials told us that the policy change created the need for substantially more work on their part to prepare the necessary documentation for legal review. After meeting with OCC and revising the draft regulatory letter to reflect the comments from OCC, DDMAC would formally submit a draft letter to OCC for legal review and approval. OCC often required additional revisions before it would concur that a letter was legally supportable and could be issued. While OCC officials told us that the office had given regulatory letters that cited violative DTC materials higher priority than other types of regulatory letters, their review of DDMAC's draft regulatory letters represented only a small portion of the office's responsibilities and had to be balanced with other requests, such as the examination of legal issues surrounding the approval of a new drug. Recently, FDA informed us that it now allows some steps to be eliminated—if deemed unnecessary for a particular letter—in an attempt to make the legal review process more efficient. The number of regulatory letters FDA issued per year for violative DTC materials decreased after the 2002 policy change lengthened the agency's process for issuing letters. From 2002 through 2005, the agency issued between 8 and 11 regulatory letters per year that cited DTC materials. Prior to the policy change, from 1997 through 2001, FDA issued between 15 and 25 letters citing DTC materials per year. An FDA official told us that both the lengthened review time resulting from the 2002 policy change and staff turnover within the DTC Review Group contributed to the decline in the number of issued regulatory letters. More recently, we found that the number of letters issued that cite DTC materials has continued to decline—FDA issued 4 letters in 2006 and 2 letters in 2007. (See fig. 3 for the number of letters issued from 1997 through 2007.) Although the total number of regulatory letters FDA issued for violative DTC materials has decreased, the agency has in recent years issued proportionately more warning letters—which cite violations FDA considers to be more serious. Historically, almost all of the regulatory letters that FDA issued for DTC materials were untitled letters for less serious violations. From 1997 through 2001, FDA issued 98 regulatory letters citing DTC advertising materials, 6 of which were warning letters. From 2002 through 2005, 8 of the 37 regulatory letters were warning letters. Of the 6 letters FDA issued for DTC materials in 2006 and 2007, 4 were warning letters. FDA regulatory letters may cite more than one DTC material or type of violation for a given drug. Of the 19 regulatory letters FDA issued from 2004 through 2005, 7 cited more than 1 DTC material, for a total of 31 different materials. These 31 materials appeared in a range of media, including television, radio, print, direct mail, and the Internet. Further, FDA identified multiple violations in 21 of the 31 DTC materials cited in the letters. The most commonly cited violations related to a failure of the material to accurately communicate information about the safety of the drug.
The letters also often cited materials for overstating the effectiveness of the drug or using misleading comparative claims. Of the 6 regulatory letters FDA issued in 2006 or 2007 that cited DTC materials, 2 cited more than 1 DTC material and all identified multiple violations in each of the cited materials. For our 2006 report, FDA officials told us that the agency issued regulatory letters for DTC materials that it believed were the most likely to negatively affect consumers and that it did not act on all of the concerns that its reviewers identified. For example, they said the agency may be more likely to issue a letter when a false or misleading material was broadly disseminated. When reviewers had concerns about DTC materials, they discussed them with others in DDMAC and may have met with OCC and medical officers in FDA's Office of New Drugs to determine whether a regulatory letter was warranted. However, because FDA did not document decisions made at the various stages of its review process about whether to pursue a violation, officials were unable to provide us with an estimate of the number of materials about which concerns were raised but the agency did not issue a letter. At the time of our 2006 report, we found that FDA regulatory letters were limited in their effectiveness at halting the dissemination of false and misleading DTC advertising materials. We found that, from 2004 through 2005, FDA issued regulatory letters an average of about 8 months after the violative DTC materials they cited were first disseminated, by which time more than half of the materials had already been discontinued. Although drug companies complied with FDA's requests to create materials to correct the misimpressions left by the cited materials, these corrections were not disseminated until 5 months or more after FDA issued the regulatory letter. Furthermore, FDA's regulatory letters did not always prevent drug companies from later disseminating similar violative materials for the same drugs. Because of the length of time it took FDA to issue these letters, violative advertisements were often disseminated for several months before the letters were issued. From 2004 through 2005, FDA issued regulatory letters citing DTC materials an average of about 8 months after the violative materials were first disseminated. FDA issued one letter less than 1 month after the material was first disseminated, while another letter took over 3 years. The cited materials were usually disseminated for 3 or more months, and of the 31 violative DTC materials cited in these letters, 16 were no longer being disseminated by the time the letter was issued. On average, these letters were issued more than 4 months after the drug company stopped disseminating these materials and therefore had no effect on their dissemination. For the 14 DTC materials that were still in use when FDA issued the letter, the drug companies complied with FDA's request to stop disseminating the violative materials. However, by the time the letters were issued, these 14 materials had been disseminated for an average of about 7 months. As requested by FDA in the regulatory letters, drug companies often identified and stopped disseminating other materials with claims similar to those in the violative materials. For 18 of the 19 regulatory letters issued from 2004 through 2005, the drug companies indicated to FDA that they had either identified additional similar materials or that they were reviewing all materials to ensure compliance.
In addition to halting materials directed to consumers, companies responding to 11 letters also stopped disseminating materials with similar claims that were targeted directly to medical professionals. Drug companies disseminated the corrective advertising materials requested in FDA warning letters, but took 5 months or more to do so. In each of the six warning letters FDA issued in 2004 and 2005 that cited DTC materials, the agency asked the drug company to disseminate truthful, nonmisleading, and complete corrective messages about the issues discussed in the regulatory letter to the audiences that received the violative promotional materials. In each case, the drug company complied with this request by disseminating corrective advertising materials. For these six letters, however, the corrective advertising materials were initially disseminated more than 5 to almost 12 months after FDA issued the letter. For example, for one allergy medication, the violative advertisements ran from April through October 2004, FDA issued the regulatory letter in April 2005, and the corrective advertisement was not issued until January 2006. FDA regulatory letters did not always prevent the same drug companies from later disseminating violative DTC materials for the same drug, sometimes using the same or similar claims. From 1997 through 2005, FDA issued regulatory letters for violative DTC materials used to promote 89 different drugs. Of these 89 drugs, 25 had DTC materials that FDA cited in more than one regulatory letter, and one drug had DTC materials cited in eight regulatory letters. For 15 of the 25 drugs, FDA cited similar broad categories of violations in multiple regulatory letters. For example, FDA issued regulatory letters citing DTC materials for a particular drug in 2000 and again in 2005 for "overstating the effectiveness of the drug." For 4 of the 15 drugs, FDA cited the same specific violative claim for the same drug in more than one regulatory letter. For example, in 1999 FDA cited a DTC direct mail piece for failing to convey important information about the limitations of the studies used to approve the promoted drug. In 2001, FDA cited a DTC broadcast advertisement for the same drug for failing to include that same information. Given substantial growth in the number of DTC advertising materials submitted to FDA in recent years, FDA's role in limiting the dissemination of false or misleading advertising to the American public has become increasingly important. Fulfilling this responsibility requires that the agency, among other things, review those DTC advertising materials that are highest priority and take timely action to limit the dissemination of those that are false or misleading. We found in 2006 that FDA did not have a complete and systematic process for tracking and prioritizing all materials that it received for review. FDA's development of documented criteria to prioritize reviews is a step in the right direction. However, as we recommended in 2006, we believe that FDA should take the next step of systematically applying those criteria to the DTC materials it receives to determine which are highest priority for review. While the agency said that it would require vastly increased staff to systematically screen materials, we found in 2006 that FDA already had most of the information it would need to do so.
Finally, although FDA agreed in 2002 that it is important to issue regulatory letters more quickly, the amount of time it takes to draft and issue letters has continued to lengthen. We believe that delays in issuing regulatory letters limit FDA's effectiveness in overseeing DTC advertising and in reducing consumers' exposure to false and misleading advertising. Mr. Chairman, this completes my prepared statement. I would be happy to respond to any questions you or the other members of the subcommittee may have at this time. For further information about this statement, please contact Marcia Crosse, at (202) 512-7114 or crossem@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Martin T. Gahart, Assistant Director; Chad Davenport; William Hadley; Cathy Hamann; Julian Klazkin; and Eden Savino made key contributions to this statement. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Food and Drug Administration (FDA) is responsible for overseeing direct-to-consumer (DTC) advertising of prescription drugs, which includes a range of media, such as television, magazines, and the Internet. If FDA identifies a violation of laws or regulations in a DTC advertising material, the agency may issue a regulatory letter asking the drug company to take specific actions. In 2002, GAO reported on delays in FDA's issuance of regulatory letters. GAO was asked to discuss trends in FDA's oversight of DTC advertising and the actions FDA has taken when it identifies violations. This statement is based on GAO's 2006 report, Prescription Drugs: Improvements Needed in FDA's Oversight of Direct-to-Consumer Advertising, GAO-07-54 (November 16, 2006). In this statement, GAO discusses the (1) DTC advertising materials FDA reviews, (2) FDA's process for issuing regulatory letters citing DTC advertising materials and the number of letters issued, and (3) the effectiveness of FDA's regulatory letters at limiting the dissemination of false or misleading DTC advertising. For its 2006 report, GAO examined FDA data on the advertising materials the agency received and reviewed the regulatory letters it issued citing prescription drug promotion from 1997 through 2005. For this statement, GAO also reviewed data from FDA to update selected information from the 2006 report. Since 1999, FDA has received a steadily increasing number of advertising materials directed to consumers. In 2006, GAO found that FDA reviewed a small portion of the DTC materials it received, and the agency could not ensure that it was identifying for review the materials it considered to be highest priority. While FDA officials told GAO that the agency prioritized the review of materials that had the greatest potential to negatively affect public health, the agency had not documented criteria to make this prioritization. GAO recommended that FDA document and systematically apply criteria for prioritizing its reviews of DTC advertising materials. In May 2008, FDA indicated that it had documented criteria to prioritize reviews. However, FDA still does not systematically apply its criteria to all of the DTC materials it receives. Furthermore, GAO noted in its 2006 report that FDA could not determine whether a particular material had been reviewed. GAO recommended in that report that the agency track which DTC materials had been reviewed. FDA officials indicated to GAO in May 2008 that the agency still did not track this information. As a result, the agency cannot ensure that it is identifying and reviewing the highest-priority materials. GAO found in 2006 that, since a 2002 policy change requiring legal review of all draft regulatory letters, FDA's process for drafting and issuing letters was taking longer and the agency was issuing fewer letters per year. FDA officials told GAO that the policy change contributed to the lengthened review. In 2006, GAO found that the effectiveness of FDA's regulatory letters at halting the dissemination of violative DTC materials had been limited. By the time the agency issued regulatory letters, drug companies had already discontinued use of more than half of the violative advertising materials identified in each letter. In addition, FDA's issuance of regulatory letters had not always prevented drug companies from later disseminating similar violative materials for the same drugs.
Drug applications—including NDAs, BLAs, and efficacy supplements—are reviewed primarily by FDA's Center for Drug Evaluation and Research (CDER), with a smaller proportion reviewed by the Center for Biologics Evaluation and Research (CBER). (When we refer to consumer advocacy groups, we are referring to groups that advocate on behalf of consumers and patients.) Prior to submission of an application, sponsors may choose to seek accelerated approval status if the drug is intended to treat a serious or life-threatening illness (such as cancer) and has the potential to provide meaningful therapeutic benefit to patients over existing treatments. Sponsors of a drug with accelerated approval status may be granted approval on the basis of clinical trials conducted using a surrogate endpoint—such as a laboratory measurement or physical sign—as an indirect or substitute measurement for a clinically meaningful outcome such as survival. According to FDA, the agency generally also speeds its review of drug applications with accelerated approval status by granting them priority review, although priority review can also be granted to an application without accelerated approval status. FDA grants priority review for applications that it expects, if approved, would provide significant therapeutic benefits, compared to available drugs, in the treatment, diagnosis, or prevention of a disease. Applications for which there are no perceived significant therapeutic benefits beyond those for available drugs are granted standard review. See 21 U.S.C. § 355(d); 42 U.S.C. § 262(j). During its review, FDA may identify deficiencies in an application that prevent FDA from approving the application. In response, sponsors can submit additional information to FDA in the form of amendments to the application. Certain applications are also subject to review by an independent advisory committee. FDA convenes advisory committees to provide independent expertise and technical assistance to help the agency make decisions about drug products. Additionally, FDA might require the sponsor to submit a Risk Evaluation and Mitigation Strategy (REMS) for the drug under review to ensure that the benefits of the drug outweigh its risks. FDA review time for an original application is calculated as the time elapsed from the date FDA receives the application and associated user fee to the date it issues an action letter; it is calculated using only the first review cycle and therefore does not include any time that may elapse while FDA is waiting for a sponsor to respond to FDA's first-cycle action letter or any review time that elapses during subsequent review cycles. In order to close the review cycle for NDAs, BLAs, and efficacy supplements, FDA must complete its review and issue an approval letter, a denial letter, or a "complete response" letter (i.e., a letter delineating any problems FDA identified in the application that prevented it from being approved). The review cycle will also be closed if the application is withdrawn by the sponsor. The date on which one of these actions occurs is used to determine whether the review was completed within the PDUFA goal time frame. If FDA issues a complete response letter, the sponsor may choose to submit a revised application to FDA. These are known as resubmissions, and their review is covered under the user fee paid with the original submission. Resubmissions are classified as Class 1 or Class 2 according to the complexity of the information they contain, with Class 2 being the more complex.
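To make the review-time definition above concrete, the following minimal Python sketch computes first-cycle review time as a day count; the function name, parameters, and example dates are hypothetical illustrations, not FDA's actual systems or data.

```python
from datetime import date

def first_cycle_review_time_days(received: date, first_action: date) -> int:
    # FDA review time: days from receipt of the application (and its
    # associated user fee) to the first-cycle action letter. Time spent
    # waiting for a sponsor's response and any later review cycles are
    # excluded by construction, since only first-cycle dates are used.
    return (first_action - received).days

# Hypothetical example: application received October 1, 2009;
# first-cycle complete response letter issued July 28, 2010.
print(first_cycle_review_time_days(date(2009, 10, 1), date(2010, 7, 28)))  # 300
```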
Although the prescription drug performance goals have continued to evolve with each reauthorization of the prescription drug user fee program, the goals for NDAs, BLAs, and efficacy supplements have remained fairly stable for recent cohorts—a cohort being composed of all the submissions of a certain type filed in the same fiscal year (see table 1). For standard NDAs, BLAs, and efficacy supplements, the goal was phased in until it reached its current level (90 percent of reviews completed within 10 months) in FY 2002. Similarly, the goal for Class 1 NDA and BLA resubmissions was phased in, reaching its current level of 90 percent of reviews completed within 2 months in FY 2001. FDA can extend the review time frame for NDAs, BLAs, or Class 2 resubmissions by 3 months if it receives a major amendment to the application from the sponsor within 3 months of the goal date. FDA met most of its performance goals for priority and standard original NDA and BLA submissions for the FYs 2000 through 2010 cohorts. However, the average FDA review time increased slightly during this period for both priority and standard NDAs and BLAs. The percentage of FDA first-cycle approvals for both priority and standard NDAs and BLAs generally increased from FY 2000 through FY 2010; however, the percentage of first-cycle approvals has decreased for priority NDAs and BLAs since FY 2007. FDA met most of its performance goals for priority and standard original NDA and BLA submissions during our analysis period by issuing the proportion of action letters specified in the performance goals within the goal time frames. Specifically, for priority original NDAs and BLAs, FDA met the performance goals for 10 of the 11 completed cohorts we examined (see fig. 1). FDA also met the performance goals for 10 of the 11 completed standard NDA and BLA cohorts we examined. However, FDA did not meet the goals (i.e., issue the specified proportion of action letters within the goal time frames) for priority or standard NDAs and BLAs in the FY 2008 cohort. FDA and industry stakeholders we interviewed suggested that the reason FDA did not meet the goals for this cohort was that extra time was required for implementation of REMS requirements, which were introduced as part of the implementation of FDAAA. Although the FY 2011 cohort was still incomplete at the time we received FDA's data, FDA was meeting the goals for both priority and standard original NDAs and BLAs on which it had taken action. For the subset of priority NDAs and BLAs that were for innovative drugs, FDA met the performance goals for 9 of the 11 completed cohorts—all cohorts except FYs 2008 and 2009. For the subset of standard NDAs and BLAs that were for innovative drugs, FDA also met the performance goals for 9 of the 11 completed cohorts—all cohorts except FYs 2007 and 2008. For the incomplete FY 2011 cohort, FDA was meeting the goals for the subsets of both priority and standard NDAs and BLAs that were for innovative drugs. If FDA issues a complete response letter to the sponsor noting deficiencies with the original submission, the sponsor can resubmit the application with the deficiencies addressed. For Class 1 NDA and BLA resubmissions, FDA met the performance goals for 8 of the 11 completed cohorts we examined. For Class 2 NDA and BLA resubmissions, FDA met the performance goals for 10 of the 11 completed cohorts we examined.
Although the FY 2011 cohort was still incomplete at the time we received FDA’s data, FDA was meeting the goals for both the Class 1 resubmissions and the Class 2 resubmissions on which it had taken action. Overall, average FDA review time—the time elapsed from when FDA received a submission until it issued an action letter—increased slightly from FY 2000 through FY 2010 for both priority and standard NDAs and BLAs. There was a larger increase in average review time for both types of applications beginning in FY 2006. However, average review time began decreasing after FY 2007 for standard applications and after FY 2008 for priority applications, bringing the review times back near the FY 2000 levels (see fig. 2). As mentioned previously, FDA and industry stakeholder groups noted the implementation of REMS requirements as a contributing factor to increased review times for the FY 2008 cohort. Although the FY 2011 cohort was still incomplete at the time we received FDA’s data, average FDA review time for applications on which FDA had taken action was 186 days for priority NDAs and BLAs and 308 days for standard NDAs and BLAs. Trends in average FDA review time for the subset of NDAs and BLAs that were for innovative drugs were similar to trends for all priority or standard NDAs and BLAs. For the subset of priority NDAs and BLAs that were for innovative drugs, average FDA review times were sometimes longer and sometimes shorter than those for all priority NDAs and BLAs; review times for the subset of standard NDAs and BLAs that were for innovative drugs were generally slightly longer than review times for all standard NDAs and BLAs. We were unable to calculate the average time to final decision for original NDAs and BLAs—that is, the average time elapsed between submission of an application and the sponsor’s withdrawal of the application or FDA’s issuance of an approval or denial action letter in the last completed review cycle. Time to final decision includes FDA review time as well as time that elapsed between review cycles while FDA was waiting for the sponsor to resubmit the application. We were unable to complete this calculation because most cohorts were still open for these purposes (i.e., fewer than 90 percent of submissions had received a final action such as approval, denial, or withdrawal). Specifically, for priority NDAs and BLAs, only four cohorts (FYs 2001, 2002, 2005, and 2006) had at least 90 percent of submissions closed, and for standard NDAs and BLAs, only one cohort (FY 2002) had at least 90 percent of submissions closed. (See app. I, table 4 for details.) As a result, there were too few completed cohorts available to calculate the time to final decision in a meaningful way. FDA may opt to consider an application withdrawn (and thus closed) if the sponsor fails to resubmit the application within 1 year after FDA issues a complete response letter. When we examined the open applications using this criterion, we identified 194 open NDAs and BLAs in FYs 2000 through 2010 for which FDA had issued a complete response letter in the most recent review cycle but had not yet received a resubmission from the sponsor. FDA had issued the complete response letter more than 1 year earlier for 162 (84 percent) of these applications. 
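For illustration, the goal-attainment test and the cohort-completeness rule described above can be expressed as two small checks. This is a minimal Python sketch, assuming hypothetical data structures and expressing the month-based goals as approximate day counts; it is not FDA's or GAO's actual methodology.

```python
from dataclasses import dataclass

@dataclass
class Submission:
    review_days: int        # receipt to first-cycle action letter
    goal_days: int          # e.g., roughly 304 days for a 10-month goal
    extended: bool = False  # major amendment within 3 months of goal date

def cohort_met_goal(cohort, threshold=0.90, extension_days=91):
    # A cohort meets its goal if the required share of submissions was
    # acted on within the goal time frame, counting 3-month extensions.
    on_time = sum(
        s.review_days <= s.goal_days + (extension_days if s.extended else 0)
        for s in cohort
    )
    return on_time / len(cohort) >= threshold

def cohort_complete(statuses, min_share=0.90):
    # Time to final decision is calculated only for cohorts in which at
    # least 90 percent of submissions received a final action.
    closed = sum(s in {"approved", "denied", "withdrawn"} for s in statuses)
    return closed / len(statuses) >= min_share

# Hypothetical cohorts: 9 of 10 on time meets the 90 percent goal; a
# cohort with 2 of 10 submissions still pending is not yet complete.
cohort = [Submission(280, 304)] * 9 + [Submission(420, 304)]
print(cohort_met_goal(cohort))                              # True
print(cohort_complete(["approved"] * 8 + ["pending"] * 2))  # False
```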
The percentage of priority NDAs and BLAs receiving an approval letter at the end of the first review cycle exhibited a sharp 1-year decline from FY 2000 to FY 2001, then increased substantially from FY 2001 through FY 2007, before decreasing again from FY 2007 through FY 2010 (see fig. 3). The percentage of first-cycle approvals for standard NDAs and BLAs showed a similar 1-year decline from FY 2000 to FY 2001, then varied somewhat but generally increased from FY 2002 through FY 2010. Although review of the FY 2011 cohort was incomplete at the time we received FDA's data, 93 percent of the priority NDAs and BLAs that had received a first-cycle action letter had been approved, as had 42 percent of the standard NDAs and BLAs. Trends for FYs 2000 through 2010 in the percentage of first-cycle approvals were similar for the subset of NDAs and BLAs that were for innovative drugs when compared to trends for all priority or standard NDAs and BLAs. For the subset of priority NDAs and BLAs for innovative drugs, the percentage of first-cycle approvals was generally higher than for all priority NDAs and BLAs. For standard submissions, the percentage of first-cycle approvals for innovative drugs was generally lower than for all standard NDAs and BLAs; for some cohorts (e.g., FYs 2000, 2004–2006, and 2008) this difference was substantial. FDA met most of its performance goals for priority and standard original efficacy supplements to approved NDAs and BLAs for the FYs 2000 through 2010 cohorts. However, the average FDA review time generally increased during this period for both priority and standard efficacy supplements. The percentage of FDA first-cycle approvals fluctuated for priority efficacy supplements but generally increased for standard efficacy supplements for the FYs 2000 through 2010 cohorts. FDA met most of its performance goals for efficacy supplements to approved NDAs and BLAs during our analysis period. Specifically, FDA met the performance goals for both priority and standard efficacy supplements for 10 of the 11 completed cohorts we examined (see fig. 4). Although the FY 2011 cohort was still incomplete at the time we received FDA's data, based on efficacy supplements on which it had taken action, FDA was meeting the goal for both priority and standard efficacy supplements. Average FDA review time generally increased during our analysis period for both priority and standard efficacy supplements. Specifically, average FDA review time for priority efficacy supplements increased from 173 days in the FY 2000 cohort to a peak of 205 days in the FY 2009 cohort and then fell in the FY 2010 cohort to 191 days (see fig. 5). For standard efficacy supplements, average FDA review time rose from 285 days in the FY 2000 cohort to a peak of 316 days in the FY 2008 cohort and then fell in the FY 2010 cohort to 308 days. Although the FY 2011 cohort was still incomplete at the time we received FDA's data, average FDA review time for efficacy supplements on which FDA had taken action was 195 days for priority submissions and 284 days for standard submissions. As with NDA and BLA submissions, we were unable to calculate the average time to final decision for efficacy supplements in any meaningful way because there were too few completed cohorts. Specifically, for priority efficacy supplements, only four cohorts (FYs 2000, 2001, 2004, and 2007) had at least 90 percent of submissions closed, and for standard efficacy supplements, only one cohort (FY 2005) had at least 90 percent of submissions closed.
(See app. II, table 9 for details.) FDA may opt to consider an application withdrawn (and thus closed) if the sponsor fails to resubmit the application within 1 year after FDA issues a complete response letter. When we examined the open applications using this criterion, we identified 196 open efficacy supplements in FYs 2000 through 2010 for which FDA had issued a complete response letter in the most recent review cycle but had not yet received a resubmission from the sponsor. FDA had issued the complete response letter more than 1 year earlier for 168 (86 percent) of these submissions. The percentage of priority efficacy supplements receiving an approval decision at the end of the first review cycle fluctuated for FYs 2000 through 2010, ranging between 47 percent and 80 percent during this time (see fig. 6). The results for standard efficacy supplements showed a steadier increase than for priority submissions. Specifically, the percentage of first-cycle approvals rose from 43 percent in the FY 2000 cohort to 69 percent in the FY 2010 cohort. Although the FY 2011 cohort was still incomplete at the time we received FDA's data, 63 percent of first-cycle action letters for standard submissions and 92 percent of first-cycle action letters for priority submissions issued by that time were approvals. The industry groups and consumer advocacy groups we interviewed noted a number of issues related to FDA's review of prescription drug applications. The most commonly mentioned issues raised by industry and consumer advocacy stakeholder groups were actions or requirements that stakeholders believe can increase review times and insufficient communication between FDA and stakeholders throughout the review process. Industry stakeholders also noted a lack of predictability and consistency in reviews. Consumer advocacy group stakeholders noted issues related to inadequate assurance of the safety and efficacy of approved drugs. FDA is taking steps that may address many of these issues. Most of the seven stakeholder groups we interviewed told us that there are actions and requirements that can lengthen FDA's review process. For example, four of the five consumer advocacy group stakeholders noted that FDA does not require sponsors to submit electronic applications; three of these stakeholders noted that requiring electronic applications could make the review process faster. Additionally, the two industry stakeholders told us that they believe FDA should approve more applications during the first review cycle. We found that an average of 44 percent of all original NDAs and BLAs submitted in FYs 2000 through 2010 were approved during the first review cycle, while 75 percent were ultimately approved. In addition, the two industry stakeholders that we interviewed raised concerns about requirements that can make review times longer, but the consumer advocacy group stakeholders did not agree with these points. For example, both industry stakeholders noted that working out the implementation of REMS requirements introduced in FDAAA slowed FDA's review process. One industry stakeholder stated that discussions about REMS often happened late in the review process, resulting in an increase in review times; another noted that REMS requirements have not been standardized, contributing to longer review times. In contrast, one consumer advocacy group stakeholder that we interviewed suggested that standardized REMS requirements or a "one size fits all" approach would not be meaningful as a risk management strategy.
The industry and consumer advocacy group stakeholders also disagreed on another issue that can potentially lengthen the review process—FDA's process for using outside scientific expertise for the review of applications. The two industry stakeholders we interviewed stated that the rules surrounding consultation with an advisory committee—particularly those related to conflicts of interest—can extend the time it takes FDA to complete the review process. In contrast, two of the consumer advocacy group stakeholders we interviewed specifically stated that FDA should be concerned with issues of conflict of interest in advisory committees used during the drug review process. FDA has taken or plans to take several steps that may address issues stakeholders noted can lengthen the review process, including issuing new guidance, commissioning and issuing assessments of the review process, training staff, and establishing programs aimed at helping sponsors. For example, according to the draft agreement with industry for the upcoming prescription drug user fee program reauthorization, FDA would issue guidance on the standards and format for submitting electronic applications and would begin tracking and reporting on the number of electronic applications received. In addition, according to the draft agreement, FDA would publish both an interim and a final assessment of the review process for innovative drugs and then hold public meetings for stakeholders to present their views on the success of the program, including its effect on the efficiency and effectiveness of first-cycle reviews. FDA would also provide training to staff on reviewing applications containing complex scientific issues, which may improve FDA's ability to grant first-cycle approvals where appropriate. In addition, FDA would issue guidance on assessing the effectiveness of REMS for a particular drug and would hold public meetings to explore strategies to standardize REMS, where appropriate. However, in the recently released strategy, assessment, and guidance documents we reviewed, we did not identify any examples of steps FDA has taken to address industry stakeholders' issues with leveraging outside expertise during the drug review process. Most of the seven stakeholder groups—two industry and five consumer advocacy groups—that we interviewed told us that there is insufficient communication between FDA and stakeholders throughout the review process. For example, both of the industry stakeholders noted that FDA does not clearly communicate the regulatory standards that it uses to evaluate applications. In particular, the industry stakeholders noted that the regulatory guidance documents issued by FDA are often out of date or the necessary documents have not yet been developed. Additionally, both industry stakeholders and two consumer advocacy group stakeholders noted that after sponsors submit their applications, insufficient communication from FDA prevents sponsors from learning about deficiencies in their applications early in FDA's review process. According to these four stakeholders, if FDA communicated these deficiencies earlier in the process, sponsors would have more time to address them; this would increase the likelihood of first-cycle approvals. Finally, three consumer advocacy group stakeholders also noted that FDA does not sufficiently seek patient input during reviews.
One stakeholder noted that it is important for FDA to incorporate patient perspectives into its reviews of drugs because patients might weigh the benefits and risks of a certain drug differently than FDA reviewers. FDA has taken or plans to take several steps that may address stakeholders' issues with the frequency and quality of its communications with stakeholders, including conducting a review of its regulations, establishing new review programs and communication-related performance goals, providing additional staff training, and increasing its efforts to incorporate patient input into the review process. FDA is in the process of reviewing its regulations to identify burdensome, unclear, obsolete, ineffective, or inefficient regulations and is soliciting stakeholder input on additional rules that could be improved. In addition, according to the draft agreement with industry, FDA would establish a review model with enhanced communication requirements for innovative drugs, including requirements to hold pre- and late-cycle submission meetings with sponsors as well as to update sponsors following FDA's internal midcycle review meetings. Additionally, under the draft user fee agreement, FDA would inform sponsors of the planned review timeline and any substantive review issues identified thus far within 74 days of receipt for 90 percent of original NDAs, BLAs, and efficacy supplements. FDA would also issue guidance, develop a dedicated drug development training staff, and provide training on communication for all CDER staff involved in the review of investigational new drugs. Furthermore, FDA would increase its utilization of patient representatives as consultants to provide patient views early in the product development process and to ensure those perspectives are considered in regulatory discussions. More specifically, FDA would expect to start with a selected set of disease areas and meet with the relevant patient advocacy groups and other interested stakeholders to determine how to incorporate patient perspectives into FDA's decision making. The two industry stakeholders that we interviewed also told us that there is a lack of predictability and consistency in FDA's reviews of drug applications. For example, both stakeholders noted that there is sometimes inconsistent application of criteria across review divisions or offices. Further, both industry stakeholders we interviewed noted that FDA lacks a structured benefit-risk framework to refer to when making decisions; they believe such a framework would improve the predictability of the review process. FDA has taken or plans to take steps that may address stakeholders' issues with the predictability and consistency of its reviews of drug applications. For example, FDA plans to provide training related to the development, review, and approval of drugs for rare diseases, which may help to improve the consistency of FDA's review of those drugs. In addition, FDA has appointed a Deputy Commissioner for Medical Products to oversee and manage CBER, CDER, and the Center for Devices and Radiological Health (CDRH) in an attempt to improve integration and consistency between the centers. Furthermore, FDA has agreed to create a 5-year plan to develop and implement a structured benefit-risk framework in the review process. FDA will also revise its internal guidance to incorporate a structured benefit-risk framework and then train its review staff on these revisions.
Three of the five consumer advocacy group stakeholders that we spoke with raised issues about whether FDA is adequately ensuring the safety and efficacy of the drugs it approves for marketing. All three of these stakeholders told us that FDA should place greater priority on safety and efficacy over review speed. In addition, three stakeholders told us that FDA does not gather enough data on long-term drug safety and efficacy through methods such as postmarket surveillance. One stakeholder suggested that FDA should more effectively utilize its Sentinel System for adverse event reporting. These concerns have also been extensively discussed elsewhere. FDA has taken or plans to take steps that may address stakeholders’ issues with the safety and efficacy of approved drugs, including publishing a regulatory science strategic plan. This document describes various plans FDA has for emphasizing safety and efficacy, such as developing assessment tools for novel therapies, assuring safe and effective medical innovation, and integrating complex data (including postmarket data) to allow for better analyses. FDA has also published a report identifying needs that, if addressed, would enhance scientific decision making in CDER. Some of the needs identified included improving access to postmarket data sources and exploring the feasibility of different postmarket analyses; improving risk assessment and management strategies to reinforce the safe use of drugs; and developing and improving predictive models of safety and efficacy in humans. Finally, in the draft agreement with industry, FDA has committed to conducting both an interim and a final assessment of the strengths, limitations, and appropriate use of the Sentinel System for helping FDA determine the regulatory actions necessary to manage safety issues. FDA met most of the performance goals for the agency to review and issue action letters for original NDA and BLA submissions, Class 1 and Class 2 resubmissions, and original efficacy supplements for the FYs 2000 through 2010 cohorts. FDA review times increased slightly for original NDAs, BLAs, and efficacy supplements during this period while changes in the percentage of first-cycle approvals varied by application type. While FDA has met most of the performance goals we examined, stakeholders we spoke with point to a number of issues that the agency could consider to improve the drug review process; FDA is taking or has agreed to take steps that may address these issues, such as issuing new guidance, establishing new communication-related performance goals, training staff, and enhancing scientific decision making. It is important for the agency to continue monitoring these efforts in order to increase the efficiency and effectiveness of the review process and thereby help ensure that safe and effective drugs are reaching the market in a timely manner. HHS reviewed a draft of this report and provided written comments, which are reprinted in appendix IV. HHS generally agreed with our findings and noted that they reflect what the agency reported for the same time period. HHS also called attention to activities FDA has undertaken to improve the prescription drug review process. It highlighted FDA’s performance in approving innovative drugs in FY 2011. HHS also noted steps FDA will take to contribute to medical product innovation including expediting the drug development pathway and streamlining and reforming FDA regulations. 
Finally, HHS discussed enhancements to the drug review program that were included in the proposed recommendations for the 2012 reauthorization of the prescription drug user fee program, such as establishing a new review program for innovative drugs, enhancing benefit-risk assessment, and requiring electronic submissions and standardization of electronic application data to improve efficiency. HHS also provided technical comments, which we incorporated as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Secretary of Health and Human Services, the Commissioner of the Food and Drug Administration, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or crossem@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. Includes only those submissions that had received a final FDA action letter (i.e., approval) in their last completed review cycle or were withdrawn by the sponsor at the time we received FDA's data, which include reviews by CBER and CDER through November 30, 2011. We defined a submission as open if the most recent review cycle was still underway (i.e., pending) or if FDA had issued a complete response letter in the most recent review cycle and the sponsor still had the option of resubmitting the application under the original user fee. Submissions that have received a complete response letter are considered complete for purposes of determining whether FDA met the PDUFA performance goals, but the review is not closed. Prior to August 2008, FDA also issued "approvable" and "not approvable" letters, which served the same purpose as the complete response letters currently used. We grouped these three types of letters together in our analysis. The FY 2011 cohort was not complete at the time we received FDA's data, which include reviews by CBER and CDER through November 30, 2011. Therefore, values indicated for FY 2011 in the table above may change as these reviews are completed. Our analysis was limited to resubmissions made in FYs 2000 through 2011 for original NDAs and BLAs that were also submitted in FYs 2000 through 2011. Resubmissions made in FYs 2000 through 2011 for original NDAs and BLAs submitted prior to FY 2000 were not captured by our analysis. "I.D." stands for innovative drugs, a subset of all priority original NDAs and BLAs that includes nearly all BLAs and those NDAs designated as new molecular entities (NMEs). In FY 2000, Class 1 resubmissions were also subject to a 4-month goal time frame, which is not shown in our analysis. Our calculations include extensions of the PDUFA goal time frame, where applicable. PDUFA goal time frames for Class 2 resubmissions can be extended for 3 months if the sponsor submits a major amendment to the resubmission within 3 months of the goal date.
For Class 2 NDA/BLA resubmissions in these cohorts, 45 out of 463 submissions (9.7 percent) received goal extensions. For the FY 2011 cohort, 55 out of 79 standard NDA and BLA submissions (70 percent) and 7 out of 22 priority submissions (32 percent) were still under review at the time we received FDA's data, which include reviews by CBER and CDER through November 30, 2011. Therefore, values indicated for FY 2011 in the table above may change as these reviews are completed. Includes only those submissions that had received an approval letter in their last completed review cycle at the time we received FDA's data, which include reviews by CBER and CDER through November 30, 2011. Dashes (—) indicate cohorts for which no submissions met the criteria. Our calculations include extensions of the PDUFA goal time frame, where applicable. PDUFA goal time frames can be extended for 3 months if the sponsor submits a major amendment to the application within 3 months of the goal date. For priority efficacy supplements in FYs 2000 through 2011, 24 out of 400 submissions (6 percent) received PDUFA goal extensions. Average review time for the first review cycle for original submissions. Resubmissions are subject to different PDUFA goal time frames. Includes only those submissions that had received a first-cycle FDA action letter at the time we received FDA's data, which include reviews by CBER and CDER through November 30, 2011. Prior to August 2008, FDA also issued "approvable" and "not approvable" letters, which served the same purpose as the complete response letters currently used. We grouped these three types of letters together in our analysis. Includes only those submissions that had received a final FDA approval letter in their last completed review cycle or were withdrawn by the sponsor at the time we received FDA's data, which include reviews by CBER and CDER through November 30, 2011. Our calculations include extensions of the PDUFA goal time frame, where applicable. PDUFA goal time frames can be extended for 3 months if the sponsor submits a major amendment to the application within 3 months of the goal date. For standard efficacy supplements in FYs 2000 through 2011, 90 out of 1,528 submissions (6 percent) received PDUFA goal extensions. FYs 2008 through 2011 calculations exclude submissions for which FDA had not yet issued an action letter.
FDA centers and offices: Center for Drug Evaluation and Research (CDER); Office of the Center Director (OCD); Office of Information Technology (OIT/OIM); Office of Planning and Informatics (OPI); Office of Counter-Terrorism and Emergency Coordination (OCTEC); Office of Pharmaceutical Science (OPS); Office of Regulatory Affairs (ORA); Office of the Commissioner (OC); and Shared Service (SS). SS FTEs were not separated from the center FTEs until FY 2004. In addition to the contact named above, Robert Copeland, Assistant Director; Carolyn Fitzgerald; Cathleen Hamann; Karen Howard; Hannah Marston Minter; Lisa Motley; Aubrey Naffis; and Rachel Schulman made key contributions to this report.
The Food and Drug Administration (FDA) within the Department of Health and Human Services (HHS) is responsible for overseeing the safety and efficacy of drugs and biologics sold in the United States. New drugs and biologics must be reviewed by FDA before they can be marketed, and the Prescription Drug User Fee Act (PDUFA) authorizes FDA to collect user fees from the pharmaceutical industry to support its review of prescription drug applications, including new drug applications (NDA), biologic license applications (BLA), and efficacy supplements that propose changes to the way approved drugs and biologics are marketed or used. Under each authorization of PDUFA since 1992, FDA committed to performance goals for its drug and biologic reviews. In preparation for the next PDUFA reauthorization, GAO was asked to examine FDA’s drug and biologic review processes. In this report, we (1) examine trends in FDA’s NDA and BLA review performance for fiscal years (FY) 2000 through 2010, (2) examine trends in FDA’s efficacy supplement review performance for FYs 2000 through 2010, and (3) describe issues stakeholders have raised about the drug and biologic review processes and steps FDA is taking that may address these issues. To do this work, GAO examined FDA drug and biologic review data, reviewed FDA user fee data, interviewed FDA officials, and interviewed two industry groups and five consumer advocacy groups. All of the stakeholder groups participated in at least half of the meetings held by FDA to discuss the reauthorization of the prescription drug user fee program. FDA met most performance goals for priority and standard NDAs and BLAs received from FY 2000 through FY 2010. FDA meets its performance goals by completing its review and issuing an action letter—such as an approval or a response detailing deficiencies that are preventing the application from being approved—for a specified percentage of applications within a designated period of time. FDA designates NDAs and BLAs as either priority—if the product would provide significant therapeutic benefits when compared to available drugs—or standard. FDA met the performance goals for both priority and standard NDAs and BLAs for 10 of the 11 fiscal years GAO examined; FDA did not meet either of the goals for FY 2008. Although FDA had not yet issued an action letter for all of the applications it received in FY 2011 and results are therefore preliminary, FDA was meeting the goals for both priority and standard NDAs and BLAs on which it had taken action. Meanwhile, FDA review time for NDAs and BLAs—the time elapsed between FDA’s receipt of an application and issuance of an action letter—increased slightly from FY 2000 through FY 2010. In addition, the percentage of NDAs and BLAs receiving an approval letter at the end of the first review cycle generally increased, although that percentage has decreased for priority NDAs and BLAs since FY 2007. FDA met most of its performance goals for efficacy supplements from FY 2000 through FY 2010. Specifically, FDA met the performance goals for both priority and standard efficacy supplements for 10 of the 11 fiscal years GAO examined. FDA review time generally increased during the analysis period for both priority and standard efficacy supplements. The percentage of priority efficacy supplements receiving an approval letter at the end of the first review cycle fluctuated from FY 2000 through FY 2010, ranging between 47 percent and 80 percent during this time. 
The results for standard efficacy supplements showed a steadier increase with the percentage of first-cycle approval letters rising from 43 percent for FY 2000 applications to 69 percent for FY 2010 applications. The industry groups and consumer advocacy groups we interviewed noted a number of perceived issues related to FDA’s review of drug and biologic applications. The most commonly mentioned issues raised by industry and consumer advocacy stakeholder groups were actions or requirements that can increase review times (such as taking more than one cycle to approve applications) and insufficient communication between FDA and stakeholders throughout the review process. Industry stakeholders also noted a perceived lack of predictability and consistency in reviews. Consumer advocacy group stakeholders noted issues related to inadequate assurance of the safety and effectiveness of approved drugs. FDA is taking steps that may address many of these issues, including issuing new guidance, establishing new communication-related performance goals, training staff, and enhancing scientific decision making. In commenting on a draft of this report, HHS generally agreed with GAO’s findings and noted that they reflect what the agency reported for the same time period. HHS also called attention to activities FDA has undertaken to improve the prescription drug review process.
Puerto Rico is an island about 1,000 miles southeast of Miami, Florida, and relies heavily on oceangoing vessels to move large volumes of goods to and from the island. Puerto Rico has maintained a strong trade relationship with U.S. suppliers and imports significantly more in trade volume, by weight, than it exports back to the United States. Of the total volume of trade between the United States and Puerto Rico in 2011, about 85 percent was shipped from the United States to Puerto Rico, while 15 percent went from Puerto Rico to the United States. Goods imported to Puerto Rico from the United States are primarily consumer goods, although 8 of the top 10 goods by volume imported into Puerto Rico are raw materials related to the manufacturing of pharmaceuticals and medical devices. Puerto Rico's major exports back to the United States are typically high-value finished products, particularly pharmaceutical products and medical devices. While trade between Puerto Rico and the United States is significant, Puerto Rico imports more by volume from foreign countries than from the United States, primarily due to imports of petroleum products. The Jones Act is one of the cabotage (also known as "coastwise") laws of the United States and applies to cargo shipped by waterborne transportation between two U.S. points. Cabotage laws are designed to limit the domestic transport of goods and passengers to a country's national-flag vessels. According to the Department of Transportation's (DOT) Maritime Administration (MARAD), under the Jones Act, all domestic water transportation providers compete under uniform laws and regulations, creating an even playing field. The United States is not alone in establishing and enforcing cabotage laws. Most trading nations of the world, according to MARAD, have or have had cabotage laws of some kind. Furthermore, these types of laws are not unique to the maritime industry; U.S. cabotage provisions apply, in some form or degree, to other transportation modes, such as aviation, rail, and trucking. Several federal agencies have a role in supporting, administering, and enforcing the Jones Act. In particular, MARAD's mission is to promote the maintenance of an adequate, well-balanced U.S. merchant marine to ensure that the United States maintains adequate shipbuilding and repair services, efficient ports, and a pool of merchant mariners for both domestic commerce and national defense. Although the Department of Defense (DOD) does not administer or enforce the Jones Act, the military strategy of the United States relies on the use of commercial U.S.-flag ships and crews and the availability of a shipyard industrial base to support national defense needs. As such, MARAD and DOD jointly manage the Voluntary Intermodal Sealift Agreement (VISA) Program, established for emergency preparedness, which includes over 300 commercial U.S.-flag vessels to provide DOD with assured access to emergency sealift capacity. See appendix II for more details on federal agencies' roles in relation to the Jones Act. Jones Act requirements have resulted in a discrete shipping market between Puerto Rico and the United States. Most of the cargo shipped between the United States and Puerto Rico is carried by four Jones Act carriers that provide dedicated, scheduled, weekly service using containerships and container barges—some of which have exceeded their expected useful life.
Dry and liquid bulk cargo vessels also operate in the market under the Jones Act, although some shippers report that qualified bulk cargo vessels may not always be available to meet their needs. Cargo moving between Puerto Rico and foreign destinations is carried by numerous foreign-flag vessels, typically as part of longer global trade routes. Freight rates in this market are determined by a number of factors, including the supply of vessels and consumer demand in the market, as well as costs that carriers face to operate, some of which are affected by Jones Act requirements. The average freight rates of the four major Jones Act carriers in this market were lower in 2010 than they were in 2006, as the recent recession has contributed to decreases in demand. In contrast, foreign-flag carriers operate under different rules, regulations, and supply and demand conditions and generally have lower costs to operate than Jones Act carriers. Shippers doing business in Puerto Rico reported that freight rates for foreign carriers going to and from foreign ports are often—although not always—lower than rates they pay to ship similar cargo from the United States, despite longer distances. However, data were not available to allow us to validate the examples given or verify the extent to which this occurred. According to these shippers, lower rates, as well as limited availability of qualified vessels in some cases, can lead companies to source products from foreign countries rather than the United States. The impact of rates to ship between the United States and Puerto Rico on prices of goods in Puerto Rico is difficult to determine with any precision and likely varies by type of good. A large majority of the maritime trade between the United States and Puerto Rico is shipped in containers by four Jones Act carriers: Crowley Puerto Rico Services, Inc.; Horizon Lines, Inc.; Sea Star Line, LLC; and Trailer Bridge, Inc. These carriers currently use 17 vessels to provide their shipping services—5 self-propelled containerships and 12 container barges that are pulled by tugboats (see table 1). As shown in the table, nearly all of the containerships and several of the barges used by these carriers are operating beyond their average expected useful life, which is about 30 years for a containership and about 27 years for a barge, according to Office of Management and Budget guidance. Containerships in this trade average 39 years of age, while barges average 31 years, although one carrier noted that, despite their advanced age, all its Jones Act vessels operating in the trade are fully compliant with Coast Guard rules and regulations. Furthermore, these averages reflect when the vessels were first constructed, but do not account for periodic refurbishments of many of the vessels to mitigate some of the effects of age and wear on a vessel and extend the expected useful service life. While the Jones Act vessels operating between the United States and Puerto Rico are all enrolled in MARAD and DOD's VISA program, these vessels would make a limited contribution to military sealift capabilities, according to DOD officials. According to DOD, the containerships—particularly lift-on/lift-off vessels—in this trade are less useful for military purposes than vessels with roll-on/roll-off capability; tugs and barges in this trade are generally considered of lesser military value because of their slow speed relative to self-propelled vessels.
Nonetheless, some of the vessels used for shipping between the United States and Puerto Rico have participated in past emergency responses, such as transporting goods to Haiti after the earthquake in 2010. In addition, according to DOD, whether or not a vessel is militarily useful, commercial U.S.-flag vessels provide employment to trained officers and unlicensed seamen, many of whom could be available to crew government-owned sealift vessels in times of war or national emergency.

The four major Jones Act carriers provide regularly scheduled, weekly service between ports in the United States and Puerto Rico. These carriers offer different types of services based on the types of ships they operate. Horizon and Sea Star offer approximately 3-day one-way service between various U.S. ports and Puerto Rico on self-propelled containerships, while Trailer Bridge and Crowley provide somewhat slower barge service—approximately 7 days one way. Some of these vessels also serve ports in the Dominican Republic and the U.S. Virgin Islands (see fig. 1). Some carriers have tailored their service specifically for shipping between the United States and Puerto Rico. For example, while foreign-flag carriers involved in international trade use standardized 20- and 40-foot containers, some Jones Act carriers provide shippers with a range of larger container units (45-, 48-, and 53-foot). The carriers' larger container units are the same size and type of equipment currently operated within the domestic U.S. trucking and rail transportation systems; thus, shippers can use the same packing systems they use for other modes of U.S. transportation, a benefit that provides cost savings to the carriers and shippers. This also enables more efficient loading and unloading of containers and trailers, and delivery to their final destination on the island. According to U.S. and Puerto Rico shippers we interviewed, the four carriers generally provide reliable, on-time service between the United States and Puerto Rico, allowing shippers to meet "just in time" delivery needs. In fact, many island importers' inventory management relies on prompt and regular shipping and receipt of needed goods to stock shelves, instead of warehousing goods, a benefit that helps minimize inventory storage costs. In particular, stakeholders told us that warehousing is costly in Puerto Rico because of high energy costs and because the Puerto Rico government imposes inventory storage taxes on certain goods both imported into and manufactured in Puerto Rico.

The remaining maritime trade between the United States and Puerto Rico is shipped on bulk vessels. Bulk cargo—including dry bulk goods such as fertilizer, animal feed, grains, and coal, and liquid bulk goods such as oil and gas—is imported in large volumes and is sometimes seasonal. According to MARAD officials, global bulk services are typically based on unscheduled operations, as opposed to scheduled container services. According to shippers we interviewed, these vessels are often under term charters, and a limited number of qualified Jones Act vessels may be available at any given time to meet shippers' needs. While not encompassing all dry and liquid bulk vessels qualified to provide service between the United States and Puerto Rico, shippers that we interviewed identified three Jones Act carriers—utilizing a total of six vessels—that offer bulk-shipping services between the United States and Puerto Rico (see table 2). Some of the vessels are also used to serve ports in the U.S. Virgin Islands, the Dominican Republic, and Haiti.
Numerous foreign carriers and foreign-flag vessels operate in Puerto Rico carrying cargo to and from foreign locations. According to data from the Puerto Rico Ports Authority, in April 2011 alone, 55 different foreign-flag cargo vessels—including tankers, containerships, and roll-on/roll-off cargo vessels, among others—loaded and unloaded cargo in the Port of San Juan, Puerto Rico. Over the entire year of 2011, 67 percent of the vessels that operated in the Port of San Juan were foreign-flag vessels, while 33 percent were U.S.-flag vessels. Some of the foreign carriers that serve Puerto Rico have extensive international operations—using vessels with larger capacity than the major Jones Act carriers—that stop at multiple ports along their shipping routes across the globe. Other foreign-flag carriers offer "feeder" services throughout the Caribbean from hubs in ports such as Kingston, Jamaica (see fig. 2). According to MARAD, vessels engaged in foreign trade are typically registered under "flag-of-convenience," or open, registries that have less stringent regulatory requirements than the U.S. flag registry. In 2011, most of the foreign-flag vessels calling at the Port of San Juan, Puerto Rico, were registered under the Panamanian flag, followed by the Bahamian flag, the flag of Antigua and Barbuda, and the Liberian flag. Foreign carriers can also use vessels that are built anywhere in the world, and the average age of foreign-flag vessels (around 11-12 years) is significantly less than the average age of Jones Act vessels.

Freight rates are set based on a host of supply and demand factors in the market, some of which are affected directly or indirectly by Jones Act requirements. However, because so many other factors besides the Jones Act affect rates, it is difficult to isolate the exact extent to which freight rates between the United States and Puerto Rico are affected by the Jones Act. The Puerto Rico trade, much like the maritime cargo trade around the world, has been affected by reduced demand overall because of the recession. Puerto Rico fell into a recession in 2006—before the onset of recession for the U.S. economy—and has had much more difficulty recovering from it, according to government sources. Moreover, the population of the island has been decreasing over the past decade. This lower demand relative to supply (i.e., vessel capacity) is a factor that would likely be putting downward pressure on freight rates in recent years, as carriers would have more difficulty selling their existing capacity. According to the data provided by the four major Jones Act carriers, average freight rates from the United States to Puerto Rico declined about 10 percent from 2006 through 2010, while rates from Puerto Rico to the United States declined about 17 percent. As demand decreases relative to supply, carriers will adjust their services in response. In this market, for example, according to Crowley, the company reduced its service to Puerto Rico by one barge and one less weekly sailing from Jacksonville in 2009, primarily in response to decreased demand. Also, more recently in July 2011, Sea Star discontinued its service from Philadelphia, Pennsylvania, because of a lack of demand. Some shippers and business representatives we spoke with were concerned with the possibility that, given the weak demand in the market, some carriers may not be able to sustain the level of services they currently provide in the Puerto Rico market.
In certain specific markets, however, demand for Jones Act transportation between Puerto Rico and the United States may be increasing. For example, according to one shipper, there may be increased demand for shipping refined petroleum and gas products. For natural gas, this appears likely because of the expected increased use of this fuel for electricity generation; for refined petroleum products, it may be occurring because of the closure of the refinery on St. Croix, U.S. Virgin Islands, that had previously provided petroleum products to Puerto Rico. However, several shippers in these markets told us that vessels are often not available to provide service. Where the supply of ships is limited relative to demand, there will be upward pressure on freight rates. Typically in such a scenario, carriers and shipowners will respond to higher rates in the short term by repositioning existing capacity to serve that market, thus bringing supply and demand into balance. However, if qualified Jones Act vessels are not available, such adjustments may not occur, since existing capacity operated by foreign-flag carriers cannot enter this market. Over the longer term, the market may adjust through new shipbuilding for the Jones Act trade, as long as expectations of demand and freight rates are sufficient to support that capital investment. Recent announcements from two Jones Act carriers concerning plans to build new containerships and tankers indicate that the U.S.-flag industry is responding to the emergence of new market demand.

Operating costs for carriers are another supply factor that contributes to the determination of freight rates. Most of the carriers' operating costs (about 69 percent, based on carrier data for 2011) are non-vessel operating costs—terminal and port costs, among others—that are not directly affected by Jones Act requirements and would be similarly borne by any carrier operating between the United States and Puerto Rico. Vessel operating costs (which include crew costs, insurance, maintenance and repair, and fuel costs, among others) comprise about 31 percent of the carriers' operating costs on average. Some vessel operating costs are affected by rules and regulations related to the Jones Act and operating under the U.S. flag. Most significantly, Jones Act carriers must hire predominantly U.S.-citizen crews, and according to data provided by the major Jones Act carriers, crew costs in this trade represented an average of about 20 percent of vessel operating costs in 2011. According to MARAD, the standard of living in the United States, labor agreements negotiated with mariner unions, benefits included in overall compensation, and government manning requirements all affect crew costs. By contrast, foreign-flag carriers operating under an open registry have the flexibility to hire crews from around the world and can therefore avoid the higher costs associated with U.S. crews. While not specific to the carriers or the vessels operating between the United States and Puerto Rico, according to a MARAD report, the combination of these various requirements and work rules can result in overall crewing costs for U.S.-flag operators that are roughly 5 times greater than crewing costs for foreign-flag carriers, on average.
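To put these percentages together, the short arithmetic sketch below combines the rounded cost shares and the roughly 5-to-1 crewing-cost ratio cited above; it uses only those approximate figures, so the result is illustrative rather than an estimate derived from carrier data.

```python
# Combine the approximate cost shares cited above (rounded figures from
# carrier data for 2011 and a MARAD report; illustrative only).

VESSEL_SHARE = 0.31             # vessel operating costs as share of total costs
CREW_OF_VESSEL = 0.20           # crew costs as share of vessel operating costs
US_TO_FOREIGN_CREW_RATIO = 5.0  # U.S. crewing costs ~5x foreign, per MARAD

# Crew costs as a share of a Jones Act carrier's total operating costs
crew_of_total = CREW_OF_VESSEL * VESSEL_SHARE   # ~6 percent

# All else equal, the total-cost saving if crewing costs fell to the
# foreign-flag level (i.e., to one-fifth of the U.S. level)
crew_saving_of_total = crew_of_total * (1 - 1 / US_TO_FOREIGN_CREW_RATIO)

print(f"Crew costs: ~{crew_of_total:.1%} of total operating costs")
print(f"Implied total-cost gap from crewing alone: ~{crew_saving_of_total:.1%}")
```

On these rounded figures, the crewing requirement by itself implies a total-operating-cost differential on the order of 5 percent, all else equal—consistent with the point that crew costs are only one of several cost differences between Jones Act and foreign-flag carriers.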
In addition, U.S.-flag vessels are subject to government safety inspections and must comply with a variety of construction, safety, and environmental regulatory requirements, which affect their costs. According to the MARAD report, the lack of government safety inspections of foreign-flag vessels operating under open registries helps provide such vessels with increased operating flexibility and lower operating costs.

According to Jones Act carriers and other stakeholders, some operating costs have been increasing. For example, fuel is one of the largest vessel operating costs for the Jones Act carriers in this market—representing an average of about 64 percent of the four major Jones Act carriers' vessel operating costs in 2011—and fuel costs have increased substantially over the last 10 years. While fuel costs are not directly affected by Jones Act requirements, older vessels burn fuel faster and less efficiently than newer vessels, and the age of some of the Jones Act carriers' vessels has contributed to increasing fuel costs. However, MARAD noted that the majority of the Jones Act vessels are barges towed by rebuilt tugboats at lower speeds than self-propelled containerships, which makes barges relatively fuel efficient compared with self-propelled vessels. Furthermore, older vessels incur greater maintenance and repair expenses than newer vessels. For the major carriers in the Puerto Rico market, this expense represented an average of about 4 percent of vessel operating costs in 2011. While the age of these vessels is not a direct result of the Jones Act, to some extent the U.S.-build requirement and the high costs of U.S.-built vessels may delay recapitalization decisions or render such decisions infeasible. Because foreign carriers can typically use vessels that are built anywhere in the world, rather than having to use generally more expensive U.S.-built vessels, they have more flexibility to recapitalize their fleets. As mentioned, foreign-flag vessels are, on average, newer and as such will generally benefit from lower overall fuel and ongoing maintenance costs.

According to shippers and carriers, several other factors not directly related to Jones Act requirements in the Puerto Rico market contribute to how freight rates are set, including the following: For approximately 85 percent of the cargo moving between the United States and Puerto Rico, freight rates are set on a negotiated basis under contract. Although volume discounts are not unique to this market or the global maritime shipping industry, large-volume shippers have more leverage to negotiate contracts with lower rates, while small-volume shippers or those that require infrequent service will likely pay higher rates. Based on our interviews with shippers, the negotiated rates vary substantially for shippers based on their companies' size and regularity of use of shipping services. The short travel distance between the United States and Puerto Rico makes it possible for barge operators to compete with self-propelled containership operators. As we noted, barge service takes longer to transport goods than self-propelled containerships. However, barge vessels are less expensive to operate and maintain. As such, according to data provided by the four major Jones Act container carriers, freight rates for barge service from the United States to Puerto Rico are generally lower than rates for self-propelled containerships. For shippers with goods that are less time sensitive, barges offer a less expensive option for service between the United States and Puerto Rico.
However, according to some shippers we interviewed, when they periodically require faster service or service from ports outside Florida, there are fewer competitive alternatives, since only two carriers offer such service. Some of the cargo imported from the United States is temperature-controlled perishable goods, such as dairy, meat, and agricultural products. According to representatives of the Puerto Rico Farm Bureau, the cost and reliability of shipping perishable food items is important because the island has less than a week's supply of perishables at any given time. Some shippers reported paying substantially more for service using refrigerated containers—sometimes a few thousand dollars more per container—compared with a non-refrigerated container. Although higher prices for refrigerated cargoes are not unique to this market or the global maritime shipping industry, these shippers and other representatives of an association of food importers perceived less competition in this particular market segment. According to the four major Jones Act carriers, vessels are typically about 80 percent full of their total container capacity moving southbound from the United States to Puerto Rico, and only 20 percent full of total container capacity moving northbound from Puerto Rico to the United States. The lower demand on return legs of the routes (known as "backhaul") results in relatively lower freight rates for this traffic. According to data provided by the four carriers, average freight rates for the return leg were about 55 percent less than the average rates from the United States to Puerto Rico in 2010. Some of the shippers we spoke with said low rates for the backhaul shipping services are beneficial to their business.

Another factor that could have affected freight rates in the past was conduct by certain carriers that led to a Department of Justice antitrust investigation. The investigation found that some Jones Act carriers conspired to fix rates from at least as early as May 2002 until at least April 2008. In addition, with respect to a class action lawsuit against various Jones Act carriers, in August 2011, the United States District Court for the District of Puerto Rico granted final approval of settlement agreements. The settlement terms give class action members the option of freezing the base rates—not including other charges or fees, such as fuel surcharges—of any shipping contract that exists with three of the Jones Act carrier defendants for a period of 2 years.

Foreign carriers operate in a different market with different characteristics and, as mentioned, generally have lower vessel operating costs than Jones Act carriers. As with the Jones Act market, rates for shipments between Puerto Rico and foreign countries are determined by various supply and demand factors. For example, some foreign carriers' longer trade routes allow them to spread their costs over more containers or cargo and achieve economies of scale that are not available to Jones Act carriers providing dedicated service between the United States and Puerto Rico. In addition, while the recession has reduced demand in global shipping and put downward pressure on freight rates, because foreign carriers and shipowners operate in a global market, they may have more flexibility than Jones Act carriers to reposition vessel capacity in response to market- or product-specific fluctuations in demand.
According to representatives of several shippers we spoke with, freight rates offered by foreign carriers are often lower than those of Jones Act carriers for shipping the same or similar goods from more distant foreign locations. Shippers provided a number of examples of specific rate differentials, but we were unable to validate these differentials or estimate an average differential because the necessary data could not be obtained: most cargo moves under negotiated contract rates that are confidential, and foreign carriers were not responsive to our requests for information. Furthermore, we were unable to determine the specifics of the services being provided for the rate examples we were given (e.g., delivery times, reliability of the service, etc.), and therefore, in some instances, the rate examples may not be comparable. Nonetheless, some companies operating in Puerto Rico told us that they may not purchase goods from U.S. sources because of higher transportation costs on Jones Act vessels compared with foreign-flag vessels. In some instances, they may instead purchase the same or a closely substitutable good from a foreign country. This was particularly evident in the bulk shipping market. For instance, according to representatives of the Puerto Rico Farm Bureau, the rate difference between Jones Act carriers and foreign carriers has led farmers and ranchers on the island to more often source animal feed and crop fertilizers from foreign sources than from U.S. domestic sources, even though commodity prices were stated to be similar. They provided an example that shipping feed from New Jersey by Jones Act carriers costs more per ton than shipping from Saint John, Canada, by a foreign carrier—even though Saint John is 500 miles farther away. According to the representatives, this cost differential is significant enough that it has led to a shift in sourcing these goods from Canada. Other companies involved in food importing gave additional examples of corn and potatoes being sourced from foreign countries rather than the United States, which they attributed to the lower cost of foreign shipping. However, data were not available to verify the extent to which such changes in sourcing occur because of higher transportation costs on Jones Act vessels.

Sourcing decisions in the market for petroleum products may also be affected by differences in freight rates between Jones Act vessels and foreign-flag vessels and by the availability of qualified Jones Act vessels. An oil and gas importer in Puerto Rico told us that the company makes purchasing decisions based on the total price of oil or gas—including any applicable duties or other charges—plus transportation costs. The company looks at total prices from numerous suppliers around the world—including U.S. suppliers—but generally does not purchase from U.S. suppliers because the total cost is higher as a result of the differential in transportation costs. Representatives noted that in some cases the company does not purchase from U.S. suppliers because of a lack of available Jones Act vessels to ship the product from U.S. ports. In another example, representatives of airlines purchasing jet fuel for use in Puerto Rico told us that they typically import fuel to the island from foreign countries, such as Venezuela, rather than from Gulf Coast refineries.
They do so because of difficulty in finding available Jones Act vessels to transport jet fuel and, when vessels are available, the high cost of such shipments compared with shipping the product from foreign countries. These representatives noted that jet fuel availability in certain areas of the East Coast of the United States, as well as in Puerto Rico, was recently adversely affected by the closures of several refineries, including the one in St. Croix, U.S. Virgin Islands.

The cost and availability of vessels can also affect future sourcing decisions. For example, the Puerto Rico Electric Power Authority (PREPA) is planning to transition its primary power generation fuel from oil to natural gas and expects its natural gas consumption to increase substantially in the future. PREPA currently purchases most of its natural gas from Trinidad and Tobago and transports it on foreign-flag vessels, but it is developing plans to purchase more natural gas from U.S. suppliers beginning in 2014 because of the expected lower price of natural gas from the United States. To do so, Jones Act-qualified LNG tankers would need to be available. However, PREPA officials voiced concerns about the availability of eligible vessels, since none currently operates between the United States and Puerto Rico. They said the cost to build and operate a new LNG tanker under Jones Act requirements could result in high shipping costs that offset the savings from purchasing natural gas from the United States. Some foreign-flag LNG vessels are eligible to apply for an exemption under statute, but PREPA officials were concerned that these vessels may not be available because they are currently under long-term contracts. Furthermore, because many of these vessels may be 16 years old or older, officials were concerned that they may not be as efficient or as safe as newer vessels.

We examined trade data for various commodities mentioned by shippers to see the extent to which these goods are sourced from other countries. Some commodities showed high percentages of foreign sourcing, while others were either split more evenly or mostly sourced domestically. It is difficult to discern the effect of any one factor, such as freight rates, on the sourcing of imports, because many factors can affect a business's sourcing decision at any given time, including the availability of ships and the price of the goods. In any case, to the extent that the lack of available vessels may be causing shippers to seek foreign sources for some products, this lack of availability may signal the need for new Jones Act vessels to enter this trade. However, if carriers do not believe that the rates they will be able to charge in the future would be sufficient to support such investments, new vessels might not enter the trade and the products may continue to be sourced from non-U.S. sources. Recent announcements from two Jones Act carriers concerning plans to build new vessels indicate the willingness of the U.S.-flag industry to respond to market demand.
The prices of goods sold in Puerto Rico are determined by a host of supply and demand factors, similar to freight rates, and therefore the impact of any costs to ship between the United States and Puerto Rico on the average prices of goods in Puerto Rico is difficult, if not impossible, to determine with precision. On the demand side, key factors include the state of the economy and the associated level of income of consumers, the tastes of potential consumers for various goods, and the extent to which consumers have ready substitutes (of other goods or the same good from elsewhere) available to meet their needs. For example, if consumers have ready substitutes available to them, it may be more difficult for retailers to pass on transportation costs in prices. On the supply side, a host of cost factors is also important, transportation costs among them. Some shippers we interviewed told us that transportation costs to Puerto Rico from the United States represent a minimal portion of the costs of goods they sell in Puerto Rico, while other shippers stated that these costs were more significant. These differences in the impact of transportation costs appear to vary depending on the nature of the shipper and the shipping requirements of the goods. In particular, we were told that prices for some goods that require fast delivery or refrigerated containers—particularly food products subject to spoilage—may be more affected by transportation costs, because transportation costs represent a higher proportion of the total cost of the goods.

We were also told that other cost factors that may influence pricing are somewhat unique to Puerto Rico. Some shippers noted that doing business on the island is expensive relative to costs for similar businesses in the United States. In particular, some shippers stated that storage and distribution in Puerto Rico can be more costly than in the United States and are factors in the prices at which goods sell. Some shippers told us that their decisions on pricing are influenced by the extent of competition in Puerto Rico for the goods they provide. For example, according to a major U.S. company doing business in Puerto Rico, its pricing strategy is dependent on the pricing of the local competitors on the island. Company representatives explained that their prices may or may not be similar in Puerto Rico compared to U.S. mainland stores, but that those prices are not driven by shipping costs. Further, for some larger chain stores, pricing decisions are made at a corporate level, so that prices for goods often do not differ considerably from location to location, despite variances in transportation costs. For example, according to a major U.S. chain store operating in Puerto Rico, its merchants often want to be able to offer a consistent everyday price in its stores. Thus, the company decides, in some cases, to price some goods in Puerto Rico the same as in U.S. stores, at potentially reduced profitability for those goods sold in Puerto Rico.

Many of the shippers and other stakeholders we interviewed expressed the view that allowing foreign carriers to enter this trade would create a more competitive marketplace with lower freight rates, which could, in turn, affect shippers' business decisions and product prices.
For example, shippers told us that lower freight rates between the United States and Puerto Rico could result in shippers choosing to source more goods from the United States as opposed to foreign countries, and that lower rates could lead to lower prices for products sold to consumers in Puerto Rico. We were also told that a broader array of providers available in the international market would help to ensure that specific services and vessels are always available to meet shippers' needs. However, the effect on competition and freight rates from allowing foreign carriers to enter this trade is uncertain and depends on a variety of factors. Foreign carriers operating in the U.S. coastwise trade could be required to comply with other U.S. laws and regulations, even if Puerto Rico were exempted from the Jones Act, which could increase foreign carriers' costs and may affect the rates they could charge. We reported in 1998, and continue to find, that arriving at an accurate estimate of the costs to foreign carriers of complying with U.S. laws would be very difficult, in part because the estimate would depend heavily on which laws are considered applicable and on how they are applied. Federal agency stakeholders we talked with generally indicated that they were reluctant to speculate on the extent to which U.S. laws might be applicable to such foreign carriers in the absence of Jones Act requirements. However, we reported in 1998 that, in particular, additional taxes and labor costs might be incurred. Some stakeholders contend, albeit speculatively, that if these costs were estimated and included, any rate advantage foreign carriers may have over Jones Act carriers would be lessened. For example, income generated by foreign corporations operating foreign-flagged vessels in the domestic trade could be subject to U.S. taxation, depending on the circumstances. In addition, if foreign-flagged vessels were to spend most of their time in U.S. waters—as they might if they were to provide dedicated service between the United States and Puerto Rico—it would be necessary to obtain for any foreign crewmembers an immigration status that permits them to engage in employment in the United States, requirements that could increase costs.

Regardless of the legal questions above, entry by foreign carriers could have a number of other consequences. Although complying with U.S. laws could lessen any cost advantage to foreign carriers, current Jones Act carriers could still be operating at a cost disadvantage. Economic theory would suggest that entry into a market by lower-cost providers would likely alter the market dynamics such that higher-cost producers may have difficulty continuing to compete in the market. To the extent that foreign carriers can use cost advantages to charge lower rates and take market share from the existing carriers, such entry could lead to lost service by Jones Act carriers, their exit from the market, or consolidation among carriers serving the market. Current Jones Act carriers might also opt to provide service under a foreign flag to avoid costs associated with the U.S. flag. According to MARAD officials, unrestricted competition with foreign-flag operators in the Puerto Rico trade would almost certainly lead to the disappearance of most U.S.-flag vessels in this trade. MARAD officials noted that U.S. carriers currently do not typically compete with foreign-flag carriers in other Caribbean markets under the U.S. flag.
Where U.S. carriers do compete with foreign-flag carriers, they typically operate non-U.S.-flag vessels, suggesting that U.S.-flag vessels may not be able to compete successfully against foreign-flag vessels if Jones Act restrictions were lifted for Puerto Rico. To the extent that the number of carriers operating under the U.S. flag decreases under this scenario, expectations for future orders for new vessels built in U.S. shipyards could be reduced or eliminated—which is discussed in more detail later in this report—and the number of U.S. mariners could likewise decrease. According to MARAD, up to 1,400 mariners were employed full time crewing Jones Act vessels in Puerto Rico in 2011, including on offshore service vessels, harbor tugs, ferries, and barge services, in addition to the vessels we identified earlier (see tables 1 and 2). A decline in the number of U.S.-flag vessels would result in the loss of jobs that employ skilled mariners needed to crew the U.S. military reserve and other deep-sea vessels in times of emergency. Furthermore, according to MARAD, the loss of U.S.-flag service would reduce the agency's ability to ensure that marine transportation serves the Puerto Rico economy.

The nature of the service provided between Puerto Rico and the United States could also be affected by a full exemption from the Jones Act. In particular, foreign carriers that currently serve Puerto Rico as part of a multiple-stop trade route would likely continue this model to accommodate other shipping routes to and from other Caribbean destinations or world markets, rather than provide dedicated service between the United States and Puerto Rico, as the current Jones Act carriers provide. If this were to occur, some stakeholders expressed concerns about the effect that such an altered shipping service would have on the reliability of service to and from the United States. For example, longer multi-port trade routes make it difficult to ensure that scheduled service will be consistently reliable, because carriers are more likely to experience weather delays or delays at ports, and could even intentionally bypass ports on occasion to make up lost travel time. According to some shippers, reduced reliability of service could result in shippers needing to keep larger inventories of products and could thus increase warehousing and inventory-related costs for companies in Puerto Rico. As we described previously, importers' inventory management relies on prompt and regular shipping and receipt of needed goods to stock shelves, which is less costly than warehousing goods on the island. Additionally, some stakeholders expressed concern about the possible loss of convenient and inexpensive backhaul service. If, under new market conditions, carriers choose not to provide dedicated service, then backhaul services from Puerto Rico to the United States would also be part of longer multi-port trade routes and may not be direct from Puerto Rico to the United States. Because of limited volumes in this market, the result could be sporadic service or higher rates.

Rather than allowing foreign carriers to provide service between the United States and Puerto Rico, a different modification advocated by some stakeholders would be to make vessels engaged in trade between the United States and Puerto Rico eligible for an exemption from the U.S.-build requirement of the Jones Act. This would allow U.S.-flag carriers to purchase or use foreign-built vessels for shipping between the United States and Puerto Rico.
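Because the case for such an exemption turns on how a vessel's purchase price feeds into the freight rates needed to recover it—a dynamic discussed below—the following sketch may help make that arithmetic concrete. It uses the standard capital recovery formula; the purchase price and discount rate are purely illustrative assumptions, and only the approximate 50 percent build-cost differential for containerships and the roughly 30-year expected useful life come from figures cited in this report.

```python
# Illustrative sketch of how vessel purchase price translates into an
# annual capital charge to be recovered through freight rates. The
# purchase price and discount rate below are hypothetical assumptions
# for illustration only; the ~50 percent foreign-build price
# differential for containerships and the ~30-year expected useful
# life are the approximate figures cited in this report.

def annual_capital_charge(price: float, rate: float, life_years: int) -> float:
    """Standard capital recovery factor: the level annual payment that
    amortizes `price` over `life_years` at discount rate `rate`."""
    return price * rate / (1 - (1 + rate) ** -life_years)

US_BUILT_PRICE = 200_000_000          # hypothetical U.S.-built containership
FOREIGN_PRICE = US_BUILT_PRICE * 0.5  # ~50 percent less, per stakeholders
RATE = 0.08                           # hypothetical cost of capital
LIFE = 30                             # approximate expected useful life

for label, price in [("U.S.-built", US_BUILT_PRICE), ("Foreign-built", FOREIGN_PRICE)]:
    charge = annual_capital_charge(price, RATE, LIFE)
    print(f"{label}: ${charge / 1e6:.1f} million per year over {LIFE} years")
```

Under these illustrative assumptions, halving the purchase price halves the annual capital charge that freight rates must cover, which is one way to see why the cost of U.S.-built vessels can weigh on recapitalization decisions.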
According to industry stakeholders we interviewed, foreign-built barges can be priced about 20 percent less than U.S.-built barges, and foreign-built containerships can be priced 50 percent less than similar U.S.-built containerships. According to proponents of this change, the availability of lower-cost vessels could encourage existing carriers to recapitalize their aging fleets. As previously mentioned, many of the Jones Act vessels in this trade are operating beyond the end of their expected useful life, and according to some stakeholders, the high cost of building new U.S. vessels, as well as decreased demand in the market, may result in carriers deferring recapitalization decisions. Proponents also point out that newer, more efficient vessels generally have lower operating costs than vessels currently operating in the trade and thus may reduce operating costs for carriers. In addition, according to proponents, the availability of lower-cost vessels would encourage additional competition, particularly in those sectors where demand may be increasing and available vessels are lacking, such as bulk cargo shipping.

Regardless of whether vessels are U.S.-built or foreign-built, the costs of any new vessels will need to be recouped over the life of the vessel through freight rates. Should carriers decide to move forward with recapitalizing their fleets, they will need to decide whether expected freight rates over many years are sufficient to support the purchase of new vessels. The vessels currently involved in the trade, because they have largely been paid for and depreciated, have negligible ongoing capital costs. Purchasing new vessels will result in higher ongoing capital costs for carriers, although these higher capital costs will be offset to some extent by reduced fuel, maintenance, and repair costs. Given the current economic conditions in Puerto Rico and decreases in overall demand, it could be challenging for some carriers to invest in new vessels. The higher cost of U.S.-built vessels relative to foreign-built vessels—particularly containerships—exacerbates that challenge. However, one carrier recently placed an order for two new U.S.-built vessels for the Puerto Rico trade, and another Jones Act carrier recently purchased two new tankers for use in the Gulf of Mexico, indicating that—despite the currently poor economic conditions—the higher cost of U.S.-built vessels is not a barrier in their case. Nonetheless, allowing carriers to purchase or charter new or existing foreign-built vessels would presumably reduce the expense of recapitalizing the fleet and make it more likely that carriers would choose to invest in newer vessels, because they would be better able to recoup that investment.

Foreign shipyards can build vessels for less than U.S. shipyards for several reasons. For example, foreign shipyards—particularly large yards in China, Japan, and South Korea—enjoy considerable economies of scale because of long production runs of relatively standard vessel designs. Long production runs reduce labor costs per unit—workers become more efficient by repeating their jobs frequently across a high volume of vessels—and support a strong industrial base of parts and material suppliers. U.S. shipyards typically build customized vessels according to customer design specifications that might be used to build only one or a few vessels.
For self-propelled vessels such as containerships specifically, which are manufactured in small volumes in the United States, U.S. shipyards often cannot take advantage of the efficiencies of scale afforded by large-series production and common design orders. According to one shipyard we interviewed, when they do have longer production runs, U.S. shipyards—like foreign shipyards—are able to develop efficiencies of scale and reduce costs. Some foreign shipyards also tend to be more operationally and cost efficient in the production steps of building a vessel and the amount of labor associated with those steps, according to representatives from one U.S. shipyard we interviewed. However, because some U.S. shipyards are subsidiaries of, or partners with, foreign shipyards, many of these types of efficient production processes—such as streamlined workflow and sequencing, and consistent workforce collaboration—are being adopted in these U.S. shipyards. Other factors, such as lower wages in foreign shipyards and the variety of construction, safety, and environmental regulatory standards that apply in U.S. shipyards—such as required shipyard safety measures when using certain paints, such as those containing lead—can also reduce costs for foreign shipyards compared with U.S. shipyards.

Because of these price differentials, eliminating the U.S.-build requirement and allowing Jones Act carriers to deploy foreign-built vessels to serve Puerto Rico could reduce or eliminate U.S. shipyards' expectations for future orders from this market and could have serious implications for the recent order for two U.S.-built ships for this market from one of the Jones Act carriers. According to MARAD and DOD officials and representatives of U.S. shipyards, orders for commercial vessels have become significantly more important to retaining the industrial shipbuilding base because military and other non-commercial vessel orders have declined. Although the number of vessels that could likely be replaced is small, it would equate to a substantial order for U.S. shipbuilders that could help sustain their operations, as well as help them retain a skilled workforce and supplier base. Absent new orders, that workforce could be put at risk. Shipyards and other supporters of the Jones Act also raise concerns that allowing an exemption for Puerto Rico would open the possibility of allowing an exemption for all noncontiguous markets subject to the Jones Act, such as Hawaii and Alaska, as well as coastal markets, a situation that could result in more significant effects on shipyards and the shipyard industrial base needed by DOD. According to DOD officials, to the extent that Jones Act markets are unable to sustain a viable reserve fleet, DOD would have to incur substantial additional costs to maintain and recapitalize a reserve fleet of its own.

The Jones Act was enacted nearly a century ago to help promote a viable maritime and shipbuilding industry that would, among other things, provide transportation for the nation's maritime commerce and be available to serve the nation in times of war and national emergency. The possible effects of the Act on Puerto Rico, as well as on U.S. businesses, are manifold. The Act may result in higher freight rates—particularly for certain goods—than would be the case if service by foreign carriers were allowed.
At the same time, the law has helped to ensure reliable, regular service between the United States and Puerto Rico—service that is important to the Puerto Rican economy. Because of freight rate differentials or the lack of availability of Jones Act vessels for certain products, the Act may cause businesses in Puerto Rico to import goods from foreign locations when the same goods are readily available from U.S. providers. However, it is not possible to measure the extent to which rates in this trade are higher than they otherwise would be: the extent to which rules and regulations would apply to international carriers' vessels that might serve this trade is not known, and so many factors influence freight rates and product prices that the independent effect and associated economic costs of the Jones Act cannot be determined. Finally, the original goal of the Act remains important to military preparedness and to the shipbuilding and maritime industries, but understanding the full extent and distribution of the costs that underlie these benefits is elusive. This circumstance raises the question of whether the status quo presents the most cost-effective way to achieve the goals expressed in the Jones Act. Ultimately, addressing these issues would require policymakers to balance complex policy trade-offs with the recognition that precise, verifiable estimates of the effects of the Act, or its modification, are not available.

We provided a draft of this report to the departments of Commerce, Defense, Homeland Security, Justice, and Transportation for review and comment. Commerce, Defense, and Justice had no comments. Homeland Security and DOT provided technical clarifications, which we incorporated as appropriate. DOT also generally agreed with the information presented in the report but noted that many of the issues related to the Jones Act are both complex and multifaceted. In particular, DOT noted that while the report highlights issues that could affect the number of new vessels added to the Jones Act trade, carriers have recently purchased or announced plans to purchase new U.S.-built ships for the petroleum and container trades. DOT further noted that consideration of a ship's age, cost, and efficiency, and their effect on the Jones Act trade, is influenced by numerous factors, such as the types of ships involved, their condition, and the way in which they are maintained and operated. In addition, to verify information, we sent relevant sections of the draft report to various shippers and stakeholders, the Shipbuilders Council of America, and the four major Jones Act carriers, which also provided technical comments that we incorporated as appropriate.

As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to other interested congressional committees and members; the Secretary of Commerce; the Secretary of Defense; the Secretary of Homeland Security; the U.S. Attorney General; the Secretary of Transportation; the Chairman of the Surface Transportation Board; the Chairman of the Federal Maritime Commission; the Director, Office of Management and Budget; and others. The report is also available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or by e-mail at stjamesl@gao.gov.
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III.

To address the two objectives, we reviewed relevant literature related to maritime shipping between the United States and Puerto Rico, and between Puerto Rico and other foreign locations, based on search results from databases such as ProQuest®, as well as trade publications, industry stakeholder groups, and the Internet. We also reviewed and synthesized published reports from government sources that discussed and analyzed effects of the Jones Act, including reports from GAO, the U.S. International Trade Commission, the Maritime Administration (MARAD), Customs and Border Protection (CBP), the Congressional Research Service, the Congressional Budget Office, the U.S. Department of Energy, and the Federal Reserve Bank. We also reviewed literature that described the nature of and economics associated with global shipping markets. Furthermore, we synthesized information on the legal framework that governs U.S. domestic cargo shipping between the United States and Puerto Rico and in other domestic noncontiguous markets. This synthesis included information on the Jones Act, its requirements and pertinent legislative history, and other related laws and regulations. We also reviewed documentation from CBP and the Coast Guard, the federal agencies responsible for enforcing and administering Jones Act provisions, as well as U.S. vessel documentation laws and requirements and the process for granting administrative waivers of Jones Act requirements.

We collected and analyzed data relevant to these markets and gathered the perspectives and experiences of numerous public and private sector stakeholders through interviews and written responses. We gathered information from the four major Jones Act carriers—Crowley Maritime Corporation; Horizon Lines, Inc.; Sea Star Line; and Trailer Bridge, Inc.—and Moran Towing Corporation about their business operations in providing shipping services between the United States and Puerto Rico, including information about the vessels used, the ports served, the routes operated, the frequency of service, and the rates charged for shipping. We analyzed information on capital and operating costs for the four major carriers to understand how aspects of the Jones Act affect their costs of doing business. We interviewed representatives of these companies with respect to the economics of the market, differences between their services and services provided by foreign carriers, and implications associated with certain potential changes to the Jones Act. Nine of the ten foreign carriers we contacted declined to be interviewed, although representatives from two foreign carriers participated in a larger meeting of stakeholders held in Puerto Rico. As a result, we were not able to gather detailed cost or rate information from foreign carriers that make port calls in Puerto Rico.

We interviewed numerous U.S. industry associations and a selection of companies in the United States and Puerto Rico that purchase shipping services from Jones Act and foreign carriers to obtain a range of different perspectives on these shipping markets and the impacts of those markets on their operations, and to understand different perspectives on the implications associated with changes to the Jones Act. We interviewed representatives of the American Maritime Partnership, the American Maritime Congress, and the Chamber of Shipping of America.
We interviewed representatives of 10 U.S. and 6 Puerto Rico companies that ship products between the United States and Puerto Rico, covering a range of major business areas, such as pharmaceuticals, biotechnology, personal and household consumer products, food and beverage products, and large retail industries. We obtained information and discussed their perspectives on the nature of the maritime trade markets in Puerto Rico and the Caribbean Basin, the reliability of shipping service, the volume and products being shipped, how they determine product prices and how shipping costs may or may not affect those prices, and how the Jones Act may affect these markets. We selected the U.S. companies within the major business areas by assembling a list from Internet searches and from a list, provided by one Jones Act carrier, of customers that purchase shipping services in the Puerto Rico trade. We divided the list into five industry categories and randomly selected six in each category, for a total of 30 companies to contact. We conducted semistructured telephone interviews with the 10 companies that agreed to talk to us. We selected the Puerto Rico companies by requesting that representatives of six of the Puerto Rico trade associations we met with while visiting Puerto Rico provide a diverse list of about 20 businesses, based on their unique knowledge of their members, that they considered generally representative of the different business sectors within their association's membership base. We requested that the list include a size range of large, medium, and small companies in terms of the number of monthly shipments imported or exported. We received a list of 20 companies from each of three of the six associations. In consultation with a GAO design methodologist, we randomly selected 15 companies, five from each list, to contact. We conducted semistructured telephone interviews with the 6 Puerto Rico companies that agreed to talk to us. Because we selected a nonprobability sample of the companies to interview, the information we obtained from these interviews cannot be generalized to all U.S. and Puerto Rico companies (shippers) that purchase shipping services from Jones Act carriers between the United States and Puerto Rico.

We also interviewed representatives from five shipyards in the United States to understand their capabilities to build vessels for the Puerto Rico trade, how the Jones Act affects their operations, and differences in costs associated with shipbuilding in the United States and in shipyards abroad. We selected the shipyards based on the size of their operations, the types of vessels built, and recommendations from representatives of the Shipbuilders Council of America. They included the Bay Shipbuilding Co., Gladding-Hearn Shipbuilding, Kvichak Marine Industries, National Steel and Shipbuilding Company (NASSCO), and VT Halter Marine shipyards. We also visited the NASSCO shipyard in San Diego, California, to meet with representatives. Furthermore, we interviewed representatives from General Dynamics' American Overseas Marine to discuss the market for and availability of LNG tankers for transporting LNG cargo from the United States to Puerto Rico currently and in the future. Because we selected these shipyards as part of a nonprobability sample, our findings cannot be generalized to all U.S. shipyards.

We also visited Puerto Rico to meet with a range of stakeholders to obtain information and perspectives on the range of views regarding how the Jones Act affects Puerto Rico, the shipping market, and the broader economy.
We met with government officials from CBP responsible for the San Juan and Ponce ports of entry, the Government Development Bank, the Puerto Rico Electric Power Authority, the Department of Economic Development and Commerce, the Puerto Rico Ports Authority, and the City of Ponce (along with officials associated with the former Port of the Americas Authority), as well as economists in Puerto Rico who have analyzed the Jones Act in relation to Puerto Rico's economy, to understand their perspectives on these issues. We also met with representatives of nine trade associations: the Puerto Rico Shipping Association, the Puerto Rico Manufacturers Association, the Puerto Rico Chamber of Commerce, the Puerto Rico Pharmaceutical Industry Association, the Puerto Rico Products Association, the Puerto Rico Chamber of Food Marketing, Industry & Distribution, the Puerto Rico Farm Bureau, the Puerto Rico United Retailers Association, and the Gasoline Retailers Association. Because we selected various stakeholders as part of a nonprobability sample, our findings cannot be generalized to all Puerto Rico stakeholders.

We collected data and information and discussed the Puerto Rico market and the implications of changes to the Jones Act with officials from MARAD and several other federal government agencies. For example, we discussed the process for documenting Jones Act vessels with the U.S. Coast Guard; how tax laws may apply given changes to the act with the Internal Revenue Service; and information about federal antitrust actions taken in connection with an ongoing investigation, by the Department of Justice, of price fixing in the shipping market between the United States and Puerto Rico. We collected data on waterborne commerce between the United States and Puerto Rico, and between Puerto Rico and the rest of the world, from the U.S. Census Bureau. We reviewed related documentation and interviewed knowledgeable agency officials about the data, and we determined the data to be sufficiently reliable for our reporting purposes. We discussed the process for granting waivers of the Jones Act with Department of Homeland Security (DHS) and CBP officials, and we discussed administration and enforcement of the Jones Act and the implications of changes to the act with CBP officials in Puerto Rico. We interviewed officials from the Department of Defense (DOD) to understand how the Jones Act supports its strategic and mission objectives, and to understand the agency's perspectives on the implications of making changes to the Jones Act, specifically with respect to Puerto Rico and more broadly.

Undertaking an analysis to measure the economic impact of the Jones Act on Puerto Rico requires a credible estimate of the differences in freight rates between Jones Act carriers and prospective international carriers that could serve this market. We did not attempt to develop a model to provide such estimates because the necessary data on routes, carriers, vessels, shippers, cargo, and rates were not available to us. If we had been able to obtain all the necessary data, we could have conducted an analysis that would attempt to reveal whether and to what extent freight rates are higher on Jones Act routes to Puerto Rico compared with similar service in the international shipping market. We would also have been able to hold constant other key factors that influence rates, such as distance traveled, size and age of vessel, and characteristics of shippers and cargo.
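To make that analytical design concrete, the sketch below shows the kind of rate regression such data would have supported. It is a minimal illustration run on fabricated placeholder data; all variable names, values, and the resulting coefficient are hypothetical and are not estimates of any actual Jones Act rate effect.

```python
# Minimal illustration of the rate-differential analysis described
# above: regress freight rates on a Jones Act route indicator while
# holding constant other rate determinants (distance, vessel size and
# age). All data here are fabricated placeholders purely to show the
# model's structure; the coefficient produced is NOT an estimate of
# any actual Jones Act rate effect.

import numpy as np

rng = np.random.default_rng(0)
n = 500

distance = rng.uniform(800, 4000, n)     # nautical miles (placeholder)
vessel_size = rng.uniform(500, 5000, n)  # TEU capacity (placeholder)
vessel_age = rng.uniform(1, 40, n)       # years (placeholder)
jones_act = rng.integers(0, 2, n)        # 1 = Jones Act route

# Placeholder rates with an arbitrary built-in markup on Jones Act routes
rate = (1000 + 0.3 * distance - 0.05 * vessel_size + 5 * vessel_age
        + 400 * jones_act + rng.normal(0, 100, n))

# Ordinary least squares via numpy; the coefficient on `jones_act`
# would measure the rate differential, all else held constant.
X = np.column_stack([np.ones(n), distance, vessel_size, vessel_age, jones_act])
coef, *_ = np.linalg.lstsq(X, rate, rcond=None)
print(f"Jones Act rate differential (placeholder data): ${coef[4]:.0f} per container")
```

Holding distance, vessel size, and vessel age constant mirrors the controls listed above; in practice, shipper and cargo characteristics would enter the design matrix as additional columns.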
However, a further step in this analysis would require a series of assumptions about the extent to which U.S. laws would be applicable to foreign carriers providing service between the United States and Puerto Rico. These assumptions would allow us to better gauge whether foreign carriers entering this trade would have higher costs than they currently do in providing their international services. Federal stakeholders we talked with indicated that they were, in general, reluctant to speculate on the extent to which U.S. laws might be applicable to such foreign carriers in the absence of Jones Act requirements. Ultimately, even if the necessary data for these analyses were available and even if we could develop alternative scenarios about how international carriers' costs might be affected by the application of U.S. law, it would still remain uncertain how those costs would be manifested in freight rates. Finally, there are also many uncertainties about how any change in freight rates would affect the Puerto Rico economy—and in particular how they would affect product prices—under varied circumstances.

We conducted this performance audit from October 2011 through February 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

The Maritime Administration's (MARAD) mission is to promote the maintenance of an adequate, well-balanced U.S. merchant marine to ensure that the United States maintains adequate shipbuilding and repair services, efficient ports, and a pool of merchant mariners for both domestic commerce and national defense. In support of that mission, MARAD administers (1) the Federal Ship Financing Program, which guarantees private loans to commercial shipowners and shipyards for ship and shipyard building and modernization; (2) the Small Shipyards Grant Program, which funds capital and related improvements for qualified small shipyard facilities; (3) the Capital Construction Fund Program, which assists owners and operators of U.S.-flag vessels in modernizing and expanding the U.S. merchant marine through the construction, reconstruction, or acquisition of vessels; and (4) the Construction Reserve Fund, which provides financial assistance in the form of tax deferral benefits to eligible U.S.-flag operators, whereby gains attributable to the sale or loss of a vessel may be deferred as long as the proceeds are used to expand or modernize the U.S. merchant fleet.
Foreign maritime carriers operating in the United States come under the jurisdiction of the Federal Maritime Commission (FMC), which exercises regulatory oversight of foreign trade and requires common carriers involved in foreign-U.S. trade to file tariffs and service agreements. Section 7 of the Shipping Act of 1984, as amended, exempts agreements between foreign common carriers from U.S. antitrust law so long as the carriers file with FMC, and allows foreign carriers to discuss and set rates and service terms and conditions. In general, with respect to navigation and vessel inspection laws, such as the Jones Act, statutorily authorized administrative waivers may occur in the interest of national defense. More specifically, such waivers are to occur upon request of the Secretary of Defense, whereby the head of the agency responsible for the administration of the particular navigation or inspection laws at issue is required by statute to waive compliance with those laws to the extent the Secretary of Defense considers necessary in the interest of national defense. (See 46 U.S.C. § 501(a).) Waivers may also occur where the head of the agency responsible for the administration of such navigation or vessel inspection laws (i.e., DHS) considers it necessary in the interest of national defense to waive such compliance, following a determination by the Maritime Administrator on the non-availability of qualified U.S.-flag capacity to meet national defense requirements. In November 2012, for example, following the effects of Hurricane Sandy, the Secretary of Homeland Security issued a temporary waiver of the Jones Act to allow non-Jones Act oil tankers to transport oil from U.S. ports in the Gulf of Mexico to Northeastern ports to provide additional fuel resources to the region. This waiver provided, in part, that the lost production, refining, and transportation capacity had resulted in the imminent unavailability of petroleum products, including gasoline, and threatened the nation's economic and national security. In addition, Congress has at times enacted legislation exempting particular vessels from Jones Act requirements—for example, certain foreign-built liquefied gas tankers—under certain specified conditions. Such legislation has also been enacted specifically in relation to the Puerto Rico trade. The most recent legislation specific to Puerto Rico was enacted in 2006 to authorize DHS, through the Coast Guard, to issue a coastwise endorsement to allow, for example, foreign-built liquefied gas tankers built before 1996 to transport LNG or liquefied petroleum gas to Puerto Rico from other ports in the United States. Although DOD does not administer or enforce the Jones Act, the military strategy of the United States relies on the use of commercial U.S.-flag ships and crews and the availability of a shipyard industrial base to support national defense needs. MARAD and DOD jointly manage the VISA program, which was established for emergency preparedness and which includes over 300 commercial U.S.-flag vessels to provide DOD assured access to emergency sealift capacity that complements its sealift capabilities in transition to wartime operations. To be militarily useful, vessels must meet specific requirements, such as speed capability, cargo capacity, and the capability of carrying specialized equipment and supplies without significant modification. Whether or not a vessel is militarily useful, commercial U.S.-flag vessels provide employment to trained officers and unlicensed seamen, many of whom could be available to crew government-owned sealift vessels in times of war or national emergency.
Having such vessels and crews available in times of emergency is beneficial to DOD and limits its need to procure and maintain comparable vessels in the government-owned fleet of cargo vessels, which could constitute a significant additional cost to the agency. In addition to the VISA program, other programs exist to ensure sealift capability using a mix of government and commercial vessels. MARAD operates the Ready Reserve Force, consisting of a fleet of 46 government-owned cargo vessels, which is activated only upon the request of DOD and supports the transport of unit and combat support equipment during the initial military mobilization period before commercial vessels can be marshaled. MARAD also administers the Maritime Security Program, which enrolls 60 modern, militarily useful, U.S.-flag commercial ships—operating in the international trades—whose owners receive a fixed retainer payment in exchange for providing DOD with access to their vessels during times of war, national emergency, or when deemed necessary by the Secretary of Defense. DOD also relies on the U.S. shipyard industrial base to service and repair military vessels and to build new vessels to replace or expand the military fleet. Seven major shipyards currently construct the vast majority of military vessels; some of these also construct a small number of commercial vessels and, according to industry representatives, are generally capable of building larger oceangoing vessels such as those used in the Puerto Rico trade and other noncontiguous and coastwise trades. About 280 medium and small commercial U.S. shipyards are engaged in repairing government ships and producing the large majority of smaller commercial vessels, such as tugboats, barges, and service boats, engaged in Jones Act trade. Some of the larger yards are also capable of building large oceangoing vessels, according to the Shipbuilders Council of America and a shipyard we interviewed. According to DOD, these shipyards play an important role in sustaining industries that support shipbuilding. Overall, the number of oceangoing commercial vessels produced in the United States is low in comparison to the production from foreign shipyards, which typically specialize in building certain types of large containerships, tankers, LNG carriers, or bulk carriers. Most large, commercial cargo vessels that supply the world shipping industry are built in China, Japan, and the Republic of Korea, as discussed earlier. In an effort to address these declines, the U.S. Navy partnered with MARAD in November 2011, through a memorandum of agreement, to support the objectives of the American Marine Highway Program, particularly the development, design, construction, and operation of U.S.-built and U.S.-crewed dual-use vessels that can serve in peacetime in the Jones Act trade and also provide sealift capability for DOD in times of national emergency. The purpose of the American Marine Highway Program is to expand the use of the inland and coastal waterways for transporting cargo to reduce congestion in other transportation modes, thus expanding the domestic waterborne-transportation markets that would be served by Jones Act vessels. The program is expected to help generate commercial work for U.S. shipyards and jobs for U.S. mariners.
In support of the American Marine Highway Program, the National Defense Authorization Act for Fiscal Year 2010 required the establishment and implementation of the Marine Highway Grants program, and $7 million in funds was congressionally directed to the new grants program in committee reports accompanying the Consolidated Appropriations Act, 2010. Grants under the Marine Highway Grants program could extend to the purchase or lease of equipment used at port terminals and facilities and to the construction or modification of vessels to increase energy efficiency and meet environmental standards. According to the Navy, the American Marine Highway Program and the dual-use vessel concept are likely to be the most cost-effective means of addressing future recapitalization of the government-owned and commercial vessels on which it relies. Many of the vessels in the Ready Reserve Force are nearing the end of their practical service life and must be replaced by newer ships. The estimated cost of recapitalizing the entire Ready Reserve Force is in the billions of dollars. In addition to the contact named above, the following individuals made important contributions to this report: Andrew Von Ah, Assistant Director; Amy Abramowitz; Ken Bombara; Stephen L. Caldwell; Vashun Cole; Laura Erion; Emil Friberg; Geoffrey Hamilton; Sarah Jones; Hannah Laufe; Thanh Lu; Joshua Ormond; Amy Rosewarne; and Shana Wallace.
Puerto Rico is subject to Section 27 of the Merchant Marine Act of 1920, known as the "Jones Act" (Act), which requires that maritime transport of cargo between points in the United States be carried by vessels that are (1) owned by U.S. citizens and registered in the United States, (2) built in the United States, and (3) operated with predominantly U.S.-citizen crews. The general purposes of the Jones Act include providing the nation with a strong merchant marine that can provide transportation for the nation's maritime commerce, serve in time of war or national emergency, and support an adequate shipyard industrial base. Companies (shippers) that use Jones Act carriers for shipping in the Puerto Rico trade have expressed concerns that, as a result of the Jones Act, freight rates between the United States and Puerto Rico are higher than they otherwise would be and, given the island's reliance on waterborne transportation, have an adverse economic impact on Puerto Rico. This report examines (1) maritime transportation to and from Puerto Rico and how the Jones Act affects that trade and (2) possible effects of modifying the application of the Jones Act in Puerto Rico. GAO collected and analyzed information and literature relevant to the market and gathered the views of numerous public and private sector stakeholders through interviews and written responses. GAO is not making recommendations in this report. The Department of Transportation (DOT) generally agreed with the report, but emphasized that many of the issues related to the Jones Act are complex and multifaceted. DOT and others also provided technical clarifications, which GAO incorporated as appropriate. Jones Act requirements have resulted in a discrete shipping market between Puerto Rico and the United States. Most of the cargo shipped between the United States and Puerto Rico is carried by four Jones Act carriers that provide dedicated, scheduled weekly service using containerships and container barges. Although some vessels are operating beyond their expected useful service life, many have been reconstructed or refurbished. Jones Act dry and liquid bulk-cargo vessels also operate in the market, although some shippers report that qualified bulk-cargo vessels may not always be available to meet their needs. Cargo moving between Puerto Rico and foreign destinations is carried by numerous foreign-flag vessels, often with greater capacity, and typically as part of longer global trade routes. Freight rates are determined by a number of factors, including the supply of vessels and consumer demand in the market, as well as the costs carriers face to operate, some of which (e.g., crew costs) are affected by Jones Act requirements. The average freight rates of the four major Jones Act carriers in this market were lower in 2010 than they were in 2006, the onset of the recent recession in Puerto Rico that has contributed to decreases in demand. Foreign-flag carriers serving Puerto Rico from foreign ports operate under different rules, regulations, and supply and demand conditions and generally have lower costs to operate than Jones Act carriers have. Shippers doing business in Puerto Rico that GAO contacted reported that freight rates are often—although not always—lower for foreign carriers going to and from Puerto Rico and foreign locations than the rates shippers pay to ship similar cargo to and from the United States, despite longer distances.
However, data were not available to allow GAO to validate the examples given or verify the extent to which this difference occurred. According to these shippers, lower rates, as well as the limited availability of qualified vessels in some cases, can lead companies to source products from foreign countries rather than the United States. The effects of modifying the application of the Jones Act for Puerto Rico are highly uncertain, and various trade-offs could materialize depending on how the Act is modified. Under a full exemption from the Act, the rules and requirements that would apply to all carriers would need to be determined. While proponents of this change expect increased competition and greater availability of vessels to suit shippers' needs, it is also possible that the reliability and other beneficial aspects of the current service could be affected. Furthermore, because of cost advantages, unrestricted competition from foreign-flag vessels could result in the disappearance of most U.S.-flag vessels in this trade, having a negative impact on the U.S. merchant marine and the shipyard industrial base that the Act was meant to protect. Instead of a full exemption, some stakeholders advocate an exemption from the U.S.-build requirement for vessels. According to proponents of this change, the availability of lower-cost, foreign-built vessels could encourage existing carriers to recapitalize their aging fleets (although one existing carrier has recently ordered two new U.S.-built vessels for this trade) and could encourage new carriers to enter the market. However, as with a full exemption, this partial exemption could also reduce or eliminate existing and future shipbuilding orders for vessels to be used in the Puerto Rico trade, having a negative impact on the shipyard industrial base the Act was meant to support.
Nuclear waste is long-lived and very hazardous—without protective shielding, the intense radioactivity of the waste can kill a person within minutes or cause cancer months or even decades after exposure. Thus, careful management is required to isolate it from humans and the environment. To accomplish this, the National Academy of Sciences first endorsed the concept of nuclear waste disposal in deep geologic formations in a 1957 report to the U.S. Atomic Energy Commission, an approach that experts have since described as the safest and most secure method of permanent disposal. However, progress toward developing a geologic repository was slow until NWPA was enacted in 1983. Citing the potential risks of the accumulating amounts of nuclear waste, NWPA required the federal government to take responsibility for the disposition of nuclear waste and required DOE to develop a permanent geologic repository to protect public health and safety and the environment for current and future generations. Specifically, the act required DOE to study several locations around the country for possible repository sites and to develop a contractual relationship with industry for disposal of the nuclear waste. The Congress amended NWPA in 1987 to restrict scientific study and characterization of a possible repository to only Yucca Mountain. (Fig. 2 shows the north crest of Yucca Mountain and a cut-out of the proposed mined repository.) After the Congress approved Yucca Mountain as a suitable site for the development of a permanent nuclear waste repository in 2002, DOE began preparing a license application for submittal to NRC, which has regulatory authority over commercial nuclear waste management facilities. DOE submitted its license application to NRC in June 2008, and NRC accepted the license application for review in September 2008. NWPA requires NRC to complete its review of DOE's license application for the Yucca Mountain repository in 3 years, although a fourth year is allowed if NRC deems it necessary and complies with certain reporting requirements. To pay the nuclear power industry's share of the cost for the Yucca Mountain repository, NWPA established the Nuclear Waste Fund, which is funded by a fee of one mill (one-tenth of a cent) per kilowatt-hour of nuclear-generated electricity that the federal government collects from electric power companies. DOE reported that, at the end of fiscal year 2008, the Nuclear Waste Fund contained $22 billion, with an additional $1.9 billion projected to be added in 2009. DOE receives money from the Nuclear Waste Fund through congressional appropriations. Additional funding for the repository comes from an appropriation that provides for the disposal cost of DOE-managed spent nuclear fuel and high-level waste. NWPA caps the nuclear waste that can be disposed of at the Yucca Mountain repository at 70,000 metric tons until a second repository is available. However, the nation has already accumulated about 70,000 metric tons of nuclear waste at current reactor sites and DOE facilities. Without a change in the law to raise the cap or to allow the construction of a second repository, DOE can dispose of only the current nuclear waste inventory. The nation will have to develop a strategy for an additional 83,000 metric tons of waste expected to be generated if NRC issues 20-year license extensions to all of the currently operating nuclear reactors.
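To put the one-mill Nuclear Waste Fund fee described above in perspective, consider an illustrative reactor; the generation figures here are assumptions for the example, not data from this report. A 1,000-megawatt reactor operating at a 90 percent capacity factor generates roughly

$$1{,}000\ \text{MW} \times 0.90 \times 8{,}760\ \tfrac{\text{h}}{\text{yr}} \approx 7.9 \times 10^{9}\ \text{kWh per year},$$

so its annual payment into the fund is about

$$7.9 \times 10^{9}\ \text{kWh} \times \$0.001/\text{kWh} \approx \$7.9\ \text{million per year}.$$

Fees on this order across a fleet of roughly 100 reactors, accumulated with interest since 1983, are consistent with the multibillion-dollar fund balance reported above.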
This amount does not include any nuclear waste generated by new reactors or future defense activities, or greater-than-class-C nuclear waste. According to DOE and industry studies, three to four times the 70,000 metric tons—and possibly more—could potentially be disposed of safely in Yucca Mountain, which could address current and some future waste inventories, potentially delaying the need for a second repository for several generations. Nuclear waste has continued to accumulate at the nation's commercial and DOE nuclear facilities over the past 60 years. Facility managers must actively manage the nuclear waste by continually isolating, confining, and monitoring it to keep humans and the environment safe. Most spent nuclear fuel is stored at reactor sites, immersed in pools of water designed to cool and isolate it from the environment. With nowhere to dispose of the spent nuclear fuel, the racks holding spent fuel in the pools have been rearranged to allow for more dense storage of assemblies. Even with this re-racking, spent nuclear fuel pools are reaching their capacities. Some critics have expressed concern about the remote possibility of an overcrowded spent nuclear fuel pool releasing large amounts of radiation if an accident or other event caused the pool to lose water, potentially leading to a fire that could disperse radioactive material. As reactor operators have run out of space in their spent nuclear fuel pools, they have turned in increasing numbers to dry cask storage systems that generally consist of stainless steel canisters placed inside larger stainless steel or concrete casks. (See fig. 3.) NRC requires protective shielding, routine inspections and monitoring, and security systems to isolate the nuclear waste to protect humans and the environment. NRC has determined that these dry cask storage systems can safely store nuclear waste, but NRC considers them to be interim measures. In 1990, NRC issued a revised waste confidence rule, stating that it had determined that the waste generated by a reactor can be safely stored in either wet or dry storage for 30 years beyond a reactor's life, including license extensions. NRC further determined that it had reasonable assurance that safe geologic disposal was feasible and that a geologic repository would be operational by about 2025. More recently, NRC published a notice of proposed rulemaking to revise that rule, proposing that waste generated by a reactor can be safely stored for 60 years beyond the life of a reactor and that geologic disposal would be available 50 to 60 years beyond a reactor's life. NRC is currently considering whether to republish its proposed rule to seek additional public input on certain issues. As of June 2009, 45 reactor sites or former reactor sites in 30 states had dry storage facilities for their spent nuclear fuel, and the number of reactor sites storing spent nuclear fuel is likely to continue to grow until an alternative is implemented. Implementing a permanent, safe, and secure disposal solution for the nuclear waste is of concern to the nation, particularly state governments and local communities, because many of the 80 sites where nuclear waste is currently stored are near large populations or major water sources or consist of shutdown reactor sites that tie up land that could be used for other purposes. In addition, states that have DOE facilities with nuclear waste storage are concerned because of possible contamination of aquifers, rivers, and other natural resources.
DOE's Hanford Reservation, located near Richland, Washington, was a major component of the nation's nuclear weapons defense program from 1943 until 1989, when operations ceased. In the settlement of a lawsuit filed by the state of Washington in 2003, DOE agreed not to ship certain nuclear waste to Hanford until environmental reviews were complete. In August 2009, the U.S. government stated that the preferred alternative in DOE's environmental review would include limitations on certain nuclear waste shipments to Hanford until the process of immobilizing tank waste in glass begins, expected in 2019. Moreover, some commercial and DOE sites where the nuclear waste is stored may not be able to accommodate much additional waste safely because of limited storage space or community objections. These sites will require a more immediate solution. The nation has considered proposals to build centralized storage facilities where waste from reactor sites could be consolidated. The 1987 amendment to NWPA established the Office of the Nuclear Waste Negotiator to try to broker an agreement for a community to host a repository or interim storage facility. Two negotiators worked with local communities and Native American tribes for several years, but neither was able to conclude a proposed agreement with a willing community by January 1995, when the office's authority expired. Subsequently, in 2006, after a 9-year licensing process, a consortium of electric power companies called Private Fuel Storage obtained an NRC license for a private centralized storage facility on the reservation of the Skull Valley Band of the Goshute Indians in Utah. NRC's 20-year license—with an option for an additional 20 years—allows storage of up to 40,000 metric tons of commercial spent nuclear fuel. However, construction of the Private Fuel Storage facility has been delayed by Department of the Interior decisions not to approve the lease of tribal lands to Private Fuel Storage and not to issue the necessary rights-of-way to transport nuclear waste to the facility through Bureau of Land Management land. Private Fuel Storage and the Skull Valley Band of Goshutes filed a federal lawsuit in 2007 to overturn Interior's decisions. Reprocessing nuclear waste could potentially reduce, but not eliminate, the amount of waste for disposal. In reprocessing, usable uranium and plutonium are recovered from spent nuclear fuel and used to make new fuel rods. However, current reprocessing technologies separate weapons-usable plutonium and other fissionable materials from the spent nuclear fuel, raising concerns about nuclear proliferation by terrorists or enemy states. Although the United States pioneered the reprocessing technologies used by other countries, such as France and Russia, Presidents Gerald Ford and Jimmy Carter ended government support for commercial reprocessing in the United States in 1976 and 1977, respectively, primarily due to proliferation concerns. Although President Ronald Reagan lifted the ban on government support in 1981, the nation has not embarked on any reprocessing program due to proliferation and cost concerns—the Congressional Budget Office recently reported that current reprocessing technologies are more expensive than direct disposal of the waste in a geologic repository. DOE's Fuel Cycle Research and Development program is currently performing research on reprocessing technologies that would not separate out weapons-usable plutonium, but it is not certain whether these technologies will become cost-effective.
The general consensus of the international scientific community is that geologic disposal is the preferred long-term nuclear waste management alternative. Finland, Sweden, Canada, France, and Switzerland have decided to construct geologic disposal facilities, but none has yet completed such a facility, although DOE reports that Finland and Sweden have announced plans to begin emplacement operations in 2020 and 2023, respectively. Moreover, some countries employ a mix of complementary storage alternatives in their national waste management strategies, including on-site storage, consolidated interim storage, reprocessing, and geologic disposal. For example, Sweden plans to rely on on-site storage until the waste cools enough to move it to a centralized storage facility, where the waste will continue to cool and decay for an additional 30 years. This waste will then be placed in a geologic repository for disposal. France reprocesses the spent nuclear fuel, recycling usable portions as new fuel and storing the remainder for eventual disposal. The Yucca Mountain repository—mandated by NWPA, as amended—would provide a permanent nuclear waste management solution for the nation's current inventory of about 70,000 metric tons of waste. According to DOE and industry studies, the repository could potentially be a disposal site for three to four times that amount of waste. However, the repository lacks the support of the administration and the state of Nevada, and it faces regulatory and other challenges. Our analysis of DOE's cost projections found that the Yucca Mountain repository would cost from $41 billion to $67 billion (in 2009 present value) for disposing of 153,000 metric tons of nuclear waste. Most of these costs are up-front capital costs. However, once the Yucca Mountain repository is closed—in 2151 for our 153,000-metric-ton model—it is not expected to incur any significant additional costs, according to DOE. The Yucca Mountain repository is designed to isolate nuclear waste in a safe and secure environment long enough for the waste to decay into a form that is less harmful to humans and the environment. As nuclear waste ages, it cools and decays, becoming less radiologically dangerous. In October 2008, after years of legal challenges, the Environmental Protection Agency (EPA) promulgated standards that require DOE to ensure that radioactive releases from the nuclear waste disposed of at Yucca Mountain do not harm the public for 1 million years. This is because some waste components, such as plutonium-239, take hundreds of thousands of years to decay into less harmful materials. To meet EPA's standards and keep the waste safely isolated, DOE's license application proposes the use of both natural and engineered barriers. Key natural barriers of Yucca Mountain include its dry climate; the depth and isolation of the Death Valley aquifer, in which the mountain resides; the mountain's natural physical shape; and the thick layers of rock above and below the repository, which would lie 1,000 feet below the surface of the mountain and 1,000 feet above the water table. Key engineered barriers include the solid nature of the nuclear waste; the double-shelled transportation, aging, and disposal canisters that encapsulate the waste and prevent radiation leakage; and drip shields composed of corrosion-resistant titanium to ward off any dripping water inside the repository for many thousands of years.
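The 1-million-year standard discussed above follows from the standard radioactive decay relation. Using the half-life of plutonium-239—about 24,100 years, a well-established physical constant rather than a figure from this report—the fraction of the original material remaining after time $t$ is

$$N(t) = N_0 \left(\tfrac{1}{2}\right)^{t / t_{1/2}}, \qquad t_{1/2} \approx 24{,}100\ \text{years}.$$

After ten half-lives, roughly 241,000 years, less than 0.1 percent of the plutonium-239 remains, since $(1/2)^{10} \approx 0.001$—which illustrates why the waste stays hazardous for hundreds of thousands of years and why EPA's compliance standard extends to 1 million years.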
The construction of a geologic repository at Yucca Mountain would provide a permanent solution for nuclear waste that could allow the government to begin taking possession of the nuclear waste in the near term—about 10 to 30 years. The nuclear power industry sees this as an important consideration in obtaining the public support necessary to build new nuclear power reactors. The industry is interested in constructing new nuclear power reactors because of, among other reasons, the growing demand for electricity and pressure from federal and state governments to reduce reliance on fossil fuels and curtail carbon emissions. Some electric power companies see nuclear energy as an important option for non-carbon-emitting power generation. According to NRC, 18 electric power companies have filed license applications to construct 29 new nuclear reactors. Nuclear industry representatives, however, have expressed concerns that investors and the public will not support the construction of new nuclear power reactors without a final safe and secure disposition pathway for the nuclear waste, particularly if that waste is generated and stored near major waterways or urban centers. Moreover, having a permanent disposal option may allow reactor operators to thin out spent nuclear fuel assemblies from densely packed spent fuel pools, potentially reducing the risk of harm to humans or the environment in the event of an accident, natural disaster, or terrorist event. In addition, disposal is the only alternative for some DOE and commercial nuclear waste—even if the United States decided to reprocess the waste—because it contains nuclear waste residues that cannot be used as nuclear reactor fuel. This nuclear waste has no safe, long-term alternative other than disposal, and the Yucca Mountain repository would provide a near-term, permanent disposal pathway for it. Moreover, DOE has agreed to remove spent nuclear fuel from at least two states by certain dates or face penalties. Specifically, DOE has an agreement with Colorado stating that if the spent nuclear fuel at Fort St. Vrain is not removed by January 1, 2035, the government will, subject to certain conditions, pay the state $15,000 per day until the waste is removed. In addition, the state of Idaho sued DOE to remove inventories of spent nuclear fuel stored at DOE's Idaho National Laboratory. Under the resulting settlement, DOE agreed to (1) remove the spent nuclear fuel by January 1, 2035, or incur penalties of $60,000 per day and (2) curtail or suspend future shipments of spent nuclear fuel to Idaho. Some of the spent nuclear fuel stored at the Idaho National Laboratory comes from refueling the U.S. Navy's submarines and aircraft carriers, all of which are nuclear powered. Special facilities are maintained at the Idaho National Laboratory to examine naval spent nuclear fuel to obtain information for improving future fuel performance and to package the spent nuclear fuel following examination to make it ready for rail shipment to its ultimate destination. According to Navy officials, refueling these warships, which necessitates shipment of naval spent nuclear fuel from the shipyards conducting the refuelings to the Idaho National Laboratory, is part of the Navy's national security mission. Consequently, curtailing or suspending shipments of spent nuclear fuel to Idaho raises national security concerns for the Navy.
The Yucca Mountain repository would help the government fulfill its obligation under NWPA to electric power companies and ratepayers to take custody of the commercial spent nuclear fuel and provide a permanent repository using the Nuclear Waste Fund. When DOE missed its 1998 deadline to begin taking custody of the waste, owners of spent fuel with contracts for disposal services filed lawsuits asking the courts to require DOE to fulfill its statutory and contractual obligations by taking custody of the waste. Though a court decided that it would not order DOE to begin taking custody of the waste, the courts have, in subsequent cases, ordered the government to compensate the utilities for the cost of storing the waste. DOE projected that, based on a 2020 date for beginning operations at Yucca Mountain, the government’s liabilities from the 71 lawsuits filed by electric power companies could sum to about $12.3 billion, though the outcome of pending and future litigation could substantially affect the ultimate total liability. DOE estimates that the federal government’s future liabilities will average up to $500 million per year. Furthermore, continued delays in DOE’s ability to take custody of the waste could result in additional liabilities. Some experts noted that without immediate plans for a permanent repository, reactor operators and ratepayers may demand that the Nuclear Waste Fund be refunded. Finally, disposing of the nuclear waste now in a repository facility would reduce the uncertainty about the willingness or the ability of future generations to monitor and maintain multiple surface waste storage facilities and would eliminate the need for any future handling of the waste. As a 2001 report of the National Academies noted, continued storage of nuclear waste is technically feasible only if those responsible for it are willing and able to devote adequate resources and attention to maintaining and expanding the storage facilities, as required to keep the waste safe and secure. DOE officials noted that the waste packages at Yucca Mountain are designed to be retrievable for more than 100 years after emplacement, at which time DOE would begin to close the repository, allowing future generations to consider retrieving spent nuclear fuel for reprocessing or other uses. However, the risks and costs of retrieving the nuclear waste from Yucca Mountain are uncertain because planning efforts for retrieval are preliminary. Once closed, Yucca Mountain will require minimal monitoring and little or no maintenance, and all future controls will be passive. Some experts stated that the current generation has a moral obligation to not pass on to future generations the extensive technical and financial responsibilities for managing nuclear waste in surface storage. There are many challenges to licensing and constructing the Yucca Mountain repository, some of which could delay or potentially terminate the program. First, in March 2009, the Secretary of Energy stated that the administration planned to terminate the Yucca Mountain repository and to form a panel of experts to review alternatives. During the testimony, the Secretary stated that Yucca Mountain would not be considered as one of the alternatives. The administration’s fiscal year 2010 budget request for Yucca Mountain was $197 million, which is $296 million less than what DOE stated it needs to stay on its schedule and open Yucca Mountain by 2020. 
In July 2009 letters to DOE, the Nuclear Energy Institute and the National Association of Regulatory Utility Commissioners raised concerns that, despite the announced termination of Yucca Mountain, DOE still intended to collect fees for the Nuclear Waste Fund. The letters requested that DOE suspend collection of payments to the Nuclear Waste Fund. Some states have raised similar concerns, and legislators have introduced legislation that could withhold payments to the Nuclear Waste Fund until DOE begins operating a federal repository. Nevertheless, NWPA still requires DOE to pursue geologic disposal at Yucca Mountain. If the administration continues the licensing process for Yucca Mountain, DOE would face a variety of other challenges in licensing and constructing the repository. Many of these challenges—though unique to Yucca Mountain—might also apply in similar form to other future repositories, should they be considered. One of the most significant challenges facing DOE is to satisfy NRC that Yucca Mountain meets licensing requirements, including ensuring that the repository meets EPA's radiation standards over the required 1-million-year time frame, as implemented by NRC regulation. For example, NRC's regulations require that DOE model its natural and engineered barriers in a performance assessment, including how the barriers will interact with each other over time and how the repository will meet the standards even if one or more barriers do not perform as expected. NRC has stated that there are uncertainties inherent in the understanding of the performance of the natural and engineered barriers and that demonstrating a reasonable expectation of compliance requires the use of complex predictive models supported by field data, laboratory tests, site-specific monitoring, and natural analog studies. The Nuclear Waste Technical Review Board has also stated that the performance assessment may be "the most complex and ambitious probabilistic risk assessment ever undertaken," and the Board, as well as other groups and individuals, has raised technical concerns about key aspects of the engineered and natural barriers in the repository design. DOE and NRC officials also stated that budget constraints raise additional challenges. DOE officials told us that past budget shortfalls and projected future low budgets for the Yucca Mountain repository create significant challenges to DOE's ability to meet milestones for licensing and for responding to NRC's requests for additional information related to the license application. In addition, NRC officials told us budget shortfalls have constrained their resources. Staff members originally hired to review DOE's license application have moved to other divisions within NRC or have left NRC entirely. NRC officials stated that the pace of the license review is commensurate with funding levels. Some experts have questioned whether NRC can meet the maximum 4-year time requirement stipulated in NWPA for license review and have pointed out that the longer the delays in licensing Yucca Mountain, the more costly and politically vulnerable the effort becomes. In addition, the state of Nevada and other groups that oppose the Yucca Mountain repository have raised technical points, site-specific concerns, and equity issues and have taken steps to delay or terminate the repository.
For example, Nevada's Agency for Nuclear Projects questioned DOE's reliance on engineered barriers in its performance assessment, indicating that too many uncertainties exist for DOE to claim that human-made systems will perform as expected over the time frames required. In addition, the agency reported that Yucca Mountain's location near seismic and volcanic zones creates additional uncertainty about DOE's ability to predict a recurrence of seismic or volcanic events and to assess the performance of its waste isolation barriers should those events occur some time during the 1-million-year time frame. The agency also has questioned whether Yucca Mountain is the best site compared with other locations and has raised issues of equity, since Nevada is being asked to accept nuclear waste generated in other states. Beyond the Agency for Nuclear Projects' issues, Nevada has taken other steps to delay or terminate the project. For example, Nevada has denied the water rights DOE needs for construction of a rail spur and facility structures at Yucca Mountain. DOE officials told us that constructing the rail line or the facilities at Yucca Mountain without those water rights will be difficult. Our analysis of DOE's cost estimates found that (1) a 70,000-metric-ton repository is projected to cost from $27 billion to $39 billion in 2009 present value over 108 years and (2) a 153,000-metric-ton repository is projected to cost from $41 billion to $67 billion and take 35 more years to complete. These estimated costs include the licensing, construction, operation, and closure of Yucca Mountain for a period commensurate with the amount of waste. Table 1 shows each scenario with its estimated cost range over time. As shown in figure 4, the Yucca Mountain repository costs are expected to be high during construction, followed by reduced but consistent costs during operations, substantially reduced costs for monitoring, then a period of increased costs for installation of the drip shields, and finally costs tapering off for closure. Once the drip shields are installed, by design, the waste packages will no longer be retrievable. After closure, Yucca Mountain is not expected to incur any significant additional costs. Costs for the construction of a repository, regardless of location, could increase under a number of different scenarios, including delays in license application approval, funding shortfalls, and legal or technical issues that cause delays or changes in plans. For example, we asked DOE to assess the cost of a year's delay in license application approval, from the current 3 years to 4 years, the maximum allowed by NWPA. DOE officials told us that each year of delay would cost DOE about $373 million in constant 2009 dollars. Although the experts with whom we consulted did not agree on how long the licensing process for Yucca Mountain might take, several experts told us that the 9 years it took Private Fuel Storage to obtain its license was not unreasonable. This licensing time frame may not directly apply to the Yucca Mountain repository because the repository has a significantly different licensing process and regulatory scheme, including extensive pre-licensing interactions, a federal funding stream, and an extended compliance period; because of these uncertainties, licensing could take less or more time than the Private Fuel Storage experience.
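As a rough cross-check—assuming each year of review beyond the 3-year statutory period carries DOE's estimated $373 million annual cost, an assumption of this illustration rather than a statement in DOE's estimate—a 9-year licensing process would add roughly

$$(9 - 3)\ \text{years} \times \$373\ \text{million per year} \approx \$2.2\ \text{billion (constant 2009 dollars)},$$

which is consistent with the estimate discussed next.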
A 9-year licensing process for construction authorization would add an estimated $2.2 billion to the cost of the repository, mostly in costs to maintain current systems, such as project support, safeguards and security, and the licensing support network. In addition to consideration of the issuance of a construction authorization, NRC's repository licensing process involves two additional licensing actions necessary to operate and close a repository, each of which allows for public input and could potentially adversely affect the schedule and cost of the repository. The second action is the consideration of an updated DOE application for a license to receive and possess high-level radioactive waste. The third action is the consideration of a DOE application for a license amendment to permanently close the repository. Costs could also increase if unforeseen technical issues developed. For example, some experts told us that the robotic emplacement of waste packages could be difficult because of the heat and radiation output from the nuclear waste, which could affect the electronics on the machinery. DOE officials acknowledged the challenges and told us the machines would have to be shielded for protection. They noted, however, that industry has experience with remote handling of shielded robotic machinery and that DOE should be able to use that experience in developing its own machinery. Funding for Yucca Mountain's costs would come from the Nuclear Waste Fund and from taxpayers through annual appropriations. NWPA created the Nuclear Waste Fund as a mechanism for the nuclear power industry to pay its share of the cost of building and operating a permanent repository to dispose of nuclear waste. NWPA also required federal taxpayers to pay the portion of permanent repository costs attributable to DOE-managed spent nuclear fuel and high-level waste. DOE has responsibility for determining on an annual basis whether the fees charged to industry to finance the Nuclear Waste Fund are sufficient to meet industry's share of costs. As part of that process, DOE developed a methodology in 1989 that uses the total system life cycle cost estimate as input for determining the shares of industry and the federal government by matching projected costs against projected assets. The most recent assessment, published in July 2008, showed that 80.4 percent of the disposal costs would come from the Nuclear Waste Fund and 19.6 percent would come from appropriations for the DOE-managed spent nuclear fuel and high-level waste. In addition, the Department of the Treasury's judgment fund will pay the government's liabilities for not taking custody of the nuclear waste in 1998, as required by DOE's contracts with industry. Based on existing judgments and settlements, DOE has estimated these costs at $12.3 billion through 2020 and up to $500 million per year after that, though the outcome of pending litigation could substantially affect the government's ultimate liability. The Department of Justice has also spent about $150 million to defend DOE in the litigation. We used input from experts to identify two nuclear waste management alternatives that could be implemented if the nation does not pursue disposal at Yucca Mountain—centralized storage and continued on-site storage—both of which could be implemented with final disposal, according to experts.
To understand the implications and likely assumptions of each alternative, as well as the associated costs for the component parts, we systematically solicited facts, advice, and opinions from experts in nuclear waste management. Finally, we used the data and assumptions that the experts provided to develop large-scale cost models that estimate ranges of likely total costs for each alternative. To identify waste management alternatives that could be implemented if the waste is not disposed of at Yucca Mountain, we solicited facts, advice, and opinions from nuclear waste management experts. Specifically, we interviewed dozens of experts from DOE, NRC, the Nuclear Energy Institute, the National Association of Regulatory Utility Commissioners, the National Conference of State Legislatures, and the State of Nevada Agency for Nuclear Projects. We also reviewed documents they provided or referred us to. Based on this information, we chose to analyze (1) centralized interim dry storage and (2) on-site dry storage (both interim and long-term). Centralized storage has been attempted to varying degrees in the United States, and on-site storage has become the country's status quo. Consequently, the experts believe these two alternatives are currently among the most likely for this country in the near term, in conjunction with final disposal in the long term. The experts also told us that current nuclear waste reprocessing technologies raise proliferation concerns and are not considered commercially feasible, but they noted that reprocessing has future potential as a part of the nation's nuclear waste management strategy. Because nuclear waste is not reprocessed in this country, we found a lack of sufficient and reliable data to provide meaningful analysis for this alternative. Experts have largely dismissed other alternatives that have been identified, such as disposal of waste in deep boreholes, because of cost or technical constraints. We developed a set of key assumptions to establish the scope of our alternatives by initially consulting with a small group of nuclear waste management experts. For example, we asked the experts how many storage sites should be used and whether waste would have to be repackaged. These discussions occurred in an iterative manner—we followed up with experts with specific expertise to refine our assumptions as we learned more. Based on this input, we formulated several key assumptions and defined the alternatives in a generic manner by taking into account some, but not all, of the complexities involved with nuclear waste management. (See table 2.) We made this choice because experts advised us that trying to consider all of the variability among reactor sites would result in unmanageable models, since each location where nuclear waste is currently stored has a unique set of environmental, management, and regulatory considerations that affect the logistics and costs of waste management. For example, reactor sites use different dry cask storage systems with varying costs that require different operating logistics to load the casks. In addition, there were some instances in which we made assumptions that, while not entirely realistic, were necessary to keep our alternatives generic and distinct from one another.
For example, some electric power companies would likely consolidate nuclear waste from different locations by transporting it between reactor sites, but to keep the on-site storage alternative generic and distinct from the centralized storage alternative, we assumed that there would be no consolidation of waste. These simplifying assumptions make our alternatives hypothetical and not entirely representative of their real-world implementation. We also consulted with experts to formulate more specific assumptions about processes that reflect the sequence of activities that would occur within each alternative. (See fig. 5.) In addition, we identified the components of these processes that have associated costs. For example, one of the processes associated with both alternatives is packaging the nuclear waste in dry storage canisters from the pools of water where it is stored. The component costs associated with this process include the dry storage canisters and the operations to load the spent nuclear fuel into the canisters. To generate cost ranges for the centralized storage and on-site storage alternatives, we developed four large-scale cost models that analyzed the costs for each alternative of storing 70,000 metric tons and 153,000 metric tons of nuclear waste, and we created scenarios within these models to analyze different storage durations and final dispositions. (See table 3.) We generated cost ranges for each alternative for storing 153,000 metric tons of waste for 100 years followed by disposal in a geologic repository. We also generated cost ranges for each alternative for storing 70,000 metric tons and 153,000 metric tons of nuclear waste for 100 years, and for storing 153,000 metric tons of waste on site for 500 years, without including the cost of subsequent disposal in a geologic repository. For each of the models, which rely upon data and assumptions provided by nuclear waste management experts, the cost range was based on the annual volume of commercial spent nuclear fuel that became ready to be packaged and stored each year. In general, each model started in 2009 by annually tracking costs of initial packaging and related costs for the first 100 years, and for every 100 years thereafter if the waste was to remain on site and be repackaged. Because our models analyzed only the costs associated with storing commercial nuclear waste, we augmented them with DOE's cost data for (1) managing its spent nuclear fuel and high-level waste and (2) constructing and operating a permanent repository. Specifically, we used DOE's estimated costs for the Yucca Mountain repository to represent the cost of a hypothetical permanent repository. One of the inherent difficulties of analyzing the cost of any nuclear waste management alternative is the large number of uncertainties that need to be addressed. In addition to general uncertainty about the future, there is uncertainty because of the lack of knowledge about the waste management technologies required, the type of waste and waste management systems that individual reactors will eventually employ, and cost components that are key inputs to the models and could occur over hundreds or thousands of years. Given these numerous uncertainties, it is not possible to precisely determine the total costs of each alternative. However, much of the uncertainty that we could not easily capture within our models can be addressed through the use of several alternative models and scenarios.
As shown in table 3, we developed two models for each alternative to address the uncertainty regarding the total volume of waste for disposal. We then developed different scenarios within each model to address different time frames and disposal paths. Furthermore, we used a risk analysis modeling technique that recognized and addressed uncertainties in our data and assumptions. Given the different possible scenarios and uncertainties, we generated ranges, rather than point estimates, for analyzing the cost of each alternative. One of the most important uncertainties in our analysis was uncertainty over component costs. To address this, we used a commercially available risk analysis software program that enabled us to model specific uncertainties associated with a large number of cost inputs and assumptions. Using a Monte Carlo simulation process, the program explores a wide range of values, instead of one single value, for each cost input and estimates the total cost. By repeating the calculations thousands of times with a different set of randomly chosen input values, the process produces a range of total costs for each alternative and scenario. The process also specifies the likelihood associated with values in the estimated range. Another inherent difficulty in estimating the cost of nuclear waste management alternatives is the fact that the costs are spread over hundreds or thousands of years. The economic concept of discounting is central to such long-term analysis because it allows us to convert costs that occur in the distant future to present value—equivalent values in today's dollars. Although discounting is an accepted and standard methodology in economics, discounting values over a very distant future—known as "intergenerational discounting"—is still subject to considerable debate. Furthermore, no consensus exists among economists regarding the exact value of the discount rate that should be used to discount values that are spread over many hundreds or thousands of years. To develop an appropriate discounting methodology and to choose the discount rates for our analysis, we reviewed a number of economic studies published in peer-reviewed journals that addressed intergenerational discounting. Based on our review, we designed a discounting methodology for use in our models. Because our review did not find a consensus on discount rates, we used a range of values for discount rates that we developed based on the economic studies we reviewed, rather than using one single rate. Consequently, because we used ranges for the discount rate along with the Monte Carlo simulation process, the present value of estimated costs does not depend on one single discount rate, but rather reflects a range of discount rate values taken from peer-reviewed studies. (See app. IV for details of our modeling and discounting methodologies, assumptions, and results; a simplified illustration of this simulation approach appears below.) Centralized storage would provide a near-term alternative for managing nuclear waste, allowing the government to begin taking possession of the waste within approximately the next 30 years and giving the nation additional time to consider long-term waste management options. However, centralized storage does not preclude the need for final disposal of the waste.
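To illustrate the simulation approach described above, the following minimal sketch, in Python, draws each uncertain input from a range and repeats the present-value calculation thousands of times; the distributions, dollar figures, and 100-year horizon are invented for illustration and do not reproduce our models or data.

    import numpy as np

    rng = np.random.default_rng(0)
    n_trials = 10_000
    years = np.arange(100)  # illustrative 100-year storage horizon

    totals = np.empty(n_trials)
    for i in range(n_trials):
        # Draw uncertain inputs once per trial: an annual storage cost from a
        # triangular distribution and a discount rate from a range of values
        # rather than one single rate.
        annual_cost = rng.triangular(100e6, 150e6, 250e6)  # dollars per year
        rate = rng.uniform(0.02, 0.05)                     # discount rate
        # Present value: sum of discounted annual costs, C_t / (1 + r)^t.
        totals[i] = np.sum(annual_cost / (1.0 + rate) ** years)

    low, high = np.percentile(totals, [5, 95])
    print(f"90% range of present-value cost: ${low/1e9:.1f}B to ${high/1e9:.1f}B")

Because the discount rate itself is sampled from a range, the resulting cost range reflects both cost and discount-rate uncertainty, mirroring the approach described above.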
In addition, centralized storage faces several implementation challenges: DOE (1) lacks statutory authority to provide centralized storage under NWPA, (2) is expected to have difficulty finding a location willing to host a centralized storage facility, and (3) faces potential transportation risks. The estimated cost of implementing centralized storage for 100 years ranges from $15 billion to $29 billion for 153,000 metric tons of nuclear waste, and the total cost ranges from $23 billion to $81 billion if the nuclear waste is centrally stored and then disposed of in a geologic repository. As the administration re-examines the Yucca Mountain repository and national nuclear waste policy, centralized dry cask storage could provide a near-term alternative for managing the waste that has accumulated and will continue to accumulate. This would provide additional time—NRC has stated that spent nuclear fuel storage is safe and environmentally acceptable for a period on the order of 100 years—to consider other long-term options that may involve alternative policies and new technologies, and it would allow some flexibility for their implementation. For example, centralized storage would maintain nuclear waste in interim dry storage configurations so that it could be easily accessible for reprocessing in case the nation decided to pursue reprocessing as a waste management option and developed technologies that address current proliferation and cost concerns. In fact, reprocessing facilities could be built near or adjacent to centralized facilities to maximize efficiencies. However, even with reprocessing, some of the spent nuclear fuel and high-level waste in current inventories would require final disposal. Centralized storage would consolidate the nation's nuclear waste after reactors are decommissioned, thereby decreasing the complexity of securing and overseeing the waste and increasing the efficiency of waste storage operations. This alternative would remove nuclear waste from all DOE sites and from nine shutdown reactor sites that have no operations other than nuclear waste storage, allowing these sites to be closed. Some of these storage sites occupy land that potentially could be used for other purposes, imposing an opportunity cost on states and communities that no longer receive the benefits of electricity generation from the reactors. To compensate for this loss, industry officials noted, at least two states where decommissioned sites are located have tried to raise property taxes on the sites, and at one site, the state collects a per-cask fee for storage. In addition, the continued storage of nuclear waste at decommissioned sites can cost the power companies between about $4 million and $8 million per year, according to several experts. Centralized storage could allow reactor operators to thin out spent nuclear fuel assemblies from densely packed spent fuel pools and may also prevent operating reactors from having to build the additional dry storage capacity they would need if the nuclear waste remained on site. According to an industry official, 28 reactor sites could have to add dry storage facilities over the next 10 years in order to maintain desired capacity in their storage pools. These dry storage facilities could cost about $30 million each, but this cost would vary widely by site. In addition, some current reactor sites use older waste storage systems and are near large cities or large bodies of fresh water used for drinking or irrigation.
Although NRC's licensing and inspection process is designed to ensure that these existing facilities appropriately protect public health and safety, new centralized facilities could use state-of-the-art design technology and be located in remote areas with fewer environmental hazards, in order to protect public health and enhance safety. Finally, if DOE uses centralized facilities to store commercial spent nuclear fuel, this alternative could allow DOE to fulfill its obligation to take custody of the commercial spent nuclear fuel until a long-term strategy is implemented. As a result, DOE could curtail its liabilities to the electric power companies, potentially saving the government up to $500 million per year after 2020, as estimated by DOE. The actual impact of centralized storage on the amount of the liabilities would depend on several factors, including when centralized storage is available, whether reactor sites had already built on-site dry storage facilities for which the government may be liable for a portion of the costs, how soon waste could be transported to a centralized site, and the outcome of pending litigation that may affect the government's total liability.

DOE estimates that if various complex statutory, regulatory, siting, construction, and financial issues were expeditiously resolved, a centralized facility to accept nuclear waste could begin operations as early as 6 years after its development begins. However, a centralized storage expert estimated that the process from site selection until a centralized facility opens could take between 17 and 33 years.

Although centralized storage has a number of positive attributes, it provides only an interim alternative and does not eliminate the need for final disposal of the nuclear waste. To keep the waste safe and secure, a centralized storage facility relies on active institutional controls, such as monitoring, maintenance, and security. Over time, the storage systems may degrade and institutional controls may be disrupted, which could result in increased risk of radioactive exposure to humans or the environment. For example, according to several experts on dry cask systems, the vents on the casks—which allow for passive cooling—must be periodically inspected to ensure no debris clogs them, particularly during the first several decades when the spent nuclear fuel is thermally hot. If the vents become clogged, the temperature in the canister could rise, which could shorten the life of the dry cask storage system. Over a longer time frame, the concrete on the exterior of the casks could degrade, requiring more active maintenance. Although some experts stated that the risk of radiation being released into the environment may be low, such risks can be avoided by permanently isolating the waste in a manner that does not require indefinite, active institutional controls, such as disposal in a geologic repository.

A key challenge confronting the centralized storage alternative is the lack of authority under NWPA for DOE to provide such storage. Provisions in NWPA that allow DOE to arrange for centralized storage have either expired or are unusable because they are tied to milestones in repository development that have not been met. For example, NWPA authorized DOE to provide temporary storage for a limited amount of spent nuclear fuel until a repository was available, but this authority expired in 1990.
Some industry representatives have stated that DOE still has the authority to accept and store spent nuclear fuel under the Atomic Energy Act of 1954, as amended, but DOE asserts that NWPA limits its authority under the Atomic Energy Act. In addition, NWPA provided authority for DOE to site, construct, and operate a centralized storage facility, but such a facility could not be constructed until NRC authorized construction of the Yucca Mountain repository, and the facility could only store up to 10,000 metric tons of nuclear waste until the repository started accepting spent nuclear fuel. Therefore, unless provisions in NWPA were amended, centralized storage would have to be funded, owned, and operated privately. A privately operated centralized storage facility, such as the proposed Private Fuel Storage Facility in Utah, would likely not resolve DOE's liabilities to the nuclear power companies.

A second, equally important challenge to centralized storage is the likelihood of opposition during site selection for a facility. Experts noted that affected states and communities would raise concerns about safety, security, and the likelihood that an interim centralized storage facility could become a de facto permanent storage site if progress is not made on a permanent repository. Even if a local community supports a centralized storage facility, the state may not. For example, the Private Fuel Storage facility was generally supported by the Skull Valley Band of the Goshute Indians, on whose reservation the facility was to be located, but the state of Utah and some tribal members opposed its licensing and construction. Other states have indicated their opposition to involuntarily hosting a centralized facility through means such as the Western Governors' Association, which issued a resolution stating that "no such facility, whether publicly or privately owned, shall be located within the geographic boundaries of a Western state without the written consent of the governor." Some experts noted that a state or community may be willing to serve as a host if substantial economic incentives were offered and if the party building the site undertook a time-consuming and expensive process of site characterization and safety assessment. However, DOE officials stated that in their previous experience—such as with the Nuclear Waste Negotiator about 15 to 20 years ago—they have found no incentive package that has successfully encouraged a state to voluntarily host a site.

A third challenge to centralized storage is that nuclear waste would likely have to be transported twice—once to the centralized site and once to a permanent repository—if a centralized site were not colocated with a repository. Therefore, the total distance over which nuclear waste is transported is likely to be greater than with other alternatives, an important factor because, according to one expert, transportation risk is directly tied to this distance. However, according to DOE, nuclear waste has been safely transported in the United States since the 1960s, and reports sponsored by the National Academy of Sciences, NRC, and DOE have found that the associated risks are well understood and generally low. Yet there are also perceived risks associated with nuclear waste transportation that can result in lower property values along transportation routes, reductions in tourism, and increased anxiety, all of which can create community opposition to nuclear waste transportation.
According to experts, transportation risks could be mitigated through such means as shipping the least radioactive fuel first, using trains that transport only nuclear waste, and identifying routes that minimize possible impacts on highly populated areas. In addition, the hazards associated with transportation from a centralized facility to a repository would decline as the waste decayed and became less radioactive at the centralized facility.

As shown in table 4, our models generated cost ranges from $23 billion to $81 billion for the centralized storage of 153,000 metric tons of spent nuclear fuel and high-level waste for 100 years followed by geologic disposal. For centralized storage without disposal, costs would range from $12 billion to $20 billion for 70,000 metric tons of waste and from $15 billion to $29 billion for 153,000 metric tons of waste. These centralized model scenarios include the cost of on-site operations required to package and prepare the waste for transportation, such as storing the waste in dry cask storage until it is transported off site, developing and operating a system to transport the waste to centralized storage, and constructing and operating two centralized storage facilities. (See app. IV for information about our modeling methodology, assumptions, and results.)

Actual centralized storage costs may be more or less than these cost ranges if a different centralized storage scenario is implemented. For example, our models assume that there would be two centralized facilities, but licensing, construction, and operations and maintenance costs would be greater if there were more than two facilities and lower if there were only one facility. Some experts told us that centralized storage would likely be implemented with only one facility because it would be too difficult to site two. But other experts noted that having more sites could reduce the number of miles traveled by the waste and provide a greater degree of geographic equity. The length of time the nuclear waste is stored could also affect the cost ranges, particularly if the nuclear waste were stored for a shorter or longer period than our models assume. For periods longer than 100 years, experts told us that the dry storage cask systems may be subject to degradation and require repackaging, substantially raising the costs, as well as the level of uncertainty in those costs. Transportation is another area where costs could vary if, for example, transportation were not by rail or if the transportation system differed significantly from what is assumed in our models. Furthermore, costs could be outside our ranges if the final disposition of the waste is different. Our scenario that includes geologic disposal is based on the current cost projections for Yucca Mountain, but these costs could be significantly different for another repository site or if much of the nuclear waste is reprocessed. A different geologic repository would have unique site characterization costs, may use an entirely different design than Yucca Mountain, and may be more or less difficult to build. Also, reprocessing could contribute significantly to the cost of an alternative. For example, we previously reported that construction of a reprocessing plant with an annual production throughput of 3,000 metric tons of spent nuclear fuel could cost about $44 billion.
Studies analyzed by the Congressional Budget Office estimate that once a reprocessing plant is constructed, spent nuclear fuel could be reprocessed at between $610,000 and $1.4 million per metric ton, adjusted to 2009 constant dollars. This would result in an annual cost of about $2 billion to $4 billion, assuming a throughput of 3,000 metric tons per year.

Finally, the actual cost of implementing one of our centralized storage scenarios would likely be higher than our estimated ranges indicate because our models omit several location-specific costs. These costs could not be quantified in our generic models because we did not make an assumption about the specific location of the centralized facilities. For example, a few experts noted that incentives may be given to a state or locality as a basis for allowing a centralized facility to be built, but the incentive amount may vary from location to location based on what agreement is reached. Also, several experts said that rail construction may be required for some locations, which could add significant cost depending on the length of new rail line required at a specific location. Experts could not provide data for these location-dependent costs with any degree of certainty, so we did not use them in our models. Also, the funding source for government-run centralized storage is unclear. The Nuclear Waste Fund, which electric power companies pay into, was established by NWPA to fund a permanent repository and cannot be used to pay for centralized storage without amending the act. Without such a change, the cost for the federal government to implement this alternative would likely have to be borne by the taxpayers.

On-site storage of nuclear waste provides an intermediate option to manage the waste until the government can take possession of it, requiring minimal effort to change from what the nation is currently doing to manage its waste. In the meantime, other longer-term policies and strategies could be considered. Such strategies would eventually be required because the on-site storage alternative would not eliminate the need for final disposal of the waste. Some experts believe that legal, community, and technical challenges associated with on-site storage will intensify as the waste remains on site without plans for final disposition because, for example, communities are more likely to oppose recertification of on-site storage. The estimated cost to continue storing 153,000 metric tons of nuclear waste on site for 100 years ranges from $13 billion to $34 billion, and total costs would range from $20 billion to $97 billion if the nuclear waste is stored on site for 100 years and then disposed of in a geologic repository.

Because of delays in the Yucca Mountain repository, on-site storage has continued as the nation's strategy for managing nuclear waste; thus, its continuation would require minimal near-term effort and allow time for the nation to consider alternative long-term nuclear waste management options. This alternative maintains the waste in a configuration where it is readily retrievable for reprocessing or other disposition, according to an expert. However, like centralized storage, on-site storage is an interim strategy that relies on active institutional controls, such as monitoring, maintenance, and security. To permanently isolate the waste from humans and the environment without the need for active institutional controls, some form of final disposal would be required, even if some of the waste were reprocessed.
The additional time in on-site storage may also make the waste safer to handle because older spent nuclear fuel and high-level waste have had a chance to cool and become less radioactive. As a result, on-site storage could reduce transportation risks, particularly in the near term, since the nuclear waste would be cooler and less radioactive when it is finally transported to a repository. In addition, some experts state that older, cooler waste may provide more predictability in repository performance and be somewhat safer than younger, hotter waste. However, NRC cautioned that the ability to handle the waste more safely in the future also depends on other factors, including how the waste or waste packages might degrade over time. In particular, NRC stated that there are many uncertainties in the behavior of spent nuclear fuel as it ages, such as potential fracturing of the structural assemblies, possibly increasing the risks of release. If the waste has to be repackaged, for example, the process may require additional safety measures.

Some experts noted that continuing to store nuclear waste on site would be more equitable than consolidating it in one or a few areas. Under this alternative, the waste, along with its associated risks, would be kept in the location where the electrical power was generated, leaving the responsibility and risks of the waste in the communities that benefited from its generation.

With on-site storage of DOE-managed spent nuclear fuel and high-level waste, DOE would have difficulty meeting enforceable agreements with states, which could result in significant costs being incurred the longer spent nuclear fuel remains on site. In addition to Idaho's agreement to impose a penalty of $60,000 per day if spent nuclear fuel is not removed from the state by 2035, DOE has an agreement with Colorado stating that if the spent fuel at Fort St. Vrain is not removed by January 1, 2035, the government will, subject to certain conditions, pay the state $15,000 per day until it is removed. Other states where DOE spent nuclear fuel and high-level waste are currently stored may seek similar penalties if the spent fuel and waste remain on site with no progress toward a permanent repository or centralized storage facility.

A second challenge is the cost due to the government's possible legal liabilities to commercial reactor operators. Leaving waste on site under the responsibility of the electric power companies does not relieve the government of its obligation to take custody of the waste; thus, these liabilities could continue to mount. For every year after 2020 that DOE fails to take custody of the waste in accordance with its contracts with the reactor operators, DOE estimates that the government will accumulate up to $500 million per year in liabilities beyond the estimated $12 billion that will have accrued up to that point; however, the outcome of pending litigation could substantially affect the government's total liability. The government would no longer incur these costs once DOE takes custody of the waste. Some representatives from industry have stated that it is not practical for DOE to take custody of the waste at commercial reactor sites. Moreover, some electric power company executives have stated that their ratepayers are paying for DOE to provide a geologic repository through their contributions to the Nuclear Waste Fund, and the executives believe that simply taking custody of the waste is not sufficient.
A DOE official stated that if DOE were to take custody of the waste on site, it would be a complex undertaking due to considerations such as liability for accidents.

Third, continued use of on-site storage would likely also face community opposition. Some experts noted that without progress on a centralized storage facility or repository site to which waste will be moved, some state and local opposition to reactor storage site recertification will increase, and so will challenges to nuclear power companies' applications for reactor license extensions and combined licenses to construct and operate new reactors. Also, experts noted that many commercial reactor sites are not suitable for long-term storage, and none has had an environmental review to assess the impacts of storing nuclear waste at the site beyond the period for which it is currently licensed. One expert noted that if on-site storage were to become a waste management policy, the long-term health, safety, and environmental risks at each site would have to be evaluated. Because waste storage would extend beyond the life of nuclear power reactors, decommissioned reactor sites would not be available for other purposes, and the former reactor operators may have to stay in business for the sole purpose of storing nuclear waste.

Finally, although dry cask storage is considered reliable in the short term, the longer-term costs, maintenance requirements, and security requirements are not well understood. Many experts said waste packages will likely retain their integrity for at least 100 years, but eventually dry storage systems may begin to degrade and the waste in those systems would have to be repackaged. However, commercial dry storage systems have only been in existence since 1986, so nuclear utilities have little experience with long-term system degradation and requirements for repackaging. Some experts suggested that only the outer protective cask, and not the inner canister, would require replacement. Yet other experts said that, over time, the inner canister would also be exposed to environmental conditions through vents in the outer cask, which could cause corrosion and require a total system replacement. In addition, experts disagreed on the relative safety risks and costs of using spent fuel pools to transfer the waste during repackaging compared with using a dry transfer system, which industry representatives said had not been used on a commercial scale. Finally, future security requirements for extended storage are uncertain because as spent nuclear fuel ages and becomes cooler and less radioactive, it becomes less lethal to anyone attempting to handle it without protective shielding. For example, a spent nuclear fuel assembly can lose nearly 80 percent of its heat 5 years after it has been removed from a reactor, thereby reducing one of the inherent deterrents to thieves and terrorists attempting to steal or sabotage the spent nuclear fuel and potentially creating a need for costly new security measures.

As shown in table 5, our models generated cost ranges from $20 billion to $97 billion for the on-site storage of 153,000 metric tons of spent nuclear fuel and high-level waste for 100 years followed by geologic disposal. For on-site storage for 100 years without disposal, costs would range from $10 billion to $26 billion for 70,000 metric tons of waste and from $13 billion to $34 billion for 153,000 metric tons of waste.
On-site storage costs would increase significantly if the waste were stored for longer periods—storing 153,000 metric tons on site for 500 years would cost from $34 billion to $225 billion—because it would have to be repackaged every 100 years for safety. The on-site storage model scenarios include the costs of on-site operations required to package the waste into dry canister storage, build additional dry storage at the reactor sites, prepare the waste for transportation, and operate and maintain the on-site storage facilities. Most of the costs for the first 100 years would result from the initial loading of materials into dry storage systems. (See app. IV for information on our modeling methodology, assumptions, and results.)

Actual on-site storage costs may be more or less than these cost ranges if a different on-site storage scenario is implemented. For example, to keep them distinct from the centralized storage models, our on-site storage models assume that there would be no transportation or consolidation of waste between the reactor sites. However, several experts noted that in an actual on-site storage scenario, reactor operators would likely consolidate their waste to make operations more efficient and reduce costs. Also, as with the centralized storage alternative, costs for the on-site storage scenario that includes geologic disposal could differ for a repository site other than Yucca Mountain or for additional waste management technologies.

Finally, our models did not include certain costs that were either location-specific or could not be predicted with enough certainty to be quantified for our purposes, which would make the actual costs of on-site storage higher than our cost ranges. For example, the taxes and fees associated with on-site storage could vary significantly by state and over time. Also, repackaging operations in our 500-year on-site storage scenario would generate low-level waste that would require disposal. However, the amount of waste generated and the associated disposal costs could vary depending on the techniques used for repackaging. Finally, the total amount of the government's liability for failure to begin taking spent nuclear fuel for disposal in 1998 will depend on the outcome of pending and future litigation.

Like the centralized storage alternative, the funding source for the on-site storage alternative is uncertain. The reactor operators have been paying the cost of storing the waste but have filed lawsuits to be compensated for the storage costs of waste that the federal government was required to take title to under standard contracts. Payments resulting from these lawsuits have come from the Department of the Treasury's judgment fund, which is funded by the taxpayer, because a court determined that the Nuclear Waste Fund could not be used to compensate electric power companies for their storage costs. Without legislative or contractual changes—such as allowing the Nuclear Waste Fund to be used for on-site storage—taxpayers would likely bear the ultimate costs of on-site storage.

Developing a long-term national strategy for safely and securely managing the nation's high-level nuclear waste is a complex undertaking that must balance health, social, environmental, security, and financial factors. In addition, virtually any strategy considered will face many political, legal, and regulatory challenges in its implementation. Any strategy selected will need to have geologic disposal as a final disposition pathway.
In the case of the Yucca Mountain repository, these challenges have left the nation with nearly three decades of experience. In moving forward, whether the nation commits to the same or a different waste management strategy, federal agencies, industry, and policymakers at all levels of government can benefit from the lessons of Yucca Mountain. In particular, stakeholders can better understand the need for a sustainable national focus and community commitment. Federal agencies, industry, and policymakers may also want to consider a strategy of complementary and parallel interim and long-term disposal options—similar to those being pursued by some other nations—which might provide the federal government with maximum flexibility, since it would allow time to work with local communities and to pursue research and development efforts in key areas, such as reprocessing.

We provided DOE and NRC with a draft of this report for their review and comment. In their written comments, DOE and NRC generally agreed with the report. (See apps. V and VI.) In addition, both DOE and NRC provided comments to improve the draft report's technical accuracy, which we have incorporated as appropriate. We also discussed the draft report with representatives of the Nuclear Waste Technical Review Board, the Nuclear Energy Institute, and the State of Nevada Agency for Nuclear Projects. These representatives provided comments to clarify information in the draft report, which we have incorporated as appropriate.

As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to other appropriate congressional committees, the Secretary of Energy, the Chairman of NRC, the Director of the Office of Management and Budget, and other interested parties. The report also will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-3841 or gaffiganm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs can be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VII.

For this report we examined (1) the key attributes, challenges, and costs of the Yucca Mountain repository; (2) alternative nuclear waste management approaches; (3) the key attributes, challenges, and costs of storing the nuclear waste at two centralized sites; and (4) the key attributes, challenges, and costs of continuing to store the nuclear waste at its current locations. To provide information on the key attributes and challenges of the Yucca Mountain repository, we reviewed documents and interviewed officials from the Department of Energy's (DOE) Office of Civilian Radioactive Waste Management and Office of Environmental Management; the Nuclear Regulatory Commission's (NRC) Division of Spent Fuel Storage and Transportation and Division of High Level Waste Repository Safety, both within the Office of Nuclear Material Safety and Safeguards; and the Department of Justice's Civil Division. We also reviewed documents and interviewed representatives from the National Academy of Sciences, the Nuclear Waste Technical Review Board, and other concerned groups. Once we developed our preliminary analysis of Yucca Mountain's key attributes and challenges, we solicited input from nuclear waste management experts. (See app. II for our methodology for soliciting comments from nuclear waste management experts and app. III for a list of these experts.)
To analyze the costs for the Yucca Mountain repository through to closure, we started with the cost information in DOE's Yucca Mountain Total System Lifecycle Cost report, which used 122,100 metric tons of nuclear waste in its analysis. We asked DOE officials to provide a breakdown of the component costs on a per-metric-ton basis that DOE used in the Total System Lifecycle Cost report. We used this information to calculate the costs of a repository at Yucca Mountain for 70,000 metric tons and 153,000 metric tons, changing certain component costs based on the ratio between 70,000 and 122,100 or 153,000 and 122,100. For example, we modified the cost of constructing the tunnels for emplacing the waste for the 70,000-metric-ton scenario by 0.57, the ratio of 70,000 metric tons to 122,100 metric tons. We applied this approach to component costs that would be affected by the ratio difference, particularly for transporting and emplacing the waste and installing drip shields. We also incorporated DOE's cost estimates for potential delays to licensing the Yucca Mountain repository into our analysis and made modifications to the analysis based on comments by cognizant DOE officials. Finally, we discounted DOE's costs, which were in 2008 constant dollars, to 2009 present value using the methodology described in appendix IV. (A brief sketch of this scaling approach appears at the end of this section.)

To examine and identify alternatives, we started with a series of interviews with federal and state officials and industry representatives. We also gathered and reviewed numerous studies and reports on managing nuclear waste—along with interviewing the authors of many of these studies—from federal agencies, the National Academy of Sciences, the Nuclear Waste Technical Review Board, the Massachusetts Institute of Technology, the American Physical Society, Harvard University, the Boston Consulting Group, and the Electric Power Research Institute. To better understand how commercial spent nuclear fuel is stored, we visited the Dresden Nuclear Power Plant in Illinois and the Hope Creek Nuclear Power Plant in New Jersey, which both store spent nuclear fuel in pools and in dry cask storage. We also visited DOE's Savannah River Site in South Carolina and Fort St. Vrain site in Colorado to observe how DOE-managed spent nuclear fuel and high-level waste are processed and stored.

As we began to identify potential alternatives to analyze, we shared our initial approach and methodology with nuclear waste management experts—including members of the National Academy of Sciences and the Nuclear Waste Technical Review Board to obtain their feedback—and revised our approach accordingly. Many of these experts advised us to develop generic, hypothetical alternatives with clearly defined assumptions about technology and environmental conditions. Industry representatives and other experts advised us that trying to account for the thousands of variables relating to geography, the environment, regional regulatory differences, or differences in business models would result in infeasible and unmanageable models. They also advised us against trying to predict future changes in technologies or environmental conditions because such predictions would be purely conjectural and fall beyond the scope of this analysis. Based on this information, we identified two generic, hypothetical alternatives to use as the basis of our analysis: centralized storage and on-site storage.
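To make the tonnage-ratio scaling concrete, the short Python sketch below applies the ratios to a set of volume-dependent component costs. Only the 122,100-metric-ton baseline and the ratios themselves come from our methodology; the component names and dollar figures are hypothetical placeholders rather than DOE's actual values.

```python
# Hypothetical sketch of the tonnage-ratio scaling described above; the
# component names and dollar values are illustrative, not DOE's figures.
BASE_TONNAGE = 122_100  # metric tons in DOE's Total System Lifecycle Cost report

def scale_component(cost_at_base: float, target_tonnage: int) -> float:
    """Scale a volume-dependent component cost by the tonnage ratio."""
    return cost_at_base * (target_tonnage / BASE_TONNAGE)

# Illustrative volume-dependent components, in billions of dollars.
components = {
    "emplacement tunnels": 10.0,
    "waste transport and emplacement": 8.0,
    "drip shields": 5.0,
}

for tonnage in (70_000, 153_000):
    ratio = tonnage / BASE_TONNAGE
    total = sum(scale_component(cost, tonnage) for cost in components.values())
    print(f"{tonnage:,} metric tons: ratio {ratio:.2f}, "
          f"scaled volume-dependent cost ${total:.1f} billion")
```

For the 70,000-metric-ton case, the ratio evaluates to 0.57, matching the factor applied to the tunnel-construction cost described above; components that do not vary with volume would be carried over unscaled.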
Within each of these alternatives, we identified different scenarios that examined the costs associated with the management of 70,000 metric tons and 153,000 metric tons of nuclear waste and whether or not the waste is shipped to a repository for disposal after 100 years. Once we identified the alternatives, we again consulted with experts to establish assumptions regarding commercial spent nuclear fuel management and its associated components to define the scope and specific processes that would be included in each alternative.

To identify a more complete, qualified list of nuclear waste management experts with relevant experience who could provide and critique this information, we used a technique known as snowballing. We started with experts in the field who were known to us, primarily from DOE, NRC, the National Conference of State Legislatures, the State of Nevada Agency for Nuclear Projects, the Nuclear Energy Institute, and the National Association of Regulatory Utility Commissioners, and asked them to refer us to other experts, focusing on U.S.-based experts. We then contacted these individuals and asked for additional referrals. We continued this iterative process until additional interviews did not lead us to any new names or we determined that the qualified experts in a given technical area had been exhausted. We conducted an initial interview with each of these experts by asking them questions about the nature and extent of their expertise and their views on the Yucca Mountain repository. Specifically, we asked each expert:

- What is the nature of your expertise?
- How many years have you been doing work in this area?
- Does your expertise allow you to comment on planning assumptions and costs of waste management related to storage, disposal, or transport?
- If you were to classify yourself in relation to the Yucca Mountain repository, would you classify yourself as a proponent, an opponent, an independent, an undecided or uncommitted, or some combination of these?

We then narrowed our list down to those individuals who identified themselves or whom others identified as having current, nationally recognized expertise in areas of nuclear waste management that were relevant to our analysis. For balance, we ensured that we included experts who reflected (1) key technical areas of waste management; (2) a range of industry, government, academia, and concerned groups; and (3) a variety of viewpoints on the Yucca Mountain repository. (See app. III for the 147 experts we contacted.)

Once we developed our list of experts, we classified them into three groups:

- those whose expertise would allow them to provide us with specific information and advice on the processes that should be included in each alternative and the best estimates of expected cost ranges for the components of each alternative, such as a typical or reasonable price for a dry cask storage system;
- those who could weigh in on these estimates, as well as give us insight and comments on assumptions that we planned to use to define our alternatives; and
- those whose expertise was not in areas of component costs but who could nonetheless give us valuable information on other assumptions, such as transportation logistics.

To define our alternatives and develop the assumptions and cost components we needed for our analysis, we started with the experts from the first group, who had the most direct and reliable knowledge of the processes and costs associated with the alternatives we identified.
This group consisted of seven experts and included federal government officials and representatives from industry. We worked closely with these experts to identify the key assumptions that would establish the scope of our alternatives, the more specific assumptions to identify the processes associated with each alternative, the components of these processes that we could quantify in terms of cost, and the level of uncertainty associated with each component cost. For example, two of the experts in this first group told us that for the on-site alternative, commercial reactor sites that did not already have independent spent nuclear fuel storage installations would have to build them during the next 10 years and that the cost for licensing, design, and construction of each installation would range from $24 million to $36 million. Once we had gathered our initial assumptions and cost components, we used a data collection instrument to solicit comments on them from all of our experts. We then used the experts' comments to refine our assumptions and component costs. (See app. II for our methodology for consulting with this larger group of nuclear waste management experts.)

DOE officials provided assumptions and cost data for managing DOE spent nuclear fuel and high-level waste, which we incorporated into our analysis of the centralized storage and on-site storage alternatives. These assumptions and cost data covered management of spent nuclear fuel and high-level waste at DOE's Idaho National Laboratory, Hanford Reservation, Savannah River Site, and West Valley site.

To gather information on the key attributes and challenges of our alternatives, we interviewed agency officials and nuclear waste management experts from industry, academic institutions, and concerned groups. We also reviewed the reports and studies and visited the locations that were mentioned in the previous section. To ensure that the attributes and challenges we developed were accurate, comprehensive, and balanced, we asked our snowballed list of experts to provide their comments on our work, using the data collection instrument that is described in appendix II. We used the comments that we received to expand the attributes or challenges on our list or, where necessary, to modify our characterization of individual attributes or challenges.

To generate cost ranges for the centralized storage and on-site storage alternatives, we developed four large-scale cost models that analyzed the costs for each alternative of storing 70,000 metric tons and 153,000 metric tons of nuclear waste for 100 years followed by disposal in a geologic repository. (See app. IV.) We also generated cost ranges for each alternative of storing the waste for 100 years without including the cost of subsequent disposal in a geologic repository, as well as for storing 153,000 metric tons of waste on site for 500 years. Each model relied upon data and assumptions provided by nuclear waste management experts, and each cost range was based on the annual volume of commercial spent nuclear fuel that became ready to be packaged and stored each year. In general, each model started in 2009 by annually tracking the costs of initial packaging and related costs for the first 100 years and for every 100 years thereafter if the waste was to remain on site and be repackaged.
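The annual tracking framework described above can be sketched in a few lines of Python. The dollar figures and waste stream below are hypothetical placeholders; the actual models were Excel spreadsheets run with Crystal Ball, and only the 2009 start year and the 100-year repackaging cycle come from our assumptions.

```python
# Minimal sketch of the annual cost-tracking framework; all dollar figures
# and the waste stream are hypothetical placeholders.
START_YEAR = 2009
PACKAGING_COST_PER_TON = 500_000    # hypothetical, dollars per metric ton
REPACKAGING_COST_PER_TON = 400_000  # hypothetical, dollars per metric ton
ANNUAL_OM_COST = 50_000_000         # hypothetical operations and maintenance

def annual_costs(waste_ready_by_year, horizon_years):
    """Track costs year by year: initial packaging as waste becomes ready,
    plus repackaging of each cohort every 100 years if it remains on site."""
    costs = {}
    for year in range(START_YEAR, START_YEAR + horizon_years):
        cost = ANNUAL_OM_COST
        cost += waste_ready_by_year.get(year, 0) * PACKAGING_COST_PER_TON
        for ready_year, tons in waste_ready_by_year.items():
            age = year - ready_year
            if age > 0 and age % 100 == 0:  # repackaging due for this cohort
                cost += tons * REPACKAGING_COST_PER_TON
        costs[year] = cost
    return costs

# Example: a hypothetical 2,500 metric tons of commercial waste becomes
# ready each year through 2065, tracked over a 500-year scenario.
stream = {year: 2_500 for year in range(START_YEAR, 2066)}
total = sum(annual_costs(stream, 500).values())
print(f"Undiscounted total over 500 years: ${total / 1e9:.0f} billion")
```

In the actual models, each year's costs were then discounted to 2009 present value and combined with DOE's cost data, as described in appendix IV.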
Because our models analyzed only the costs associated with managing commercial nuclear waste, we augmented them with DOE's cost data for (1) managing its spent nuclear fuel and high-level waste and (2) constructing and operating a permanent repository. Specifically, we used DOE's estimated costs for the Yucca Mountain repository to represent the cost of a hypothetical permanent repository.

We conducted this performance audit from April 2008 to October 2009 in accordance with generally accepted government auditing standards. These standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

As discussed in appendix I, we gathered the assumptions and associated component costs used to define our nuclear waste management alternatives by consulting with experts in an iterative process of identifying initial assumptions and component costs and revising them based on expert comments. This appendix (1) describes the data collection instrument we used to obtain comments on the initial assumptions and component costs, (2) describes how we analyzed the comments and revised our assumptions, and (3) provides a list of the assumptions and cost data that we derived through this process and used in our cost models.

To obtain comments from a broad group of nuclear waste management experts, we compiled the initial assumptions and component costs that we gathered from a small group of experts into a data collection instrument that included

- a description of the Yucca Mountain repository and our proposed nuclear waste management alternatives—on-site storage and centralized storage—and the attributes and challenges associated with them;
- our initial assumptions that would identify and define the processes, time frames, and major components used to bound our hypothetical centralized and on-site storage alternatives;
- the major component costs of each alternative, including definitions and initial cost data; and
- the components associated with each alternative that had a high degree of uncertainty and that we did not attempt to quantify in terms of costs.

The data collection instrument asked the experts to answer specific questions about each piece of information that we provided (see table 6). We pretested our instrument with several individual experts to ensure that our questions were clear and would provide us with the information that we needed, and then refined the instrument accordingly. Next, we sent the instrument to 114 experts who were identified through our snowballing methodology (see apps. I and III). Each expert received the sections of our data collection instrument that included the attributes and challenges of the alternatives and the initial assumptions, but only those experts with the type and level of expertise to comment on costs received the cost component sections. We received 67 sets of comments from independent experts and from experts representing industry, the federal government, state governments, and other concerned groups. These experts also represented a range of viewpoints on the Yucca Mountain repository. Each of their responses was compiled into a database organized by each individual assumption or cost element for the on-site storage and centralized interim storage alternatives.
To arrive at the final assumptions and cost component data for our models, we qualitatively analyzed the experts' comments. The comments we received on the assumptions differed in nature from those we received on the component costs, so our analysis and disposition of the comments differed slightly. For the assumptions, we took the comments made when an expert did not believe an assumption was entirely reasonable and grouped similar comments together. We determined the relevance of a comment based on whether it provided a basis upon which we could modify the assumption or was within the scope or capability of our models. For example, we received several comments about how an assumption may be affected by nuclear waste from new reactors, including potential liabilities if the Department of Energy (DOE) does not take custody of that waste, but in the key assumptions defining our alternatives, we explicitly excluded new reactors because we could not predict how many new reactors would be built, when they would operate, or the amount of waste that they would generate. For those comments that were relevant, we weighed the expertise of those making the comments and determined whether the balance of the comments warranted a modification to our preliminary assumption. In some instances, we conducted followup interviews with selected experts to clarify issues that the broad group of experts raised.

For the component costs, we organized the comments on a particular component based on whether an expert thought the cost and uncertainty range was reasonable, too high, or too low, or that the range was too broad or too narrow. We developed a ranking system to identify which experts had the greatest degree of direct experience with or knowledge of the cost and weighed their comments accordingly to determine whether our preliminary cost should be modified. Also, we took into account the incidence of expert agreement or disagreement when deciding how much uncertainty to apply to a particular cost. Through this analysis, we determined that the preponderance of our preliminary assumptions and cost data were reasonable for use in our models, either because the experts generally agreed they were reasonable or because the experts who thought they were reasonable had a greater degree of relevant expertise or knowledge than those who commented otherwise. However, some of the experts' responses indicated that a modification to our model was needed. Table 7 presents a summary of the modifications we made to our model assumptions and cost data based on the expert comments received.

[Appendix III table: the nuclear waste management experts we consulted and their affiliations. The affiliations include the U.S. Nuclear Waste Technical Review Board (members, staff, and the chairman); the Nuclear and Radiation Studies Board of the National Research Council of the National Academies; DOE's Office of Civilian Radioactive Waste Management; NRC's Division of Spent Fuel Storage and Transportation, Division of High Level Waste Repository Safety, and Office of Nuclear Security and Incident Response; the Department of Justice's Civil Division; the Department of Defense/Department of the Navy; Lawrence Livermore National Laboratory; the State of Nevada Agency for Nuclear Projects and Nuclear Waste Project Office; Nye County, Nevada; the Idaho and Utah departments of environmental quality; Stanford University; the University of California at Berkeley; the University of New Mexico; the University of Nevada Las Vegas; California State University, Northridge; Bechtel SAIC Company, LLC; Dominion Resources, Inc.; Transnuclear, Inc.; Energy Resources International, Inc.; M.S. Chu & Associates; Mike Thorne and Associates Limited; The Yankee Nuclear Power Companies; the National Association of Regulatory Utility Commissioners; the National Conference of State Legislatures; the Council of State Governments, Midwestern Office; the Nuclear Information and Resource Service; the Institute for Energy and Environmental Research; and the Carnegie Institution for Science.]

The methodology and results of the models we developed to analyze the total costs of two alternatives for managing nuclear waste are based on cost data and assumptions we gathered from experts. Specifically, this appendix contains information on the following:

- the modeling methodology we developed to generate a range of total costs for the two nuclear waste management alternatives with two different volumes of waste;
- the Monte Carlo simulation process we used to address uncertainties in the input data;
- the discounting methodology we developed to derive the present value of total costs in 2009 dollars;
- the individual models and the scenarios within each model;
- the results of our cost estimations for each scenario; and
- caveats to our modeling work.

Appendixes I and II describe our methodology for collecting cost data and assumptions and how we ensured their reliability.
The general framework for our models was an Excel spreadsheet that annually tracked all costs associated with packaging, transportation, construction, operation, and maintenance of nuclear waste facilities, as well as repackaging of nuclear waste every 100 years when applicable. The starting time period for all models was the year 2009, but the end dates vary depending on the specifics of the scenario. The cost inputs were collected in constant 2008 dollars, but the range of total costs for each scenario was converted to and reported in 2009 present value dollars.

Our analysis began with an estimate of the existing and future annual volume of nuclear waste ready to be packaged and stored. We chose to model two amounts of waste: 70,000 metric tons and 153,000 metric tons. For ease of calculation, we converted all input costs to cost per metric ton of waste, when applicable. The total cost range for each scenario was developed in four steps. First, we developed the total costs for commercial spent nuclear fuel volumes of about 63,000 metric tons and 140,000 metric tons, respectively. Second, we added DOE cost data for its managed waste. Third, we discounted all annual costs to 2009 present value using a discounting methodology discussed later in this appendix. Finally, for scenarios where we assumed that the waste would be moved to a permanent repository after 100 years, we added DOE's cost estimate for the Yucca Mountain repository to represent the cost of a permanent repository. To ensure compatibility of the cost data that DOE provided with the cost ranges generated by our models, we converted DOE cost data to 2009 present value.

To address the uncertainties inherent in our analysis, we used a commercially available risk analysis software program called Crystal Ball to incorporate uncertainties associated with the data. This program allowed us to explore a wide range of possible values for all the input costs and assumptions we used to build our models. The Crystal Ball program uses a Monte Carlo simulation process, which repeatedly and randomly selects values for each input to the model from a distribution specified by the user. Using the selected values for cells in the spreadsheet, Crystal Ball then calculates the total cost of the scenario. By repeating the process in thousands of trials, Crystal Ball produces a range of estimated total costs for each scenario as well as the likelihood associated with any specific value in the range.

One of the inherent difficulties in developing the cost for a nuclear waste disposal option is that costs are spread over thousands of years. The economic concept of discounting is central to such analyses, as it allows costs incurred in the distant future to be converted to their equivalent present value. We selected discount rates primarily based on the results of studies published in peer-reviewed journals. That is, rather than subjectively selecting a single discount rate, we developed our discounting approach based on a methodology and values for discount rates that were recommended by a number of published studies.
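The trial process that Crystal Ball performs can be illustrated with a brief Python sketch. The cost inputs and their triangular ranges below are hypothetical placeholders, not values from our models; the point is that each trial draws a fresh value for every uncertain input, so thousands of trials yield a distribution of total costs rather than a single estimate.

```python
import numpy as np

# Sketch of a Monte Carlo trial process analogous to what Crystal Ball
# performs; the inputs and their (low, mode, high) triangular ranges are
# hypothetical placeholders, in billions of dollars.
rng = np.random.default_rng(seed=0)
TRIALS = 10_000

inputs = {
    "cask purchase and loading": (4.0, 5.0, 6.5),
    "facility construction": (2.0, 3.0, 5.0),
    "operations and maintenance": (6.0, 8.0, 12.0),
}

# Each trial draws a fresh value for every input, so the sum across inputs
# is a distribution of total costs rather than a point estimate.
totals = np.zeros(TRIALS)
for low, mode, high in inputs.values():
    totals += rng.triangular(low, mode, high, size=TRIALS)

low_pct, high_pct = np.percentile(totals, [5, 95])
print(f"90 percent of trials fall between ${low_pct:.1f} billion "
      f"and ${high_pct:.1f} billion")
```

The percentile output corresponds to the likelihood information Crystal Ball reports for values in the estimated range; the discount rates themselves were sampled in the same per-trial fashion, as described next.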
We selected studies that addressed issues related to discounting activities whose costs and effects spread across the distant future or many generations, also known as "intergenerational discounting." In general, we found that these studies were in near consensus on two points: (1) discounting is an appropriate methodology when analyzing projects and policies that span many generations and (2) rates for discounting the distant future should be lower than near-term discount rates and/or should decline over time. However, we found no consensus among the studies as to any specific discount rate that should be used. Consequently, we developed a discounting methodology using the following steps:

- We divided the entire time frame of our analysis into five discounting intervals: immediate, near future, medium future, far future, and far-far future.
- We assumed that within each interval the discount rates followed a triangular distribution. Based on all published rates, we developed the maximum, minimum, and mode values for each of the five intervals.
- We discounted all costs, using Crystal Ball to randomly and repeatedly select a rate from the appropriate interval and discounting cost values with a different rate for each trial.

Using these steps, we discounted all annual costs to 2009 present value. Our methodology builds on a wide range of published rates from a number of different sources in concert with the Crystal Ball program. This enabled us, to the extent possible, to address the general lack of consensus on any specific discount rate and, at the same time, address the uncertainties that were inherent in intergenerational discounting and long-term analyses of nuclear waste management alternatives.

We developed the following four models to estimate the cost of several hypothetical nuclear waste disposal alternatives, and we incorporated a number of scenarios within each model to address uncertainties that we could not easily capture with Crystal Ball:

- Model I: Centralized storage for 153,000 metric tons, which included the following scenarios:
  - Scenario 1: Centralized storage for 100 years.
  - Scenario 2: Centralized storage for 100 years plus a permanent repository after 100 years.
- Model II: Centralized storage for 70,000 metric tons, which included one scenario:
  - Scenario 1: Centralized storage for 100 years.
- Model III: On-site storage for 153,000 metric tons, which included the following scenarios:
  - Scenario 1: On-site storage for 100 years.
  - Scenario 2: On-site storage for 100 years plus a permanent repository after 100 years.
  - Scenario 3: On-site storage for 500 years.
- Model IV: On-site storage for 70,000 metric tons, which included one scenario:
  - Scenario 1: On-site storage for 100 years.

For the centralized storage model for 153,000 metric tons (Model I), we assumed that nuclear waste would remain on site until interim facilities were constructed and ready to receive the waste. Two centralized storage facilities would be constructed over 3 years—from 2025 through 2027—and would then start accepting waste. The first scenario for this model includes the costs to store waste at the centralized facilities through 2108. In the second scenario, these facilities would stay in operation through 2155, or 47 years after a permanent repository for the waste would become available. The total analysis period for the cost of this alternative plus a permanent repository continues until 2240, when the permanent repository would be expected to close.
In general, the costs for this model include the following:

- Initial costs, which include the costs of casks, cask loading, and loading campaigns, as well as operating and maintenance costs for three types of nuclear sites: operating sites with dry storage, decommissioned sites with dry storage, and decommissioned sites with wet storage. The uncertainty ranges for these costs were from plus or minus 5 percent to plus or minus 50 percent, depending on the specific cost variable.
- Costs associated with centralized facilities, including the costs of constructing, operating, and maintaining the centralized facilities and the capital and operation and maintenance costs of transporting the nuclear waste to those facilities. The uncertainty ranges for these costs are from plus or minus 10 percent to plus or minus 40 percent, depending on the cost category.

The centralized storage model for 70,000 metric tons (Model II) was developed under the assumption that total existing and newly generated waste from the private sector and DOE will be 70,000 metric tons. The stream of new annual waste ready to be moved to dry storage will continue through 2030. The cost categories and uncertainty ranges assumed for this storage alternative are the same as those assumed in the centralized storage model for 153,000 metric tons.

We developed the on-site storage model for 153,000 metric tons (Model III) under the assumption that total existing and newly generated nuclear waste from the private sector and DOE would be 153,000 metric tons. The stream of new waste ready to be moved to dry storage would continue through 2065. In general, the costs include the following:

- Initial costs, which include the costs of casks, cask loading, and loading campaigns, as well as operating and maintenance costs for three types of nuclear sites: operating sites with dry storage, decommissioned sites with dry storage, and decommissioned sites with wet storage. The uncertainty ranges for these costs were from plus or minus 5 percent to plus or minus 50 percent, depending on the specific cost variable.
- Repackaging costs, which include the costs for casks; construction of transfer facilities, site pools, and other needed infrastructure; and repackaging campaigns. Because these costs are first incurred after 100 years and then every 100 years thereafter, they are included only in the model scenarios covering more than 100 years. The uncertainty for these costs ranges from plus or minus 10 percent to plus or minus 50 percent, depending on the specific cost variable.
- Dry storage pad costs, including initial costs when dry storage is first established, as well as replacement costs. Because the replacement costs are first incurred after 100 years and then every 100 years thereafter, they are included only in the model scenarios covering more than 100 years. The costs of these pads, collectively referred to as independent spent fuel storage installations, include costs related to licensing, design, and construction of dry storage and have an uncertainty range of plus or minus 40 percent.

We developed the on-site storage model for 70,000 metric tons (Model IV) under the assumption that total existing and newly generated nuclear waste from the private sector and DOE will be 70,000 metric tons. The stream of new annual waste ready to be moved to dry storage will continue through 2030. The cost categories and uncertainty ranges assumed for this storage alternative are the same as those for the on-site model for storing 153,000 metric tons for 100 years.
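As a minimal illustration of how the plus-or-minus uncertainty ranges above enter the simulation, the following sketch maps a point estimate and an uncertainty percentage to a sampled distribution. The $30 million point estimate is a hypothetical placeholder, and the distribution shape is an assumption; the actual distributions specified in our Crystal Ball models may differ.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def sample_input(point_estimate, uncertainty_pct, trials=10_000):
    """Sample an input cost within plus or minus uncertainty_pct of its
    point estimate, using a triangular distribution centered on the
    estimate (an assumed shape for illustration)."""
    low = point_estimate * (1 - uncertainty_pct)
    high = point_estimate * (1 + uncertainty_pct)
    return rng.triangular(low, point_estimate, high, size=trials)

# Hypothetical example: a $30 million storage installation with the
# plus or minus 40 percent uncertainty range noted above.
samples = sample_input(30e6, 0.40)
p5, p95 = np.percentile(samples, [5, 95])
print(f"Sampled 5th-95th percentile: ${p5 / 1e6:.0f}M to ${p95 / 1e6:.0f}M")
```

Inputs with wider stated uncertainty, such as plus or minus 50 percent, simply yield wider sampled ranges and therefore contribute more spread to the total cost distribution.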
For two scenarios, we assumed that at the end of 100 years the nuclear waste would be transferred to a permanent repository for disposal. To estimate the cost for a repository, we used DOE's cost data for the Yucca Mountain repository and made three adjustments to ensure compatibility with costs generated by our models. First, we included only DOE's future cost estimates for the Yucca Mountain repository. Second, because DOE provided costs in 2008 constant dollars, we converted all costs for the permanent repository to 2009 present value using corresponding ranges of interest rates as previously described in this appendix. Finally, we assumed that repository construction and operating costs would be incurred from 2098 to 2240 when we added these cost ranges to our alternatives after 100 years. Table 8 shows the results of our analysis for all scenarios. Figures 10 and 11 show ranges of total costs, as well as the probabilities for two selected scenarios. In the figures, each bar indicates a range of values for total cost and the height of each bar indicates the probability associated with those values. Figure 12 shows the present value of the total cost ranges of storing the nuclear waste on site over 2,000 years. The shaded areas indicate the probability that the values fall within the indicated ranges and are the result of combinations of uncertainties from a large number of input data. Specifically, we estimate that these costs could range from $34 billion to $225 billion over 500 years, from $41 billion to $548 billion over 1,000 years, and from $41 billion to $954 billion over 2,000 years, indicating a substantial level of uncertainty in making long-term cost projections. Our models are based on ranges of average costs for each major cost category that is applicable to the alternative under analysis. As a result, the costs do not reflect storage costs for any specific site. Since we did not attempt to capture specific characteristics of each site, our values for any cost factor, if applied to any specific site, are likely incorrect. Nevertheless, since we used ranges rather than single values for a wide range of cost inputs to the models, we expect that our cost range for each variable includes the true cost for any specific site. Moreover, we expect that the total cost point estimate for any scenario falls within the range of total costs we developed. Our models are designed to develop total cost ranges for each scenario within each alternative, regardless of who will pay or is legally responsible for the costs. Issues related to assignment of the costs and potentially responsible entities are discussed elsewhere in this report but are not incorporated into our ranges. Also, our cost ranges focus on actual expenditures that would be incurred over the period of analysis, do not assume a particular funding source, and do not necessarily represent costs to the federal government. Finally, because a number of cost categories are not included in our final estimated ranges, we cannot predict their impact on our final cost ranges. For example, we did not include (1) decontamination and decommissioning costs for existing facilities or facilities yet to be built within each scenario and (2) estimates for local and state taxes or fees, which would be required to establish new sites or for continued operation of on-site storage facilities after nuclear reactors are decommissioned. Table 8 and figures 10 and 11 present the results of our analysis by individual scenario.
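The cost-range bars and probabilities described for figures 10 through 12 can be reproduced from the simulated totals by binning them, as in this short sketch. The `totals` array here is stand-in data playing the role of the output of a run like the one sketched earlier, not actual results.

```python
import numpy as np

def cost_range_bars(totals, bins=20):
    """Bin simulated total costs into ranges; each bar's height is the
    share of trials falling in that range, as in figures 10 and 11."""
    counts, edges = np.histogram(totals, bins=bins)
    probs = counts / counts.sum()
    return list(zip(edges[:-1], edges[1:], probs))

# Stand-in data; report a 90 percent band of total cost across trials.
totals = np.random.default_rng(3).lognormal(mean=4.0, sigma=0.5, size=10_000)
low, high = np.percentile(totals, [5, 95])
print(f"90% of trials fall between {low:.1f} and {high:.1f}")
```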
Because the purpose of our analysis was primarily to provide cost ranges for various nuclear waste management alternatives, we did not attempt to provide a comparison of results across scenarios. For a number of reasons, we believe such a comparison would have been misleading. The alternatives we considered differ inherently in a large number of characteristics that either could not be captured in our modeling work or were not within the scope of our analysis. For example, differences in the safety, health, and environmental effects of these alternatives, as well as in their ease of implementation, should have an integral role in the policy debate on waste management decisions. However, because these effects cannot be readily quantified, they were outside the scope of our modeling work and are not reflected in the total cost ranges we generated. In addition to the individual named above, Richard Cheston, Assistant Director; Robert Sánchez; Ryan Gottschall; Carol Henn; Anne Hobson; Anne Rhodes-Kline; Mehrzad Nadji; Omari Norman; and Benjamin Shouse made key contributions to this report. Also contributing to this report were Nancy Kingsbury, Karen Keegan, and Timothy Persons.
High-level nuclear waste, one of the nation's most hazardous substances, is accumulating at 80 sites in 35 states. The United States has generated 70,000 metric tons of nuclear waste and is expected to generate 153,000 metric tons by 2055. The Nuclear Waste Policy Act of 1982, as amended, requires the Department of Energy (DOE) to dispose of the waste in a geologic repository at Yucca Mountain, about 100 miles northwest of Las Vegas, Nevada. However, the repository is more than a decade behind schedule, and the nuclear waste generally remains at the commercial nuclear reactor sites and DOE sites where it was generated. This report examines the key attributes, challenges, and costs of the Yucca Mountain repository and the two principal alternatives to a repository that nuclear waste management experts identified: storing the nuclear waste at two centralized locations and continuing to store the waste on site where it was generated. GAO developed models of total cost ranges for each alternative using component cost estimates provided by the nuclear waste management experts. However, GAO did not compare these alternatives because of significant differences in their inherent characteristics that could not be quantified. The Yucca Mountain repository is designed to provide a permanent solution for managing nuclear waste, minimize the uncertainty of future waste safety, and enable DOE to begin fulfilling its legal obligation, which began in 1998, under the Nuclear Waste Policy Act to take custody of commercial waste. However, project delays have led to utility lawsuits that DOE estimates are costing taxpayers about $12.3 billion in damages through 2020 and could cost $500 million per year after 2020, though the outcome of pending litigation may affect the government's total liability. Also, the administration has announced plans to terminate Yucca Mountain and seek alternatives. Even if DOE continues the program, it must obtain a Nuclear Regulatory Commission construction and operations license, a process likely to be delayed by budget shortfalls. GAO's analysis of DOE's cost projections found that a repository to dispose of 153,000 metric tons would cost from $41 billion to $67 billion (in 2009 present value) over a 143-year period until the repository is closed. Nuclear power rate payers would pay about 80 percent of these costs, and taxpayers would pay about 20 percent. Centralized storage at two locations provides an alternative that could be implemented within 10 to 30 years, allowing more time to consider final disposal options, nuclear waste to be removed from decommissioned reactor sites, and the government to take custody of commercial nuclear waste, saving billions of dollars in liabilities. However, DOE's statutory authority to provide centralized storage is uncertain, and finding a state willing to host a facility could be extremely challenging. In addition, centralized storage does not provide for final waste disposal, so much of the waste would be transported twice to reach its final destination. Using cost data from experts, GAO estimated the 2009 present value cost of centralized storage of 153,000 metric tons at the end of 100 years to range from $15 billion to $29 billion, increasing to between $23 billion and $81 billion with final geologic disposal. On-site storage would provide an alternative requiring little change from the status quo, but would face increasing challenges over time. It would also allow time for consideration of final disposal options.
The additional time in on-site storage would make the waste safer to handle, reducing risks when waste is transported for final disposal. However, the government is unlikely to take custody of the waste, especially at operating nuclear reactor sites, which could result in significant financial liabilities that would increase over time. Not taking custody could also intensify public opposition to spent fuel storage site renewals and reactor license extensions, particularly with no plan in place for final waste disposition. In addition, extended on-site storage could introduce possible risks to the safety and security of the waste as the storage systems degrade and the waste decays, potentially requiring new maintenance and security measures. Using cost data from experts, GAO estimated the 2009 present value cost of on-site storage of 153,000 metric tons at the end of 100 years to range from $13 billion to $34 billion, increasing to between $20 billion and $97 billion with final geologic disposal.
In 2000, an unprecedented number of delays and cancellations in commercial airline flights occurred. At 31 of the nation’s busiest airports, 28 percent of the domestic flights arrived late. Certain flights were almost always late; for example, in December 2000, 146 regularly scheduled flights were late 80 percent or more of the time. The percentage of delayed flights declined to 24 percent in the first 6 months of 2001. According to FAA and others, the decline likely reflected various factors, such as better weather, fewer flying passengers because of the economic slowdown, a strike that idled one carrier’s aircraft for several months, a reduced demand on the system, and actions taken to better manage the nation’s airways. The September 11 terrorist hijacking of four commercial airliners has further contributed to a drop in air passengers and scheduled flights, with major airlines cutting the number of flights by 20 percent or more and one carrier, Midway Airlines, ceasing operations entirely. Although recent events may have moved airport congestion off center stage as a major national issue, delays remain a pervasive problem, in part because of the interdependence of the nation’s airports. The effect of delays can quickly spread beyond those airports where delays tend to occur most often, such as New York’s La Guardia, Chicago O’Hare, Newark, and Atlanta Hartsfield. Delays at such airports, particularly those with large numbers of flights, can quickly create a “ripple” effect of delays that affect many airports across the country. For example, flights scheduled to take off for such airports may find themselves being held at the departing airport because there is no airspace to accommodate the flight. Similarly, an aircraft late in leaving the airport where delays are occurring may be late in arriving at its destination, thus delaying the departure time for the aircraft’s next flight. The September 11 attacks may also have added a new dimension to delays because the more thorough screening of airline passengers at ticket counters and security checkpoints now takes additional time. So far, FAA and airlines have addressed this issue by telling passengers to arrive earlier for their flights and to be prepared for longer processing times. Whether additional security will affect the timeliness of aircraft flights has yet to be determined. Delays have many causes, but weather is the most prevalent. Figures compiled by FAA indicate that weather causes about 70 percent of the delays each year. Apart from weather, the next main cause is lack of capacity—that is, the inability of the air transport system to handle the amount of traffic seeking to use it. Capacity can be measured in a variety of ways. At individual airports, one measure is the maximum number of safe takeoffs and landings that can be conducted in a given period, such as 15 minutes or 1 hour. FAA has established such a capacity benchmark at each of 31 of the nation’s busiest airports. FAA’s data on capacity and demand at these airports show that even in optimum weather conditions, 16 airports have at least three 15-minute periods each day when demand exceeds capacity. Weather and capacity problems are often linked, because bad weather can further erode capacity. For example, some airports have parallel runways that are too close together for simultaneous operations in bad weather. When weather worsens, only one of the two runways can be used at any given time, thereby reducing the number of aircraft that can take off and land.
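As a simple illustration of how such a benchmark comparison works, the sketch below flags the 15-minute periods of a day in which scheduled demand exceeds an airport's benchmark rate. The demand profile and benchmark value are invented for illustration, not FAA's figures.

```python
def overloaded_periods(demand, benchmark):
    """Indices of 15-minute periods where scheduled operations exceed the
    airport's benchmark capacity (maximum safe takeoffs and landings)."""
    return [i for i, d in enumerate(demand) if d > benchmark]

# 96 fifteen-minute periods in a day; hypothetical operations per period,
# with morning and evening peaks pushing past a benchmark of 24.
demand = [18] * 32 + [26] * 8 + [18] * 48 + [27] * 8
print(len(overloaded_periods(demand, benchmark=24)))  # count of overloaded periods
```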
FAA’s data show that in bad weather, 22 of the 31 airports have at least three 15-minute periods when demand exceeds capacity. Another measure of capacity, apart from the capacity of individual airports, is the number of aircraft that can be in a given portion of commercial airspace. For safe operations, aircraft must maintain certain distances from each other and remain within authorized airspace. If too many aircraft are trying to use the same airspace, some must wait, either on the ground or en route. FAA’s most recent long-term growth projections, which date from before the September 11 terrorist hijackings, anticipated considerable growth in demand for air travel. FAA projected that the number of passengers served by U.S. airlines would rise by more than 40 percent, to more than 1 billion annually by 2010. What effect, if any, the terrorist hijackings will have on long-term growth remains to be seen. To accommodate the increased number of passengers it was projecting, FAA expected airlines to increase the size of the total fleet by about 2,600 jets—an increase of about 50 percent. The fastest-growing segment of the fleet is expected to be smaller aircraft called regional jets—that is, jets with 32 to 70 seats but generally with ranges of 1,000 miles or more. As we have pointed out in previous work, published studies and experts indicate that the growing use of regional jets, in addition to the turbojet aircraft currently flying, has already added to congestion and delays, but the precise amount has not been determined and likely varies from airport to airport. Besides airlines, other parts of the aviation community are also likely to place increasing demands on the air traffic system. FAA expected increases of about 50 percent in the number of cargo aircraft and the number of smaller general aviation jets, such as corporate jets and jets operated by air taxi or charter services. Some industry analysts have suggested that in the wake of the terrorist hijackings, corporations may make increasing use of such jets, which often use the same airports as those used by airlines. All three groups that are most heavily involved in addressing delay-related problems—federal agencies, airlines, and airports—have important roles. As the agency in charge of the air traffic control system, FAA has the lead role in developing technological and other solutions to airspace issues. FAA and DOT are also important sources of funding. Through the Airport Improvement Program, FAA provided $1.95 billion in grants to airports in fiscal year 2000, and through its Facilities and Equipment appropriation, it pays for such things as improvements to the air traffic control system. FAA and the Office of the Secretary of Transportation (OST) monitor access rights to airports as well as the landing fees that airports can charge. FAA also grants authority for airports to levy passenger facility charges (PFC), which were a source of more than $1.5 billion in revenue for airports in calendar year 2000. Airlines and airports are also important decisionmakers and funding sources. For example, changes in air traffic control technology may require airlines to make substantial investments in new technology for their aircraft. However, the recently enacted $15 billion federal assistance package for the airline industry illustrates the airlines’ dire financial conditions, particularly after the events of September 11.
Accordingly, airlines may have a difficult time making investments in air traffic control technology for their aircraft. Similarly, while infrastructure improvements such as new runways often receive federal support, much of the funding is raised at the local level. Government, airlines, and airports have undertaken a wide range of initiatives to address flight delays and increase the capacity of the air transport system. The stakeholders we contacted—DOT and FAA, 8 airlines, and 18 of the most delay-prone airports—identified 158 separate initiatives under way. Earlier this year, FAA issued the Operational Evolution Plan (OEP), which is designed to give more focus to some of the diverse initiatives under way. FAA’s role, in addition to continuing to spearhead the initiatives for which it is directly responsible, is to act as overall coordinator for implementing the OEP. FAA believes that the OEP actions already implemented have contributed to the reduction in flight delays experienced in the first 6 months of 2001. Challenges still lie ahead in other areas, such as introducing new technology, adding new runways, funding billions of dollars of investment, and developing ways to help measure what the efforts are accomplishing. The initiatives cited by DOT and FAA, airlines, and airports include steps for addressing both weather-related and capacity-related delays. Considerable efforts were under way to address weather-related problems, since weather is the major cause of delays. For example, to deal with the problems arising from thunderstorms and other severe weather in the spring and summer, FAA launched a program called Spring/Summer. Among other things, this program led to daily telephone conference calls between FAA and airline officials to discuss weather and other conditions that might affect the smooth flow of air traffic. The program also led to a new effort to predict severe weather affecting aircraft. Examples of initiatives directly related to capacity included an individual airport’s plans to build new runways, taxiways, or gates; airlines’ efforts to adjust schedules to relieve congestion at some hubs; and FAA’s efforts to seek greater use of Canadian and military airspace. Some initiatives dealt with both weather and capacity. For example, FAA is testing a system that would allow the use of satellite navigation for landing approaches in all types of weather conditions. This system, if successful, will allow airports to operate at higher capacity in bad weather. To an extent, the initiatives begun by each of the three stakeholder groups have different emphases. FAA and DOT initiatives emphasize improving the ability to handle more aircraft in the air, airline initiatives emphasize making adjustments to airline operations, and airport initiatives emphasize increasing the capacity for more takeoffs and landings through more runways and other infrastructure. The initiatives that stakeholders cited are summarized briefly below; appendix II contains a detailed list of the initiatives and their status. DOT and FAA officials identified 29 initiatives under way at their agencies. These initiatives can be grouped into three categories—adding capacity to the system, identifying specific problems contributing to delays, and identifying ways to better manage and coordinate responses to delays. Table 1 provides examples of each category.
Some of these initiatives were completed, such as a benchmarking study to provide a better indication of the number of takeoffs and landings that can be supported at 31 of the busiest airports in the national airport system. However, most of the initiatives were ongoing or long-term projects. Some, such as reevaluating what is being done to deal with severe spring and summer weather, will be done annually or as needed. Longer term efforts include redesigning the airspace surrounding major metropolitan areas and developing technology that allows greater use of existing runways in low-visibility conditions. Initiatives identified by the eight airlines generally fell into one of three categories—scheduling, weather and dispatch, and testing of new technology. (See table 2 for examples.) In some cases, these initiatives were tied to those of other stakeholders. For example, the main technology-testing initiatives involved airline participation in the government initiatives previously discussed. Most of these initiatives were ongoing or long-term projects. The 18 delay-prone airports we contacted identified a wide range of initiatives that varied from airport to airport, reflecting such differences as the relative amount of congestion and the airport’s ability to add infrastructure. Although each airport had a different set of concerns regarding delays, the initiatives generally fell into three areas: new runways and taxiways, terminals and gate space, and new technology to promote efficient use of the airport. (See table 3 for examples.) As with initiatives for the two other stakeholder groups, most of these projects were still in process when we completed our review. FAA designed the OEP to provide a more focused and more coordinated approach to congestion and delay problems. The previously described initiatives were generally begun independently rather than as a collaborative response to a systemwide problem. Although FAA previously had made efforts to develop more coordination and cooperation among the stakeholder groups, the OEP was FAA’s attempt to align its activities with those of other stakeholder groups using such approaches as collaborative decisionmaking, specific timelines for completing actions, and designation of accountability. The OEP does not replace or eliminate the previously described initiatives; rather, it incorporates many of them into “operational solutions” designed to address specific goals. Responsibility for the various actions is still shared among the various segments of the aviation community. As the overall coordinator for this effort, FAA faces challenges in ensuring a consistent funding stream for the federal government’s portion of the activities and developing performance measures that will help gauge the extent to which these operational solutions are reducing delays. The OEP focuses on four goals, each with a set of operational solutions. The four goals and the types of operational solutions included for each goal are as follows: Increasing arrival and departure rates. Increasing the number of flight arrivals and departures during a given period is an effort to keep pace with demand at many key airports. Fifteen of the nation’s busiest airports suffer from insufficient capacity to meet peak demands, according to FAA. The plan proposes seven solutions to increase the arrival and departure rate, including building new runways and coordinating efficient surface movement. Increasing flexibility in the en route environment. 
This goal is aimed at easing congestion in the air and providing more operating flexibility for pilots. En route congestion occurs, according to FAA, because routes are tied to ground-based navigational aids, controller workloads are limited by manual monitoring of aircraft, and current aircraft separation standards do not account for advances in aircraft capability. The plan proposes eight solutions, including reducing aircraft separation; working collaboratively with users to manage congestion; and providing access to additional airspace, such as military operating areas. Increasing flexibility en route during severe weather. Thunderstorm activity—especially around busy airports—can cause problems for aircraft that are en route. The inability to predict the precise location, movement, and severity of hazardous weather can hamper air traffic managers and pilots alike. Improved equipment and procedures could better pinpoint weather characteristics and their impacts and lead to improved flight management and ultimately fewer delays. The plan proposes solutions to provide better hazardous weather data and to respond effectively to hazardous weather. Maintaining airport arrival and departure rates in all weather conditions. A significant portion of delays occur when local airport weather reduces arrival and departure rates. The plan calls for maintaining a constant rate of aircraft arrivals and departures, regardless of weather conditions. To meet this goal, the plan proposes such solutions as reconfiguring runways, developing ways to safely space aircraft closer together, and maintaining runway use in reduced visibility. The OEP’s operational solutions incorporate most of the separate initiatives identified by the stakeholder groups. FAA officials emphasized that the OEP is subject to change, including revisions as a result of the September 11 terrorist activities. The OEP’s operational solutions do not include all types of actions that have been advanced as possible solutions to the delay problem. FAA acknowledged that the OEP was not meant to be an end-all that would solve all delay problems, but was instead a more limited document dealing with near-term operational solutions. The solutions included in the OEP have widespread support across stakeholder groups and do not include any initiatives for which FAA could not obtain consensus from key aviation stakeholders. In addition, FAA specifically limited the types of measures included in the OEP to those that (1) will add new capacity and (2) can be implemented within 10 years. For example, the OEP’s operational solutions include new runways that airports like Seattle-Tacoma and Lambert-St. Louis currently expect to complete by 2010. The OEP does not include all measures that have been advanced as possible solutions to the delay problem, such as new airports or high-speed ground transportation alternatives. The OEP also does not include administrative, regulatory, or market-based approaches that are largely for the purpose of managing existing capacity more efficiently, such as setting limits on the number of flights that could be flown to and from specific airports. FAA has made a good start in developing the OEP and in taking the initial efforts to implement it. FAA followed a highly collaborative process in developing the plan. It encouraged input from stakeholders in a variety of ways, circulated drafts to various segments of the industry for comment, and revised those drafts to reflect the comments received. 
The final plan, issued in June 2001, establishes timelines for individual components of the plan and includes actions and decisions required by the different stakeholders. Lines of accountability have also been established within FAA. For example, a team of senior FAA personnel, chaired by the Acting Deputy Administrator, is to lead the implementation and be responsible for setting priorities, monitoring benefits and methods for measuring improvements, and engaging the aviation community leaders in key decisions. FAA officials believed that actions under way were already having an effect on reducing delays. During the first 6 months of 2001, 24 percent of major airlines’ flights arrived 15 minutes or more after their scheduled arrival at 31 of the nation’s busiest airports, compared with 28 percent during the first 6 months of 2000. FAA officials believe that a combination of factors is responsible for this drop, including much more favorable weather conditions during the spring of 2001. They also cited the Spring/Summer initiative, which addresses weather issues resulting from spring and summer storms, as an example of a collaborative effort among airlines and various FAA organizations that helped reduce the amount of delays. Another effort they cited was the choke-points initiative, under which FAA made aircraft routing changes, added technology, changed procedures, and modified traffic management strategies to reduce the impact of congestion in seven highly congested areas in the national airspace system. Many of the actions included in the OEP, including those that will add the most capacity, are still under way. Security and other concerns raised in light of the September terrorist attacks may have some effect on the initiatives. For example, initiatives allowing pilots greater flexibility in determining their route of flight or in using restricted military airspace will be affected by increased security concerns. Apart from concerns raised over the terrorist attacks, FAA and other stakeholders face the following challenges on several key fronts in implementing the actions in the OEP: Introducing new technology. A number of the OEP’s efforts center on introducing new technology to allow aircraft to take off, travel, and land more closely together. For example, FAA is testing a satellite navigation system that would allow for instrument landings in all weather conditions. Our past reviews have shown that over the past two decades, FAA has encountered numerous problems in introducing new technologies, with many projects running years behind schedule. Because of the size, complexity, cost, and problem-plagued past of FAA’s modernization programs, we have designated these programs as a high-risk information technology investment since 1995. The continued risks are sizable, in part because many technology-related projects under the OEP are still a number of years from being fully developed and will need to be integrated with existing technology. For example, we recently reported that FAA will face a technical challenge in ensuring that the components of its Free Flight initiative can work with other air traffic control systems. Overcoming barriers to building new runways. FAA estimates that 50 to 55 percent of total capacity to be added under the OEP will come from runway projects at 15 of the nation’s 31 busiest airports, such as Detroit, Minneapolis, St. Louis, and Atlanta.
Six of these runways are currently under construction; the rest are in some stage of the planning, design, and review process. The process of planning and building a runway typically takes 10 years under the best of circumstances, and some of the projects still face legal challenges from local groups opposed to the projects because of environmental and other concerns. Obtaining sufficient funding. Successful implementation of actions included in the OEP hinges on the availability of funding from several sources, including FAA, airlines, and airports. The full cost of the OEP is unknown. FAA estimates that over the period of 2001 to 2010, its portion of the cost will be about $88.5 billion—$11.5 billion in federal funding for facilities and equipment, and $77 billion in operations to deliver services. To help make this funding available, FAA officials told us they were adjusting priorities and developing future budget requests around the plan. Other significant funding will need to come from airlines and airports. For example, before benefits of new air traffic control technology can be fully realized, aircraft must receive new equipment. As the recent economic slowdown and the terrorist attacks have shown, the airline industry is subject to periods of profit and loss. If new equipment comes on-line at a time when airlines think they cannot afford to buy it, the planned benefit may not materialize. Similarly, infrastructure projects at airports usually require a substantial amount of local funding. Adding a runway at a major metropolitan airport, for example, could cost $1 billion or more, only part of which is federally funded. In the wake of the terrorist attacks, some airports have already begun to reevaluate expansion plans and capital expenditures, reportedly in response to concerns about increased expenditures for security and declining airline and passenger fees to pay for improvements. Establishing accountability through performance indicators. The OEP recognizes that, along with designating who is to be responsible for each action, performance indicators are needed to assess what the action is accomplishing. For example, under the Free Flight initiative, FAA has established direct routings as one performance indicator and set a goal of increasing these routings by 15 percent in the first year of implementation. At this early stage of the OEP, FAA is still in the process of developing most performance indicators. Having sound performance indicators is of particular importance if funding becomes limited, because these indicators can help determine which actions are likely to yield the best results for the dollars expended and where to redirect resources should doing so become necessary. If fully implemented, the actions to be taken under the OEP will add substantially to the system’s capacity but are unlikely to keep delays from rising again unless air traffic remains at substantially lower levels than anticipated over the long term. If the industry rebounds to the point that FAA’s earlier projections about air traffic growth turn out to be correct, many of the busiest airports will be unable to keep pace with rising demand, even with their increased capacity. If the recovery is less robust, the system still will have difficulty because a number of delay-prone airports have limited ability to expand their capacity to meet even modest increases in demand. Many of the most delay-prone airports have already run out of room for adding other runways or will soon run out of room to do so. 
These delay-prone airports cause delays that ripple throughout the system. If problems at these airports are not alleviated, this ripple effect will continue, causing delays at airports that may have addressed their own capacity problems. Finally, competitive pressures within the airline industry may still lead airlines to continue using operations strategies that are vulnerable to delays. These pressures currently motivate airlines to schedule flights that fully use available air transport system capacity during those times of day in which they perceive consumers most want to fly. At delay-prone Newark International Airport, for example, after one airline recently decided to reduce schedule delays by trimming the number of peak-hour flights, rival airlines quickly responded by adding more peak-hour flights of their own. Even if all OEP actions are successfully completed, key airports in the system will likely lose ground in their ability to meet demand. Under the growth projections made before the terrorist hijackings, FAA forecasted that between 2001 and 2010, demand would increase faster than capacity at 20 of the nation’s 31 busiest airports. For these airports, the ability to make significant headway in adding capacity is primarily related to one factor—adding a runway. FAA estimates that the 14 airports adding a runway by 2010 will see capacity increases averaging 34.9 percent. By contrast, the 16 airports not adding a runway will see a capacity increase averaging 6.3 percent. FAA expects that at least half of the capacity gain from OEP initiatives will come from the new runways included in the plan. Some industry sources have suggested that even more runways should be built by 2010, saying that 50 miles of new runways at the top 25 delay-prone airports—the equivalent of 1 runway at each airport—would solve the system’s capacity problems. Airport stakeholder groups are calling for streamlining the procedures and reducing the time necessary for approving runways, a process that now takes at least 10 years from planning to completion. Proposed legislation has been introduced in the Congress to help shorten this process. Relying on adding runways to increase capacity at busy metropolitan airports, however, will likely have a limited effect over the long term. Some airports can accommodate additional runways, but many cannot. Denver International Airport is an example of a location with substantial expansion potential. Located in a sparsely populated area away from the metropolitan area, the airport has ample room to add capacity. The airport is currently building a new 16,000-foot runway to add to its five existing runways and can accommodate six more runways in its present configuration. By contrast, other airports, such as Los Angeles, Washington Reagan National, La Guardia, and San Francisco have little capacity to expand and would find it difficult to build even one more runway, either because they lack the space or because they would face intense opposition from adjacent communities. For this reason, many airports will likely face delay problems even if demand turns out to be much lower than FAA projected. Of particular concern are key delay-prone airports—that is, those airports that experience the highest number of delays per 1,000 flight operations (takeoffs and landings). The seven airports that experienced the highest rate of delays in 2000 are shown in table 4.
Among these, Chicago O’Hare indicates that it can add another runway, although it too faces intense opposition if it attempts to do so. FAA’s April 2001 Benchmarking Study concluded that of these seven airports, all but Boston Logan would still have significant passenger delays in 2010, largely because the gains in capacity during this decade will be relatively low. For example, according to FAA projections, the three New York airports—La Guardia, Newark, and Kennedy—will experience relatively small capacity gains during this decade—just 7 percent for Newark and 3 percent each for the other two airports. Even for airports where a runway addition is possible, other factors make that alternative less desirable. Cost is one such factor. Some airports are surrounded by development that is extremely difficult and expensive to displace. For example, a new 9,000-foot runway currently under construction at St. Louis-Lambert Field will cost an estimated $1.1 billion, in large part due to the required displacement of over 2,000 homes, businesses, churches, and schools around the airport. Similarly, a new 9,000-foot runway under way at Atlanta Hartsfield will cost an estimated $1.3 billion, again largely due to the costs of relocating structures and highways. By contrast, the new 16,000-foot runway at Denver—where ample open land is available—will cost just $171 million. Another factor is the expansion potential over the longer term. Even if many airports like Atlanta Hartsfield, Chicago O’Hare, and St. Louis-Lambert Field are able to add another runway or reconfigure existing ones, continued growth in air traffic would mean that the airports would need to expand once again. At some point, these locations will have to consider other alternatives because the cost of adding another runway will be too expensive and environmentally unacceptable. For those locations where capacity is constrained and options to add runways are limited or nonexistent, that time has already come. Because the airports in the national system are so interdependent, continued shortfalls in capacity at key airports over the long term will likely perpetuate the delay problem throughout the entire system. The system’s interdependency comes from the hub-and-spoke routing pattern under which most airlines operate. Under this pattern, airlines schedule many flights to arrive at one airport (the hub) from other cities on their network (the spokes) during a short period of time. While the aircraft are on the ground, passengers transported to the hub connect to flights going to their final destination. These groups of arrivals and departures happen several times a day. This approach allows an airline to serve more cities than it could through a “point-to-point” approach that does not use a hub as a transfer point. The interdependency inherent in this hub-and-spoke approach sets up a ripple effect in which delays at a hub can quickly affect not only flights to and from that airport, but also flights throughout the entire network. This ripple effect is illustrated by a scenario that is based on actual operations reported by FAA’s research and development center. In the scenario, a demand/capacity imbalance at Newark International Airport resulted in a backup of five aircraft trying to land at the airport. These aircraft had to be kept in holding patterns above the airport until they could land.
Because of the backup, FAA’s New York en route center (which controls air traffic going in and out of Newark and other area airports) notified the adjoining Cleveland en route center that it could not accept more aircraft bound for Newark until the aircraft in holding patterns around Newark were able to land. As flights began to back up, many aircraft were affected, whether or not they had Newark as their specific destination, because they were also seeking to use part of the backed-up airspace. Within 20 minutes, the delay in landing these 5 planes at Newark affected as many as 250 flights, some as far away as the West Coast. Thus, continued difficulties at some hubs can have repercussions at airports that have successfully addressed their own local capacity problems. Phoenix Sky Harbor International Airport offers a good example. In 2000, Phoenix put an additional runway into service, and the airport now has sufficient capacity to allow flights to take off on time. However, the airport ranks among the top 15 in the United States for flight delays. According to airport officials, most of the delays at Phoenix are the result of delays and cancellations at other airports—circumstances unrelated to the capacity at Phoenix. Competition in the airline industry is another factor that may limit the effect that new capacity will have on reducing delays. Competition may have such an impact because it encourages airlines to take maximum advantage of capacity during the times that offer the greatest advantage. Capacity at an airport is relatively constant throughout the day because the airport theoretically can handle the same number of takeoffs and landings each hour. However, airlines are generally motivated not to stretch out their schedules throughout the day, but rather to concentrate their operations in certain peak periods. One reason airlines follow this practice is that they establish schedules that try to maximize what they perceive consumers want, such as flights that leave early and late in the business day. Another reason airlines follow this practice is that in order to conduct efficient hub-and-spoke operations, they try to schedule as many flights as possible to arrive at the hub airport at about the same time and then to depart at about the same time a short while later. By doing so, they minimize the amount of time that transferring passengers have to spend waiting for their connecting flights. There are ample illustrations of the ways in which these competitive pressures lead airlines to make decisions that can potentially worsen delay problems, rather than reduce them. For example: When the opportunity came to submit applications for new flights operating in and out of La Guardia Airport, an airport that has had delay problems for years, airlines submitted proposals to add more than 600 flights. Airline officials said they did so because of consumer demand for service to and from New York. To help reduce delays at Newark International Airport, Continental Airlines began using larger aircraft on some routes, allowing the airline to reduce the number of scheduled flights. However, several other airlines soon filled the vacated slots with flights of their own. As Continental Airlines did in Newark, United Airlines began using larger aircraft and scheduling fewer flights to help address persistent delays in San Francisco. Here, too, other airlines soon filled the vacated slots. 
Airlines make their decisions after considering many factors, so examples such as these cannot be taken as clear signals of what they will choose to do in the future, especially during the current slowdown in passenger demand. However, one scenario that must be considered is that these competitive pressures will quickly fill any openings that are considered to be economically advantageous. In this sense, the added capacity may mirror what transportation engineers and the traveling public have often noted about adding new highways in congested areas—that is, the additional capacity quickly induces more people to drive, thereby leaving traffic conditions little better than they were before. Because OEP actions will likely not be sufficient on their own to resolve the delay problem over the long term, aviation stakeholders and policymakers will likely have to consider additional measures to enhance capacity and alleviate delays. A range of other measures is available, such as building new airports or developing alternative ground transportation systems. These measures are not new, but they have received rather limited attention relative to incremental steps that are being taken, largely because they require more extensive change that could conflict with the interests of one or more key stakeholder groups, such as airlines or local communities. Some of these measures, such as transportation alternatives like high-speed rail, may have become more viable in light of security and other considerations stemming from the recent terrorist hijackings. With the rising need for considering these measures, the Congress and DOT will need to assume a central role in identifying which measures are most appropriate for given situations, framing the discussion about them, and moving forward with the best solutions. Other measures—not now part of the OEP—exist as potential solutions to alleviate delays. These measures, which have been cited by various researchers and policy organizations over the last decade, basically fall into three categories. The first category involves various other measures for adding airport infrastructure besides adding runways to existing airports, such as building new airports or using nearby underdeveloped regional airports. The second category involves approaches to better manage and distribute air traffic demand within the system’s existing capacity. These include administrative and regulatory actions, such as limiting the number of takeoffs and landings during peak traffic periods or restricting the types of aircraft allowed to land, and market-based approaches, such as charging aircraft higher fees to land at peak times than at slack times. The third category includes developing alternative modes of intercity travel other than air transportation, such as high-speed rail. Table 5 provides a brief explanation of each of these measures, and appendix III contains more detailed information on each measure. The applicability of any particular measure is likely to vary by location, considering the circumstances at each major airport. There is no “one-size-fits-all” solution; rather, substantially reducing delays will probably require a combination of measures spread out over time. For example, the airspace surrounding the greater New York metropolitan area is perhaps the most congested airspace in the nation.
The three major airports in the area (La Guardia, Newark, and Kennedy), which currently are among the nation’s most delay-prone airports, are expected to experience substantial air traffic growth during this decade. But these airports have very limited expansion potential, largely because they cannot realistically build new runways. Building new airports or developing regional airports to serve the area may be long-term solutions, but they will likely take many years to materialize. In the meantime, other short-term measures would need to be considered as passenger demand increases, such as ways to use existing facilities more efficiently. This is the direction that FAA and the New York/New Jersey Port Authority, which owns and operates the three area airports, were moving before the drop in passenger demand following the events of September 11. FAA and the Port Authority had been considering market-based and administrative approaches for La Guardia but have temporarily suspended deliberations on this issue. Because major airports in other locations may face different circumstances than the New York airports face, they may need an entirely different set of solutions to address flight delays. While these other measures may hold promise for addressing capacity problems, adopting any of them is likely to be a more daunting challenge than implementing initiatives in the OEP. Accomplishing the OEP’s initiatives will not be easy, but the opportunity for success is enhanced because FAA has the support of major aviation stakeholders on nearly all of the initiatives. By contrast, gaining consensus on any of these other measures will be much more difficult because they change the nature of the system to the degree that each one could adversely affect the interests of one or more key aviation stakeholder groups, including passengers; air carriers and aircraft operators; airports; and local communities. For example: Large infrastructure projects, such as new airports that are located in metropolitan areas, could create major controversy. Such projects are often opposed by adjacent communities that are fearful of noise, displacement, or other environmental concerns. Also, finding suitable sites for such projects in crowded metropolitan areas—with enough land that is compatible with other potential land uses—may be difficult. Airlines may oppose some types of infrastructure projects if they fear that the projects would adversely affect them. For example, an airline with a dominant market position at a major hub airport may oppose building an additional airport nearby because the dominant carrier may view it as an opportunity for its competitors to enter the market in that area. Administrative, regulatory, and other measures for managing the demand for existing capacity could generate opposition from various sources as well. Airlines may oppose such measures if they perceive that these measures would restrict their choices in determining rates, schedules, and aircraft sizes—all of which could affect their profits and competitive status relative to other airlines. Smaller communities may also oppose such measures, fearing that commercial air service to and from their airports may be reduced or curtailed because airlines would react by choosing more profitable routes for the limited number of airport slots available. Cost, a factor to be weighed in adding runways to existing airports, is also an important consideration when building a new airport.
For example, the last major new airport—the Denver International Airport completed in 1995—cost almost $5 billion to build. This cost would have been greater had the airport been located closer to the city, but since it was located on open land away from established communities, the costs of noise mitigation and other land-use issues were minimized. Also, the construction of fast-rail service in populated metropolitan corridors is likely to be costly. For example, Amtrak estimates the cost to construct fast-rail service in federally designated, high-speed corridors and the Northeast Corridor of the United States will be about $50 billion to $70 billion. Although these measures for the most part have not received widespread consideration, some have come into play in limited situations. Where this has been the case, the wide disagreement among stakeholders regarding the best course of action illustrates the extent of controversy that can be present in weighing the various measures. Here are several examples: In Chicago, where additional airport capacity has been under consideration for years, an intense debate has ensued regarding whether to build a new airport south of Chicago or add runways to O’Hare, which is located in an area of dense development. The city, which owns and operates O’Hare, recently unveiled a $6.3 billion plan that includes adding and relocating runways. The two dominant airlines at O’Hare—United and American—and several congressional members favor this plan. Illinois, several communities adjacent to O’Hare, and other congressional members oppose the additional runways at O’Hare due to environmental and land-use concerns and instead favor building a new airport at Peotone, Illinois, located about 35 miles southwest of downtown Chicago. Atlanta is planning a $5.5 billion upgrade to Hartsfield International Airport, including adding a fifth runway at a cost of about $1.3 billion. The airport is constrained by adjacent highways and development, making modifications expensive. At a recent national meeting of airport executives, Atlanta’s Aviation General Manager for Hartsfield Airport was asked why a new airport located north of the city—on a large tract of land outside of Atlanta that is already owned by the city—was not considered more seriously as an alternative to the expansion project. He cited the unlikelihood of financial backing from the airport’s dominant carrier—Delta Airlines—as the major barrier to considering an option other than adding capacity at Hartsfield. In Los Angeles, the master plan for the Los Angeles International Airport calls for (1) reconfiguring and extending its runways and adding taxiways to increase capacity and (2) shifting a larger percentage of the area’s air traffic to surrounding regional airports, such as Orange County’s John Wayne Airport, Ontario, and Burbank-Glendale. The city also proposes high-speed rail service from Los Angeles International to facilitate the use of surrounding airports. Local officials and several Members of Congress favor no expansion at Los Angeles International and shifting even more flights to the outlying airports. At the same time, the outlying airports must overcome existing limitations. For example, the terminal at Burbank-Glendale does not meet FAA standards (too close to the runway) and needs to be replaced, but city officials in Burbank have indicated they will oppose a new terminal. The Ontario Airport is limited by the state to 125,000 operations annually.
Also, significant interest has been shown in using the former Marine Corps Air Station at El Toro, but its use has been opposed by local factions because of noise and other concerns; FAA and others also have concerns about the runway configuration there because of mountainous terrain around the airport. Lambert Field in St. Louis is undertaking a major runway project, which—at $1.1 billion—is one of the most costly runway projects of any currently under way nationwide. Mid-America Airport—which the federal government has spent about $216 million to develop over the last decade—is located about 24 miles from St. Louis, has modest but new terminal facilities, and has two runways (8,000 feet and 10,000 feet) capable of accommodating the largest aircraft in operation today. The only airline serving the airport in 2001 discontinued service at Mid-America in early December 2001. American Airlines, which has a major hub in St. Louis, supports the runway expansion project at Lambert, rather than using the facilities at Mid-America. Although consideration of these other measures is likely to be controversial, developments stemming from the September terrorist attacks may make some of them more viable. For example, a shift in public opinion in favor of ground transportation for relatively short trips (150 to 300 miles) may make high-speed rail a more viable option for some high-density corridors, despite the cost and the dislocation it would bring for communities where new, better rail lines would need to be built. Similarly, the need for greater security controls on air traffic flying in sensitive locations, such as Washington, D.C., and New York City, may increase support for some administrative solutions, such as limiting the extent to which corporate jets and other general aviation aircraft can use airports that are already crowded because of commercial airline flights. In 2000, smaller general aviation aircraft and unscheduled air taxi service accounted for about 44 percent of the air traffic at Washington Reagan National Airport and about one-third of all traffic at La Guardia. If satisfactory progress in addressing airline delays could be made through the initiatives in the OEP, the existing federal effort, spearheaded largely by FAA, might be sufficient. However, needed solutions, both short and long term, appear likely to include measures not included in the OEP. Because these measures are more controversial and include modes of transportation other than aviation, the federal government—particularly DOT—will need to take an expanded role. DOT has recognized the need for more long-range strategic planning on air transport system issues and has efforts under way to address this need. For the most part, these efforts are currently on hold in the aftermath of the September 11 terrorist attacks because FAA has focused its immediate efforts on other matters. One effort that continues, however, began in mid-2001 when DOT’s Deputy Secretary convened a working group, composed of senior officials within the Department, to address aviation congestion, delays, and competition issues. Specific goals, responsibilities, and the scope of the working group were still being developed. On August 21, 2001, FAA and OST began another effort when they published in the Federal Register a request for comments on market-based solutions for relieving flight congestion and delays. This request is part of a DOT effort to collect data and conduct an analysis of market-based pricing at airports.
The request asked respondents to set aside consideration of the current legal framework in suggesting ways that demand management might be used as one component of a delay-reducing strategy. The comment period for this notice was to have closed on November 19, 2001. However, given the decline in air traffic after September 11, DOT has suspended the closing date for comments. Once DOT has a better understanding of the long-term impact of the events of September 11, it will publish a new closing date.

Although actions like these are positive steps toward alleviating airport congestion and flight delays, what is still missing is a long-term plan or blueprint to guide the development of the entire national air transport system. Various researchers and policy organizations have suggested the need for such a plan and have recommended that it involve several critical steps, including the following:

- A thorough assessment of all potential measures and their applicability to the various circumstances and needs of each region. The advantages and disadvantages of each measure and the barriers to implementation would be clearly delineated.
- Close collaboration among airlines, airports, and other key stakeholder groups.
- Legislative, regulatory, and administrative actions needed to implement the plan.
- An innovative investment strategy, including the federal incentives and leverage needed to encourage the use of recommended measures.

Choosing many of the measures is the prerogative of local governments, airports, and airlines, but the federal government can influence the stakeholders' decisionmaking through a variety of financial, administrative, and regulatory means. For example, although average aircraft size is determined by individual airlines, the government can help shape these decisions by allowing changes in landing fees and airport restrictions at selected locations to encourage the use of larger aircraft at crowded airports or to encourage smaller aircraft to use nearby airports that have excess capacity. Similarly, the federal government can provide additional funding for targeted options, such as enhancing reliever airports, or make financing of airport infrastructure contingent on stakeholders' support of other options deemed beneficial.

To date, few of these elements have been included in DOT's planning efforts. Except for the effort to study market-based solutions for relieving delays, DOT at this time does not have plans to perform detailed analyses of other potential solutions, such as new airports and alternative ground transportation, in the context of a strategy for increasing national airspace capacity. Such analyses are a critical prerequisite to developing a blueprint for guiding the development of the air transport system, according to others who have studied this area. Also, the direction and planned outcome of DOT's strategic planning efforts are unclear. DOT has not decided, for example, whether—as part of its strategic planning—to develop a blueprint of the potential measures needed to address capacity needs in specific locations (e.g., a set of measures for addressing problems in the crowded Northeast or long-range alternatives in locations where incremental additions to existing airports are growing more limited).

FAA's Operational Evolution Plan is a positive step in addressing needed capacity-enhancing actions.
But if the recent economic slump and the challenges posed by the September 11 terrorist attacks turn out to be only a temporary pause in the growth of air traffic, the plan will fall far short of meeting the system's growing needs. Unless passenger traffic remains at the current reduced levels over the long term, which seems unlikely, bolder, more controversial measures—such as new airports and administrative and market-based approaches—will have to be considered. Exploring such measures is important because many of the nation's key airports cannot significantly add to their capacity. Eventually, even airports that currently have enough capacity, or that can perhaps add a runway to increase capacity, will have to consider other measures such as these. While the nation's attention is now justifiably focused on many other issues of aviation safety and security, now is also a good time to begin laying the groundwork for considering these additional delay-reducing measures. The current drop in air traffic represents an opportunity to develop plans for keeping the air transport system ahead of the curve of potential future growth. A carefully considered blueprint is needed to guide actions over the next 20 years and beyond.

Selecting a set of measures to solve the nation's flight delay problem involves difficult choices with considerable impact on the interests of the various stakeholder groups—the flying public, airlines, airports, and nearby communities. In addition, because of the interdependence of airports in the system, a national perspective is needed—one that considers the needs of the entire system while also considering the individual needs and circumstances of various locations. For some parts of the country, these unique needs and circumstances may require considering intermodal solutions, such as high-speed rail as an alternative to air travel. DOT and the Congress both have key roles to play in bringing about the changes needed to sustain a safe, sound, properly managed, and affordable air transport system. Because of the breadth of its management of all transportation modes, DOT is in a unique position to lead this effort. DOT's recent efforts are a start toward such strategic planning, but additional steps will be needed to provide the kind of blueprint necessary for the future. DOT needs to work closely with the Congress in formulating its approach, because ultimately the Congress may have to make difficult choices that will please some stakeholders and displease others. Now is the time to begin these efforts in earnest.

We recommend that the Secretary of Transportation include the following as part of DOT's current strategic planning for airspace capacity:

- An evaluation of the capacity-enhancing measures (including the measures we discuss in this report) that are not in the OEP, such as building new airports, managing air traffic demand, and using other modes identified for increasing capacity. The evaluation should be done in the context of the situations or locations where such options would be most applicable, considering key airport characteristics, circumstances, and expansion potential. Barriers and potential legislative actions should be delineated for each measure.
- Collaboration and discussions—similar to the efforts made in formulating the OEP—on prospective measures with airlines, airports, and other key players in the aviation community.
- A blueprint for effectively addressing capacity issues and reducing delays in the nation's air transport system. This blueprint, which would guide the future development of the system, should address both short-term (less than 10 years) and long-term (10 to 40 years) measures and identify the specific measures applicable to each critical location as a means of achieving a viable national system. Where necessary, the blueprint should also consider addressing aviation delay problems through other modes of transportation, such as high-speed rail.
- An innovative investment strategy, including an analysis of the potential incentives that the federal government can bring to bear to encourage aviation stakeholders to adopt the measures identified in the blueprint. Consideration should be given to financial incentives, such as targeting more funds to certain kinds of projects or types of airports, as well as to incentives that would involve modifying existing regulatory and administrative requirements, such as allowing changes in the methods of determining landing fees.

We provided a draft of this report to DOT and FAA for their review and comment. The two agencies generally concurred with the facts presented in the draft report. They provided some technical clarifications, which we have incorporated into this report where appropriate. Neither agency specifically commented on the draft report's conclusions and recommendations; for the most part, they did not discuss the additional measures that we recommended for consideration in developing a blueprint for future capacity enhancement.

FAA did provide comments on one of the measures—the wayport concept. FAA said a panel of DOT and FAA experts had examined the near-term benefits of the wayport concept in the late 1980s. The panel concluded in 1990 that wayports would provide little or no benefit at the time because new hubs were not needed and airlines would be unwilling to use them. In its response, FAA also noted that airlines jealously guard their transfer functions and have ambitious expansion plans at their current hubs to meet future demand. Because wayports would serve mainly as transfer points for passengers, FAA said, the absence of originating passengers would lead to relatively low concession revenues, and airports would have to charge higher landing fees and rents to remain fiscally sound.

As we indicated in this report, we remain impartial as to which measures are the best ones to adopt in any long-term plan for the air transport system. However, we are concerned that FAA's response misses a key point: in the long term, a successful strategy requires a careful look at measures other than expanding current hubs. Because so many key airports are severely restricted in their ability to add runways, other options must figure into long-term plans, even if they appear to have little merit in the short term. The panel may or may not have been correct in deciding that wayports were not desirable in 1990, but since then, dramatic changes have occurred in the system, such as rapidly escalating costs for, and increasing local opposition to, new runway construction at crowded hub airports. In addition, the rapid growth of regional airlines, regional jets, passenger enplanements, and cargo and express mail services has changed the aviation environment.
In light of these changes and the conditions that are likely to exist in the air transport system over the next 40 years and beyond, we believe all of these measures, including wayports, deserve a fresh look. The judgments and decisions that are eventually rendered about these measures also need to be rooted in an in-depth, data-rich analysis. In this regard, FAA's current position on wayports appears lacking. For example, FAA has performed no quantitative analyses or conceptual modeling to support its conclusion about the impact of wayports on airport revenues and fees and airline competitiveness. In the years since the DOT/FAA panel examined the wayport concept, three major studies performed by reputable aviation experts outside FAA have concluded that wayports merit further study. Like us, these experts have not endorsed wayports but have called for developing more detailed information to support a sound decision. In the end, developing a meaningful blueprint to enhance capacity for the 21st century will require an expansive vision, a clear understanding of the realities facing the air transport system, and a sound evaluative approach that considers a broad range of possible solutions.

As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 7 days after the date of this report. At that time, we will send copies of the report to the Secretary of Transportation; the Administrator, Federal Aviation Administration; and interested Members of Congress. Copies will be made available to others upon request. If you or your staff have any questions about this report, please contact me at (202) 512-3650. Appendix IV lists key contacts and contributors to this report.

We examined efforts made by aviation stakeholders to reduce airline flight delays. Our work concentrated on three questions: (1) What initiatives are planned or under way by the federal government, airlines, and airports to address flight delays? (2) What effect are these initiatives likely to have on reducing delays? (3) What other options are available to address delay problems?

To determine what initiatives were planned or under way by the Department of Transportation (DOT) and the Federal Aviation Administration (FAA), we primarily spoke with program-level officials. To obtain a preliminary list of efforts, we reviewed congressional hearings, examined FAA and DOT publications, viewed FAA and DOT Web sites, reviewed academic and research studies, and read articles in the aviation press. From the list compiled from these sources, we held teleconferences and discussions with officials directly responsible for the programs leading the efforts. These included representatives from the offices of Free Flight Phase 1; System Capacity; and Communications, Navigation, and Surveillance. We also asked these officials and higher-level officials to identify any other initiatives not on our preliminary list.

To learn about airline initiatives, we contacted the Air Transport Association and the Regional Airline Association to discuss approaches to reducing flight delays. In addition, we obtained contacts at the airlines from these organizations and held discussions with representatives from American, Atlantic Coast, Atlantic Southeast, Continental, Delta, Federal Express, Northwest, Southwest, United, and US Airways to discuss in-house efforts to address flight delays.
To learn about airport initiatives to reduce delays and add capacity, we met with representatives of the Airports Council International - North America and obtained the names and contact information of the council's members who were responsible for addressing delay issues. On the basis of this information, we held discussions with representatives of airports in Atlanta, Boston, Chicago, Dallas-Ft. Worth, Las Vegas, Los Angeles, Miami, Minneapolis-St. Paul, New York, Newark, Philadelphia, Phoenix, Pittsburgh, San Diego, San Francisco, and Seattle. We also visited Atlanta Hartsfield, Boston Logan, Chicago O'Hare, Dallas-Ft. Worth, Minneapolis-St. Paul, New York Kennedy and La Guardia, and Newark airports.

To examine the extent to which the initiatives will likely reduce flight delays, we reviewed congressional hearings, examined FAA statistics on demand and capacity growth, and held discussions with FAA and DOT officials. We also reviewed studies critiquing actions under way and planned, as well as forecasts of future airline activity and demand. We obtained FAA data on demand and capacity growth at different airports and followed up with FAA officials to obtain additional insight on their reports and data. We used reports from such organizations as the Transportation Research Board and San Francisco International Airport, and we also used journal articles that described trends in air traffic demand and how current initiatives affected those trends. We reviewed congressional hearings at which representatives of federal agencies, airlines, and airports reported how different efforts would affect delays. We also contacted aviation experts affiliated with the Airport Consultants Council, an airport industry consulting trade association, to discuss the impact of these initiatives.

To learn of other options available to address delays, we drew on a large variety of sources. Using information from more than a decade of our work on air transportation issues, as well as information we obtained for this particular study, we identified a broad range of studies conducted by various researchers. We also reviewed assessments of these options by FAA, airports, and the DOT Office of the Inspector General, and we discussed the advantages and disadvantages of each option with FAA officials and various interest groups.

Demand management studies - DOT and FAA. A presidential directive issued on 12/7/00 directed the Department of Transportation (DOT) and the Federal Aviation Administration (FAA) to (1) study market-based congestion pricing and other demand management solutions to reduce delays and (2) undertake a policy analysis of how these solutions might be implemented, their potential impact, and any statutory impediments. Status: Ongoing. In a June 12, 2001, Federal Register notice, FAA requested comments on demand-management options that could be used to replace the temporary administrative limits on aircraft operations at La Guardia. Comments were due on October 12, 2001; however, FAA has indefinitely suspended this review. In an August 21, 2001, Federal Register notice, DOT requested comments on using market-based approaches to relieve flight delays and congestion at busy airports. Comments were due on November 19, 2001; however, DOT has indefinitely suspended this review.
Task force on short-term accommodation of the Wendell H. Ford Aviation Investment and Reform Act for the 21st Century (AIR-21) slot exemptions at La Guardia Airport. FAA, DOT, and the New York/New Jersey Port Authority, working collaboratively, implemented an interim procedure to reallocate (on a lottery basis) schedule slots to airlines at La Guardia. Status: Completed. FAA has reallocated 159 AIR-21 slots to 13 carriers under an interim plan that became effective on 1/31/01.

FAA air traffic organization. An executive order issued on 12/7/00 established a "performance-based organization" within FAA that is designed to increase the efficiency of the air traffic control (ATC) system. Status: Ongoing. FAA is developing an implementation plan and conducting a nationwide search for a chief operating officer.

Operational evolution plan. FAA and MITRE are developing a 10-year plan to address long-term system capacity issues and solutions for airports, airlines, and the federal government. Status: Ongoing. FAA completed version 3.0 of the operational evolution plan, which was released in June 2001.

National Airspace System (NAS) redesign. This is a long-term initiative to reconfigure NAS airspace routing and use, thereby improving system efficiency. Status: Ongoing. Completion of the NAS redesign project is expected by the end of fiscal year 2006.

Choke point initiative. Short-term efforts focus on relieving congestion at critical "choke points" in the Northeast. Status: Ongoing. To date, seven choke points in the Northeast have been identified by a group of airlines, FAA management, and the National Air Traffic Controllers Association (NATCA). Twenty-one action items were identified to address problems at the choke points, of which 11 have been implemented. All action items should be completed by 7/31/02.

Evaluation of air traffic management. A team of FAA and Air Transport Association representatives visited 34 air traffic facilities between July 19 and August 6, 1999, and evaluated air traffic management throughout these facilities. Status: Completed. The team identified 165 action items related to individual facilities, FAA's Command Center, and the NAS. The items were completed by 7/28/00.

Spring/summer delay reduction. Beginning in late 1999, FAA began studying ways to reduce delays for spring/summer 2000 and beyond. Action items focused on improving communications between FAA and airlines, using available airspace more efficiently, using new technologies, establishing a strategic planning Web page for FAA's Command Center, and providing real-time weather information to users. Status: Ongoing. The procedures to address these action items were implemented in March 2000, and a formal evaluation was completed in December 2000. Actions were taken to improve procedures for spring/summer 2001, with an emphasis on additional training of FAA and aviation users. This evaluation will be conducted annually.

TAAP pilot program. FAA is engaged in a pilot program involving 120 city-pairs to test the feasibility of allowing aircraft to operate at lower, less congested altitudes. Status: Ongoing. Test results have been positive, and FAA has reached an agreement with NATCA on proposed procedural changes. After completing training for controllers and pilots, TAAP was implemented at some facilities following a formal testing period.

Collaborative convective weather forecasting. This effort was undertaken to improve the ability to predict severe weather, ultimately resulting in better aircraft routing. It (1) collects weather information from the National Weather Service, airlines, and 20 central weather service units and (2) develops a collaborative convective forecast product, which is disseminated to FAA and user facilities. Status: Ongoing. The collection and dissemination process is in place. FAA has evaluated efforts from the year 2000 and has implemented changes and conducted training for the upcoming convective season. These efforts will be evaluated annually.

Use of Canadian airspace. This initiative is designed to enhance the use of Canadian airspace by U.S. air carriers through (1) new procedures for the automatic transfer of flight plan data between FAA and NavCanada ATC facilities and (2) an updated structure of overflight fees for airlines using Canadian airspace. Status: Ongoing. FAA has implemented procedures to ensure that NavCanada ATC facilities have adequate ATC staff before U.S. planes are routed over Canadian airspace. This effort has already helped relieve congestion at the Cleveland and Minneapolis centers. FAA and NavCanada are currently discussing expanded use of Canadian airspace and overflight fees.

Command Center communications. This is an effort to improve communications among FAA's Command Center, its ATC facilities, and airlines to smooth the flow of flights in the NAS. Status: Completed. FAA's Command Center holds several teleconferences daily with ATC managers and air carriers to discuss weather conditions and other factors causing delays at specific locations and to determine appropriate solutions for congestion in the NAS.

Use of military airspace. This initiative involves releasing special-use military airspace for commercial operations, including operations in the Buckeye Military Operations Area (located in the Ohio Valley), and centralizing information on special-use airspace. Status: Ongoing. DOD is releasing some airspace for commercial use at certain times of the day, mainly in areas east of the Mississippi. Discussions are continuing on further use of military airspace by commercial carriers.

Airport capacity design teams. This is a long-term initiative in which FAA, airport operators, and aviation industry groups form airport capacity design teams at various airports to identify and evaluate alternative means, including procedural and technological innovations, to enhance existing airport capacity to handle future demand. Status: Ongoing. Several reports have been issued. Since 1998, capacity reports or tactical initiative studies have been produced for six airports; three more studies are in progress.

Airport capacity benchmarks. FAA analyzed the capacity of 31 key airports in the NAS. Status: Completed. In April 2001, FAA released its final report on all of the airports that it studied.

Free-flow enhancements. This program is designed to focus government and industry efforts on specific enhancements (e.g., traffic flow and hardware problems) needed to improve the free flow of traffic in the NAS. Status: Ongoing. FAA has identified Houston Bush Intercontinental Airport for a demonstration project, which started in fiscal year 2001. The project will look at expanding the use of flight management systems and global positioning system (GPS) capabilities to accommodate additional traffic resulting from the construction of a new runway.

Military airport program. This program provides financial assistance to civilian sponsors of military airfields that are converted to civilian or joint military-civilian use to enhance airport system capacity and reduce flight delays. AIR-21 authorized adding 3 airports to the program (from 12 to 15 participants). Status: Completed. FAA selected three new airports on 1/8/01: Mather Air Force Base as a backup airport for Sacramento International's cargo and general aviation (GA) traffic; March Air Force Base as a backup airport for Los Angeles International's cargo traffic; and Gray Army Airfield as a joint-use commercial service (primary) airport for Killeen, Temple, and Fort Hood, TX.

Free Flight Phase 1. "Free flight" tools are being used at select locations throughout the system, and results are being evaluated. Status: Ongoing. Phase 1 is scheduled for completion in 2002.

Automatic Dependent Surveillance-Broadcast (ADS-B). This initiative is intended, among other things, to manage aircraft-to-aircraft separations so as to land aircraft more efficiently. Status: Ongoing. Two years of operational demonstrations and flights were successful in Alaska and the Ohio Valley region. ADS-B services are now being provided in the Bethel, AK, area, and test infrastructure has been established in Memphis, TN, and Louisville, KY. Additional operational demonstrations and evaluations are planned. A preferred ADS-B link technology will be selected in 2001.

Automation systems replacement. This effort involves the replacement of legacy software and interfaces that make up the flight data processing and radar data processing automation systems. Status: Ongoing. This program began in early 2000, and completion is targeted for 2008. Funding in fiscal year 2001 is for initial analyses and a functional audit for the program.

NIMS and operations control centers. Status: Ongoing. The NOCC opened at the Command Center in March 1999. Deployment of NIMS is expected in 2003, and completion is expected in 2005. The three regional OCCs opened in June 2001, and full capabilities are expected to be in place in 2003.

Navigation procedures and routes. This initiative focuses on consolidating and streamlining the development and approval of navigation procedures and routes. On a test basis, FAA and carriers are using the terminal area route generation, evaluation, and traffic simulation (TARGETS) tool to create area navigation (RNAV) arrival and departure procedures. Status: Ongoing. TARGETS is being tested at eight major airports, and FAA expects to distribute the process and tools to other airports in the future.

Operational performance measures. This initiative focuses on developing meaningful operational performance measures to help manage the NAS and improve operational efficiency. Through a designated reporting system, 10 participating carriers provide FAA with taxi-out, takeoff, on-ground, and taxi-in data at 21 airports. FAA then provides these data to its ATC facilities and to airports and airlines. Status: Ongoing. Data have been generated and disseminated since January 2000, but the system is still being validated. FAA is developing 18 new metrics related to the Command Center's operations; 11 have been agreed upon by FAA and the airlines, and the remaining 7 are still being examined.

Local area augmentation system (LAAS). When operational, LAAS is expected to yield the high accuracy, availability, and integrity needed for category I, II, and III precision approaches (instrument landings) in all weather conditions. If successful, FAA plans to purchase up to 160 LAAS installations (46 category I and 114 category III). LAAS can also increase the use of existing airports that currently are not available because of restricted areas or approaches. Status: Ongoing. Using a LAAS test prototype, FAA has flown over 240 approaches with a Boeing 727 and a Falcon 20 aircraft. FAA expects to have at least one category I LAAS installed and authorized for public use by 2002 and a category III LAAS available by late 2005. Full deployment of LAAS is scheduled to begin in 2002 and be completed by 2010.
Improving environmental approval process - Federal Aviation Administration. This project will identify environmental delays, streamline environmental procedures, and expedite Environmental Impact Statements (EIS) for major runway projects at large hub primary airports. Status: Ongoing. In April, FAA submitted its Report to Congress on the environmental review of airport improvement projects. FAA will assign an EIS team of experts to each new major EIS and improve interagency environmental coordination at the state and federal levels. It will also increase environmental resources through new hires in the Airports Office, reimbursable agreements with airports to fund expedited EISs, and amendments of existing third-party contracts for more consultant support. FAA also plans to reduce the amounts and types of environmental documentation required and to issue a "best practices" guide.

Improving environmental approval process - American Association of Airport Executives (AAAE) and Airports Council International-North America (ACI-NA). The goal of this proposal, called Expedited Airport System Enhancement, is to speed runway construction and other critical expansion projects at the nation's most congested airports by both streamlining and expediting current environmental reviews. Status: Ongoing. AAAE and ACI-NA introduced the legislative proposal to the Congress and the administration in March 2001.

Airline initiatives. An aircraft isolation policy has been implemented at Chicago O'Hare, and AVOSS was tested at Dallas-Fort Worth International Airport with positive results.

American Airlines initiatives include adjusting flight times throughout its system to reflect the longer gate-to-gate departure and arrival times being experienced (American's flight times, as well as those of American Eagle, are reviewed continuously and revised with published schedules); reviewing and adjusting the schedule of American Eagle operations to minimize crowding in ramp areas and improve operational efficiency; and testing data-link capabilities on four of American's 767 aircraft serving European destinations, with FAA tests planned for the Miami Center in 2002 involving over 24 of American's 737-800 aircraft.

Continental Airlines initiatives include adjusting its flight schedules at Newark (completed in 2000); making service adjustments in small and medium-sized cities to relieve congestion in some of its hubs (2000); and collaborating with FAA and the Port Authority on new equipment at Newark and other New York metropolitan area airports, including obtaining the Integrated Terminal Weather System (ITWS) prototype at Newark, which benefits all New York area airports. Continental's nationwide flight times are reviewed six times each year and adjusted as necessary.

Delta Air Lines initiatives include working with FAA and ATC to improve capacity at the Atlanta hub, including rescheduling propeller aircraft traffic outside of jet arrival and departure banks to improve flow; adjusting the schedule structure at Atlanta Hartsfield Airport for spring 2001 to even out travel peaks (made in early 2001); assigning aircraft to specific city-pair routes each day to minimize the domino effect of delays at any single major airport; adjusting flight times throughout its system to reflect actual gate-to-gate departure and arrival times (Delta's flight times, as well as those of owned subsidiaries ComAir and Atlantic Southeast, are reviewed continuously and revised four times each year); and installing the Heads-Up Guidance System (HUGS) on its aircraft as a navigational aid during poor-visibility weather conditions.
Delta’s new 737-800 aircraft and the regional jets for ComAir and Atlantic Southeast are being delivered with the HUGS installed. The MD-88 fleet will be retrofitted in the future. Guardia in 1999 and is confining its New York operations to Newark, JFK, and Stewart. Ongoing. LAAS at Memphis is operational, and FedEx has equipped one aircraft with a GPS landing system for testing GPS approaches. Research continues on the use of HUGS and FLIR. Operational evaluation of Safe Flight 21 surface situational awareness applications conducted in 2001, and additional demonstrations at Memphis are planned for 2002. Investing in technology for the meteorological department to assist in poor department began using turbulence avoidance systems to plan alternative weather planning and turbulence avoidance routing. Adjusting flight times throughout its system to reflect actual gate-to-gate departure and arrival times will continue to work with FAA to establish procedures. Northwest’s flight times are adjusted eight times each year. Manchester, NH, and Portland, ME, in the last year. Completed. Schedule revisions were incorporated during periods of low demand Adjusting flight times throughout its system to reflect actual gate-to-gate departure and arrival times into the January 2001 schedule for the most congested airports served by Southwest. The schedule published in June 2001 reflected additional revisions. Francisco International Airport occurred on 3/5/01. The business approach at Southwest is generally designed to serve outlying airports. In-house flight planning system was implemented in 1997. The air traffic specialist position was metropolitan areas and withdrawing from San Francisco International Airport filled in December 2000. Ongoing. Flight times are continuously study on-time performance and find ways to reduce delays and cancellations Exploring ways to use data-links to provide reviewed, and revisions are incorporated into published schedules. Recommendations from Southwest’s Punctuality Team will be submitted on a periodic basis. Studies of data-link use are still in progress. Completed. Aircraft have been isolated in a limited number of markets. Revising ramp parking assignments for has been implemented, reducing taxi times for regional aircraft by up to 50 percent. Adjusting flight times throughout its system to reflect actual gate-to-gate departure and arrival times reviewed, and revisions are incorporated into published schedules. The experience with DDTC was (DDTC) at Dulles to digitally provide taxi times and routes to the cockpit successful; attempts are under way to get similar systems installed at other locations. United is working with FAA, NATCA, and the Air Line Pilots Association to agree on the use of LAHSO at O’Hare. Isolating aircraft routes that pass through Philadelphia and La Guardia to isolate systemwide delays are isolated to the extent possible. Developing a “slot-swapping” model to reduce specific flight and overall system delays was implemented to enable US Airways’ air traffic manager to make decisions more quickly. Deploying surface movement technology at congested airports to reduce ground congestion and taxi times. Increasing the number of available backup aircraft from 11 to 16 technology has been installed at Philadelphia. Additional backup aircraft were added in August 2000. 
Redesigning the schedule structure and reducing service at its Philadelphia hub to match departure and arrival activity to the capacity of the airport structure were implemented in June 2001. Adjusting flight times throughout its system to reflect actual gate-to-gate departures and arrivals continuously, and revisions are incorporated into published schedules. Aloft technology capability is planned to revise the flight plans of flights that are already en route for implementation in 2002. Obtaining larger Airbus A-321 aircraft to reduce frequency in selected markets in February 2001, with more deliveries planned through the end of the year. ATC personnel to implement new technology to enable dual landings on parallel runways during poor weather conditions monitor is installed and certified, but is not in operation. Work for similar technology at the Pittsburgh and Charlotte hubs is ongoing. US Airways is in the process of traffic and airspace management Implementing 21 additional initiatives to improve schedule reliability, including severe weather recovery plans, aircraft use improvements, crew scheduling, navigation capabilities, and other technological investments implementing 21 additional initiatives to improve schedule reliability; it continues to work with FAA on air traffic and airspace management. Completed. Runway reconstruction was completed in 1999. M14 are scheduled for completion in 2002; runway 8R taxiway will be completed and taxiway L will be extended in 2003. Intersection upgrades scheduled for completion in 2002. will be active in 2002. Status EIS for the new runway was issued in September 2001; it is scheduled for completion in 2005. Completed. The ground control station is currently operational on an as- needed basis. aid controllers Implementing a new gate-leasing policy— airlines must “use or lose” The “use or lose” policy is in effect for US Airways, American, Delta, and United. Promoting the use of regional airports to reduce flight demand at Logan taxiway are undergoing environmental review. Efforts to promote regional airports began approximately 4 years ago; Massachusetts is spending $500,000 in 2001 for a public marketing campaign. Initiating the World Gateway Program, which includes construction of 2 new terminals, the reconstruction of 2 concourses (adding 20 to 30 gates), and the extension and reconfiguration of taxiways currently under environmental review and is scheduled for completion in 2008. The technology initiative is under FAA review; equipment installation is expected to be completed in 2006. Undertaking the Chicago Airport System Strategic Capacity Initiative to share costs with FAA for installation of navigation aids and surface movement management systems. Ongoing. The capacity enhancement team meets monthly. design team to develop capacity-enhancing options technologies are being installed and tested. Removal of runway restrictions is under environmental review. The design layout for the new runway greater use of regional jets Constructing a new runway Major improvements recently completed or under way include: Studying jet blasts to more precisely is being reviewed. Completed. The jet blast study has been determine the minimum intervals between aircraft departures and arrivals completed and separations have been reduced. N.Y. subway system and commuter rail by 2002 and make a second connection by 2003. The runway upgrade is in design testing. Objective Obtaining Port Authority funding of the design. 
In the New York area, the Port Authority is funding a prototype ITWS while FAA develops a production system planned for installation in 2002; a redesign of New York airspace to reduce operating restrictions and conflicts with other area airports is planned for completion by 2007; and the slot lottery at La Guardia became effective on 1/31/01. Los Angeles International is adding 50 to 75 gates, with the project to be completed by 2015 and the EIS plan being revised to reflect the added gates; to encourage the use of nearby Ontario International Airport, Los Angeles International is supporting an application by United Parcel Service for freight service from Ontario to China. San Francisco International worked with United Airlines to refine its flight schedule (implemented in November 2000) and is installing a precision runway monitor (PRM) and a simultaneous offset instrument approach (SOIA). Other airport projects recently completed or under way include new, extended, and reconstructed runways, with completion dates ranging from 1999 through 2006; new terminals, concourses, and gates, including an international terminal adding 12 widebody gates, a commuter terminal adding 38 gates, and a concourse expansion adding 4 more gates; two new ramp control towers, the first completed in July 2001 and the second expected to be operational by the end of 2001; widened and reconfigured taxiways, hold pads, and deicing pads; additional instrument landing systems and improved runway and taxiway lighting; relocation of general aviation hangars and fixed-base operators to reliever airports or other parts of the airfield; converging runway display aids and new visual approach procedures; and standing capacity task forces and airspace redesign groups that meet regularly.

Miscellaneous initiatives with indirect impact - Federal Aviation Administration. Challenger Session 2000: this November 2000 seminar brought together aviation community participants to exchange views on approaches to reduce flight delays. Status: Completed. A transcript of the seminar proceedings was prepared and made available on the Internet.

Best practices project - Office of the Secretary of Transportation (OST). OST initiated this project to identify (1) the "best practices" used by airlines and airports to improve consumer access to flight information and (2) the services that minimize the adverse effects of flight delays and cancellations on consumers. Status: Completed. A report on best practices was released in October 2000.

On-time reporting committee - DOT. DOT initiated this committee to address requirements in AIR-21 that the Department take steps to consider changes to current on-time reporting by airlines (14 CFR part 234) to provide clear information to the public about the nature and sources of flight delays and cancellations.

Consumer fact sheet - DOT. This document provides consumers with information to help them reduce their chances of encountering flight delays and to assist them in coping with delays.
Status: Completed. The fact sheet was issued on 11/2/00 and is available on the Internet and in hard copy. It is the latest in a series of fact sheets for air travelers issued by DOT's Aviation Consumer Protection Division.

Air Travel Consumer Report - DOT. This monthly report provides consumers with information to make a more informed choice when making a flight reservation. Status: Completed. This information is now provided in DOT's monthly Air Travel Consumer Report, available on the Internet.

"Free flight" is defined as a safe and efficient operating capability under instrument flight rules in which pilots have the freedom to select their flight path and speed. Air traffic restrictions are imposed only to ensure separation between planes, keep an airplane from exceeding an airport's capacity, prevent unauthorized flight through special-use airspace, and ensure flight safety. Restrictions to correct an identified problem are limited in extent and duration. Any activity that removes restrictions represents a move toward free flight.

Presented below are additional details about each of the seven measures listed in table 5 of this report, along with additional information from previous studies that have examined, and in some cases advocated, one or more of these measures.

The first measure, which involves adding new airports in metropolitan areas to augment existing congested airports, has the potential to profoundly affect the capacity of the entire system, according to past studies. These studies say that building new airports in congested metropolitan areas holds perhaps the greatest promise for providing the capacity needed to meet rapid passenger growth. Also, multiple airports in certain areas, like those in New York and the greater Los Angeles area, each have their own full-service patterns and can offer passengers convenience and improved accessibility. However, past studies were not optimistic about the probability that many new airports will materialize, given a number of formidable barriers, which include (1) finding a suitable site that does not conflict with other potential uses of the land, (2) overcoming concerns about noise and other environmental problems in sensitive areas, (3) providing adequate landside access (e.g., roads), (4) justifying the large investment required to build a new facility, and (5) gaining support and financial backing from incumbent airlines.

Several past studies have discussed the development of a new type of airport, called a "wayport," which differs from conventional airports in being farther removed from large metropolitan areas and serving a special purpose. Under the wayport concept, such airports would be developed—either by using existing underused regional airports and former military bases or by building new airports—to supplement the capacity of congested or capacity-constrained major hubs. Wayports are envisioned as potentially large facilities—located on the fringe of or away from large metropolitan areas and near smaller cities (100,000 to 200,000 population)—that would serve mainly as transfer points for long-distance air travel routes. Except for nonstop service from one city to another (called "city pairs"), all flights would connect at these points to accomplish passenger transfer. As envisioned, service between these transfer points could be supplied either by large aircraft or by conventional aircraft operating on a high-frequency schedule.
Connection between wayports and major cities in the region could be provided by short-haul aircraft or high-speed ground transportation, such as rail or highway. Wayports would be regional multimodal transportation hubs offering connections to surrounding cities by whatever means of transport proved cost-effective. They would also serve as cargo and mail handling centers. Building wayports may not face the degree of opposition that building new airports would, especially from local communities, because wayports would be farther from large urban centers. Also, some studies have suggested that wayports would be less costly than comparable airports built in major metropolitan areas, could provide more open competition among airlines, and would likely result in less airspace congestion because of their location away from congested metropolitan areas. However, the wayport concept has never been tried, and gaining acceptance from airlines, sponsoring authorities, and affected communities might prove difficult.

The next measure involves the creation of more regional airports at underused airfields located about 50 miles from congested metropolitan airports. Many such underused facilities already exist throughout the nation. These regional airports could be used under two scenarios. Under the first, the regional airports would be similar to wayports, except on a smaller scale; they would be used mainly for transfer passengers, particularly at large, congested hubs that have a high percentage of transfer passengers. Under the second, a network of regional airports located around a major congested hub would take origin and destination passengers diverted from the large hub. The regional airports around Boston Logan Airport are an example. The Massachusetts Port Authority (MASSPORT), which operates Logan, is working with state aviation directors and transportation agencies to make more efficient use of regional airports around Logan, including Manchester (New Hampshire), Worcester (Massachusetts), and T.F. Green (Providence, Rhode Island), to steer millions of new origin and destination passengers to these airports by 2010. All of these regional airports are within an hour's drive of Logan. Mid-America Airport near St. Louis is another example of a potential candidate for a regional airport serving St. Louis-Lambert Field, a major hub for American Airlines. Located just 24 miles from downtown St. Louis, Mid-America is a joint-use civilian and military facility colocated with Scott Air Force Base. It has two well-spaced runways over 8,000 feet long and substantial excess capacity.

Regardless of which scenario is used, implementing this measure could provide needed system capacity and accommodate some of the growth in air travel over the short term without adding significantly to the congestion and delay now experienced at the busiest metropolitan airports, according to past studies. Also, the cost to upgrade and expand existing facilities would likely be less than for new airports and possibly somewhat less than for wayports. To the extent that regional airports were located in less densely populated areas, concerns about noise and conflicting land use might be less than at large metropolitan airports. Like the previous two measures, however, this measure would require at least one airline to commit to incorporating a regional airport into its long-range hubbing service system.
Similarly, the airport must secure the financial resources necessary to develop the airport to its full capacity.

The next measure relies on market forces to redistribute flight demand and allocate existing airport resources efficiently. Past studies and current literature suggest that current airport access policies and the approach used to determine landing fees have created incentives that lead to the inefficient use of existing capacity at many congested airports. Two policies in particular have been cited as influencing airline behavior in this regard: the first deals with an aircraft's access to airports, the second with the fees that airports can charge for landing. By law, all aircraft—corporate and other general aviation aircraft, cargo carriers, and airlines—have equal landing access rights. This applies to small and large aircraft alike. When they land, laws and regulations require that airports charge aircraft operators on a nondiscriminatory, reasonable basis—generally based on the landed weight of each aircraft. Although this fee structure poses no problems at noncongested airports, it can have profound consequences at congested ones. Some economists and industry representatives contend that these policies allow airlines—which are driven by competitive pressures and profit-maximizing motives—to overschedule flights at busy airports during peak hours and to use smaller aircraft and more frequent flights to meet passenger demand. They also contend that the current system provides little incentive for airlines or general aviation aircraft to use other nearby airports that have underused capacity.

Two market-based methods are most commonly mentioned for altering the behavior of airlines and passengers at congested airports to better ensure that existing capacity is used efficiently: differential pricing and auctions. Under a differential pricing approach, landing fees would be higher at times when demand exceeded the availability of landing slots and lower at other times. An auction approach would allow airports to periodically auction a fixed number of takeoff and landing slots—equal to the airport's capacity—to the highest bidders. For example, an airport, in conjunction with FAA, could determine its per-quarter-hour takeoff and landing capacity, and a competitive bidding process among carriers could determine fees during each period. The two methods differ in the simplicity of implementation and in the certainty they would provide about congestion levels. Of the two, differential pricing is simpler to implement, but it provides less certainty about congestion levels. Auctioning takeoff and landing slots provides greater certainty about congestion levels but entails a more complex design and may be more costly to operate.

Because of increased congestion and delays at some airports, airport managers and FAA were seriously studying this option before September 11, 2001. For example, FAA and the New York/New Jersey Port Authority were studying market-based and administrative solutions for use at La Guardia to bring demand and airport capacity into alignment and reduce delays. It was anticipated that some form of demand management would be adopted there sometime next year. However, citing the significant decrease in operations at La Guardia following the terrorist attacks, FAA has suspended this study.
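To make the mechanics concrete, the sketch below contrasts a conventional weight-based landing fee with the two market-based methods described above: a differential (peak/off-peak) fee and a uniform-price slot auction. The fee rates, aircraft weights, and bids are hypothetical values chosen only for illustration; the code is a minimal sketch of the concepts, not a model of any actual airport's fee structure.

```python
# Illustrative only: hypothetical rates, weights, and bids.

def weight_based_fee(landed_weight_lbs, rate_per_1000_lbs=2.50):
    """Conventional fee: proportional to landed weight, regardless of
    when the aircraft lands. A small regional jet pays far less than a
    widebody for the same scarce peak-hour runway slot."""
    return landed_weight_lbs / 1000 * rate_per_1000_lbs

def differential_fee(landed_weight_lbs, peak, peak_surcharge=800.0):
    """Differential pricing: the weight-based fee plus a flat surcharge
    for landing during a congested peak period."""
    fee = weight_based_fee(landed_weight_lbs)
    return fee + peak_surcharge if peak else fee

def slot_auction(bids, slots_available):
    """Uniform-price auction: the airport offers a fixed number of
    peak-period slots; the highest bidders win, and all winners pay the
    lowest winning bid (the clearing price)."""
    ranked = sorted(bids, key=lambda b: b[1], reverse=True)
    winners = ranked[:slots_available]
    clearing_price = winners[-1][1]
    return [(airline, clearing_price) for airline, _ in winners]

# A 50-seat regional jet (~47,000 lbs) vs. a widebody (~400,000 lbs):
print(weight_based_fee(47_000))             # 117.50 -- cheap peak access
print(weight_based_fee(400_000))            # 1000.00
print(differential_fee(47_000, peak=True))  # 917.50 -- peak now costly

# Four bidders competing for two peak slots:
bids = [("A", 1200.0), ("B", 950.0), ("C", 700.0), ("D", 400.0)]
print(slot_auction(bids, slots_available=2))  # A and B win; both pay 950.0
```

The comparison shows why critics consider weight-based fees inefficient at congested airports: the small jet's peak landing costs about a tenth of the widebody's, even though both consume the same runway slot. Either a peak surcharge or an auction raises the cost of the scarce slot toward its market value, which is what gives operators an incentive to shift small aircraft to off-peak periods or nearby underused airports.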
Proponents of a market-based approach cite several advantages: (1) it would bring about needed changes without artificial or forced administrative or regulatory measures, (2) the costs of implementing it are relatively modest, and (3) the increased revenues derived from various forms of congestion pricing could be used by airports to fund needed capital development projects. Critics say this approach could increase passenger ticket prices; reduce access for financially weaker small carriers; and adversely affect service to small communities, which would be less likely than large cities to retain their service to capacity-constrained airports.

Past studies have mentioned a number of administrative and regulatory methods for managing flight demand. These methods include maintaining or expanding slot restrictions, adjusting airline flight schedules, diverting smaller aircraft to reliever airports, using larger aircraft at congested airports, and developing more flexible gate access policies. Each method is described below.

Since 1969, four airports—La Guardia, JFK International, Washington Reagan National, and Chicago O'Hare—have operated under a slot system, whereby the number of flight operations is capped and takeoff and landing rights (slots) are allocated administratively, often through grandfathering, a lottery, or some other nonmarket mechanism. These slots have been somewhat effective in controlling delays at these airports. However, provisions in the Wendell H. Ford Aviation Investment and Reform Act for the 21st Century (AIR-21) would eliminate the slot system at three of these airports by 2007. At La Guardia, AIR-21 provided immediate exemptions from the slot system for flights by new entrants and flights serving small communities. Almost immediately, the airport was overwhelmed with applications for over 600 new flights to and from the airport. Because the requests far exceeded La Guardia's capacity, FAA, in cooperation with the airport, implemented a temporary lottery to allocate a limited number of slots and requested that a study of market-based alternatives be completed. However, because of the reduction in aircraft operations at La Guardia following this year's terrorist attacks, FAA has delayed this study until the long-term impact of September 11 on traffic at La Guardia is better understood. Researchers have concluded that slot systems can be effective in controlling congestion at busy airports, but they also note that slot systems can potentially pose barriers to competition and adversely affect service to smaller communities, which are two important congressional concerns.

An alternative to slot systems is to have airlines make voluntary flight schedule adjustments to even out periods of peak demand. In an attempt to reduce congestion, some airlines have recently done this on their own in limited situations. However, they are prohibited by the antitrust provisions of current law from discussing flight schedules with other airlines. Two bills before the Congress (H.R. 1407 and S. 633) would allow air carriers to discuss voluntary flight schedule changes at congested airports to reduce delays. Whether airlines could agree on schedule adjustments to even out peaks in air traffic at crowded airports is uncertain. Historically, critics point to the failure of the airline scheduling committees that existed for the same purpose in the 1970s and 1980s.
The committees—made up of airlines serving the four slot-controlled airports—worked reasonably well before deregulation in 1978, but afterward they found it increasingly difficult to agree on voluntary adjustments. Deregulation brought fierce competition and a sizable drop in passenger fares, a corresponding growth in passenger demand, and increased profit opportunities. As a result, airlines overscheduled flights during congested times to satisfy passenger demand and maximize profits. Moreover, experience has shown that when one airline voluntarily reduces its flights, the freed slots can simply be taken up by other airlines adding flights to their schedules.

This measure would require many general aviation aircraft (including corporate aircraft) and aircraft involved in air taxi service to shift from congested airports to nearby reliever airports, which are underused. Currently, smaller aircraft account for at least 25 percent of all air traffic at most of the congested airports in the nation—many of which have expensive runway projects under way. For example, general aviation and air taxi flights at four severely capacity-constrained airports (La Guardia, Kennedy, Philadelphia, and Boston Logan) account for about 31, 34, 41, and 46 percent of total operations at each airport, respectively. Diverting smaller aircraft away from congested metropolitan airports to reliever airports could free up capacity for use by larger commercial aircraft. For example, congestion pricing mechanisms implemented at Boston Logan in 1988 and at the three New York airports (Kennedy, Newark, and La Guardia) in 1968 produced sizable results. Many general aviation aircraft abandoned Logan for secondary airports, and delays at Boston Logan dropped. After a $25 premium fee was imposed for peak-hour use of runways at the three New York airports, general aviation use dropped 30 percent. Adopting this kind of measure on a nationwide basis would likely require a change in the law that requires airports to provide equal access to all aircraft.

Through regulatory means, this measure would require airlines to fly larger aircraft into congested airports that are currently being served with smaller aircraft. Currently, airlines decide the size of aircraft to fly on their routes. The average size of aircraft serving airports today is getting progressively smaller, because airlines are using smaller aircraft and more frequent flights to meet passenger preferences. For example, in 1999 there were, on average, 10 fewer seats per aircraft than in 1993. In 2000, at La Guardia, one of the most congested airports in America, 5 percent of the passengers traveled on 25 percent of the planes—a reflection of the incentives to which the airlines are responding. Flying larger aircraft (that were full or nearly full) into congested airports could allow airlines to accommodate more passenger growth and potentially decrease flight frequencies, which ultimately could decrease delays and improve the use of existing facilities at crowded airports. However, the unilateral imposition of administrative restrictions by airports on the size of aircraft allowed into congested airports could violate provisions of current laws that require airports to allow equal access to all aircraft. Implementation would likely require a change in such statutory provisions.

This measure would require altering contractual arrangements or use agreements between airlines and airports, which specify the air carriers’ use of the airports’ facilities.
The nature and longevity of two agreements in particular—gate leasing arrangements and majority-in-interest (MII) clauses—can result in the inefficient use of airport facilities and may prevent an airport from undertaking capacity-enhancing capital projects. The terms of gate leasing arrangements can be particularly critical in ensuring the efficient use of airport capacity. By law, airports are forbidden from denying an air carrier reasonable access to airport facilities. However, some large commercial airports have long-term “exclusive use” agreements with airlines for most of their gates, which means that even if a gate is not in use, no other airline can use it without permission from the signatory airline. According to DOT, this practice is contrary to the legal requirement for reasonable access. By locking up all of the gates, even if they are underused, airlines can limit capacity at affected congested airports and, if the practice is prevalent at a number of airports, can effectively limit the capacity of the entire system. Restrictive practices at exclusive use gates are also becoming less prevalent because of the passenger facility charge (PFC) program requirement that competitive access be ensured at a carrier’s exclusively leased gates if that carrier uses PFC-financed gates.

Modification of MII clauses is equally important in ensuring that future capacity can be realized. Current MII clauses give dominant airlines at an airport “veto” power, in effect, over large capital projects that can increase capacity. Encouraging or even requiring airports to develop more flexible, shorter term gate and MII agreements is a way to better ensure that airport capacity can be enhanced. However, this change would not be feasible immediately in many cases, since use agreements between airlines and airports are usually long-term contracts. Airports cannot unilaterally renegotiate shorter or more flexible agreements until these long-term agreements expire.

Unlike other measures that concentrate on enhancing capacity through airport improvements, this category of measures would enhance airport capacity by providing alternative transportation modes to move passengers from one location to another. This measure would involve developing high-speed ground transportation, such as rail, between large metropolitan cities. A portion of a congested airport’s capacity could be freed up by diverting some shorter distance travel demand to high-speed ground transportation. As an alternative to air travel, this measure would be focused mainly on high-density routes of 200 to 500 miles. DOT has designated 11 high-speed rail corridors in U.S. locations, such as the Northeast, California, Chicago, and the Pacific Northwest. Work is under way at several locations, most notably in the Northeast Corridor, and, when completed, could provide viable alternatives to air travel, thereby alleviating pressure on the air transport system. High-speed trains have been used successfully in Europe and Asia and have proven to be viable alternatives to air travel in some cases. For example, the French national railway recently initiated high-speed train service between Paris and Marseilles; the service reduces the rail travel time for the 500-mile trip from 5 hours to 3. The train is expected to siphon off as much as one-fourth of the 2.5 million passengers who travel by air between these cities each year.
Already, one airline serving this route has discontinued its service between the two cities because of the added competition from the new rail service. Although this measure has been tried successfully in Europe and Asia, its cost-effectiveness and technical feasibility in this country have not been demonstrated. For example, trains on Amtrak’s Metroliner service between New York and Washington, D.C., travel up to 125 miles per hour for portions of the trip. However, Amtrak’s estimate of the cost to fully develop the federally designated high-speed rail corridors and the Northeast Corridor is $50 billion to $70 billion over 20 years. Whether ridership will be sufficient to cover this cost is unknown. In the end, competitive fares and comparable portal-to-portal travel times would be key to the success of this alternative.

Another possible application of high-speed ground transportation is to facilitate passenger movement between airports or from city centers to new airports located on the fringe or outside of a metropolitan area. For example, in the long term, MASSPORT plans to connect Boston Logan International Airport to five nearby regional airports by ground transportation, using Logan for long-haul flights and the regional airports for short- and medium-haul flights. One study also suggested that high-speed surface transportation could help the development of wayports, since it would provide links to major cities in the region served without imposing a burden on the airspace and runways at the wayport. Like the previous measure, the cost-effectiveness of such systems would have to be demonstrated in the context of an overall regional airport system to increase capacity.

In addition to those named above, Karyn I. Angulo, Jonathan Bachman, Steven N. Calvo, Jay Cherlow, JayEtta Z. Hecker, David Hooper, Christopher M. Jones, Joseph D. Kile, Steven C. Martin, LuAnn Moy, and Stanley G. Stenersen made significant contributions to this report.
Initiatives to address flight delays include adding new runways to accommodate more aircraft and better coordinating responses to spring and summer storms. Although most of these efforts were developed separately, the Federal Aviation Administration (FAA) has incorporated many of them into an Operational Evolution Plan (OEP), which is designed to give more focus to these initiatives. FAA acknowledges that the plan is not intended as a final solution to congestion and delay problems. The plan focuses on initiatives that can be implemented within 10 years and generally excludes approaches lacking widespread support across stakeholder groups. The current initiatives, if successful, will add substantial capacity to the nation’s air transport system. Even so, these efforts are unlikely to prevent delays from becoming worse unless the reduced traffic levels resulting from the events of September 11 persist. One key reason is that most delay-prone airports have limited ability to increase their capacity, especially by adding new runways—the main capacity-building element of OEP. The air transport system has long-term needs beyond the initiatives now under way. One approach would add new capacity—not by adding runways to existing capacity-constrained airports, but rather by building entirely new airports or using nearby airports with available capacity. Another would manage and distribute demand within the system’s existing capacity. A third would develop other modes of intercity travel, such as high-speed rail, where metropolitan areas are relatively close together. Because of increasing demands on the air transport system and the need to meet security and other concerns prompted by the recent terrorist attacks, the federal government will need to assume a central role.
EPA was established in 1970 to protect human health and safeguard the natural environment. The agency is staffed with large numbers of technically trained personnel; more than half of its employees are engineers, scientists, and environmental protection specialists. Today, it employs about 18,000 people. EPA is headquartered in Washington, D.C., and has 10 regional offices and laboratories across the country.

EPA’s Office of Civil Rights (OCR), a staff office in the Office of the Administrator, is responsible for managing the agency’s discrimination complaints program. This program is intended to ensure that all EPA employees and applicants for employment are afforded equal employment and advancement opportunities free of discrimination. OCR is also responsible for the timely processing and resolution of discrimination complaints. Specifically, discrimination complaints are processed by OCR’s Compliance and Internal Resolution Team.

Over the years, allegations and complaints have been made that EPA tolerates discrimination, retaliates against whistleblowers, and fails to take corrective action on these matters. The agency’s policies and practices were further questioned when an employee won a high-profile court case in 2000. EPA’s equal employment opportunity (EEO) practices have also attracted congressional interest generally, and interest in untimely complaint processing in particular. Hearings before the House Committee on Science in October 2000 highlighted alleged discriminatory conduct.

EPA, like other federal agencies, is required to comply with the nation’s civil rights laws. Title VII of the Civil Rights Act of 1964, as amended, makes it illegal for employers to discriminate against their employees or job applicants on the basis of race, color, religion, sex, or national origin (42 U.S.C. 2000e et seq.). The Equal Pay Act of 1963 protects men and women who perform substantially equal work in the same establishment from sex-based wage discrimination (29 U.S.C. 206(b)). The Age Discrimination in Employment Act of 1967, as amended, prohibits employment discrimination against individuals who are 40 years of age and older (29 U.S.C. 621 et seq.). Sections 501 and 505 of the Rehabilitation Act of 1973, as amended, prohibit discrimination against qualified individuals with disabilities who work or apply to work in the federal government (29 U.S.C. 791 and 794a). Federal agencies are required to make reasonable accommodations for qualified employees or applicants with disabilities except when such accommodation would cause an undue hardship. The Equal Employment Opportunity Commission (EEOC) is responsible for enforcing all of these laws. In addition, a person who files a complaint or participates in an investigation of an EEO complaint, or who opposes an employment practice made illegal under any of the statutes enforced by EEOC, is protected from retaliation or reprisal.

EPA’s EEO program, like those in other agencies, is subject to several regulations. EPA is responsible for developing and implementing its own equal employment program, including establishing or making available alternative dispute resolution programs and adopting complaint processing procedures as required by 29 C.F.R. Part 1614. EEOC Management Directive 110 (Federal Complaints Processing Manual) provides general guidance on how agencies should process employment discrimination complaints. Agencies are also required to provide EEO discrimination complaint data to EEOC (29 C.F.R. 1614.602). EEOC compiles these data and reports them to Congress each year in the EEOC Annual Report on the Federal Workforce.
Information contained in EPA’s discrimination complaint data system was unreliable because of data entry problems. EPA officials also maintain that the computer software, which was obtained from a now-defunct supplier, was flawed and could not report data accurately. Reliable discrimination complaint data are necessary for EPA’s OCR to track complaints, to look for trends that might indicate the need for specific actions, and to respond to EEOC reporting requirements. EPA recently implemented a new EEO data system and is taking steps to train staff members and hold them accountable for maintaining the data system.

Officials attributed the data system weaknesses in part to the now-defunct data management company whose system was used to track and process discrimination complaint information. Officials said the system was flawed and was further compromised because EPA’s EEO specialists did not always enter, update, or maintain discrimination complaint data. As a result, EPA had difficulty providing accurate EEO information. Moreover, EPA had trouble discerning whether there were trends in the workplace problems that lead to EEO complaints; this in turn inhibited understanding sources of conflict and planning corrective actions. EEOC regulations point out that agencies should make every effort to ensure accurate record keeping and reporting of EEO data. Such data foster transparency, which provides an incentive to improve performance and enhances the agency’s image in the eyes of both its employees and the public.

We initially requested discrimination complaint data for a 10-year period (1991-2000). However, OCR officials said they had no confidence in discrimination complaint data prior to fiscal year 1995 because the data are unreliable and source documents were not available to permit their reconstruction. OCR provided discrimination complaint data for fiscal years 1995 through 2002; however, in reviewing these data, we found that the information was incorrect. These data understated the actual number of discrimination complaints on hand, the number of new discrimination complaints filed, the number of complaints closed, and the year-ending numbers. Also, the data provided to us differed from the discrimination complaint data reported to EEOC. For example, the number of discrimination complaints on hand at the end of fiscal year 2000 was reported to us as 176, but EPA reported to EEOC that the number was 264. The number of new discrimination complaints filed in 2000 was reported to us as 79, but the number reported to EEOC was 75.

After we pointed out some problems with the data, OCR manually reviewed source documents and revised these numbers. We did not verify the accuracy of the revised numbers because doing so would have required considerable effort to reconstruct all the data. To determine whether the numbers provided for complaints on hand, new, closed, and ending were supportable, we reviewed the information EPA reconstructed, including handwritten notes. We also selected a number of supporting documents for review and found that the data reported agreed with the supporting documentation. These documents were also reviewed to determine whether the numbers of complaints reported to us matched those reported to EEOC. Although we believe the reconstructed numbers are indicative of the situation at EPA, we cannot attest to the overall accuracy of these data.
Table 1 shows, as reported to EEOC for fiscal years 1995 through 2002, the number of complaints on hand at the start of each year and the numbers of complaints newly filed, closed, and on hand at the end of the year. The number of complaints closed fluctuated from a low of 44 in 1999 to a high of 123 in 2001. For fiscal years 1995 through 2002, a total of 548 people filed 679 complaints. The number of discrimination complainants is usually smaller than the number of complaints filed because a complainant can make more than one complaint. As table 2 shows, the numbers of complainants and discrimination complaints filed spiked in fiscal years 1998 and 2002. OCR officials could not provide any explanation for the increased numbers of complainants and complaints filed in those years.

The agency closed 588 complaints during this period, including 125 dismissals; 48 withdrawals; 222 agency decisions, none of which found for the complainant; and 178 settlements. Settlements represented 30 percent of all discrimination complaints closed over the period. In each year from fiscal year 1996 to 2000, fewer than 20 cases were settled at the agency, while 54 cases were settled in 2001. These settlements represented 44 percent of all discrimination complaint cases closed in 2001. According to agency officials, a number of settlements were reached during 2001 as part of an effort to eliminate the large number of backlogged complaints. Settlements can be achieved by different methods. For example, for the years 1996 through 2001, a total of 29 discrimination complaint cases were settled at the EEOC hearing stage, while another 7 cases were settled while pending before federal district courts. Beginning in 2000, as required by EEOC, EPA began a program to make alternative dispute resolution (ADR) available in the precomplaint and formal complaint processes. The agency uses mediation as its alternative method to resolve EEO complaints and administrative grievances. During the first 6 months of fiscal year 2003, there were 18 requests for mediation; 14 EEO cases were accepted for mediation, 1 case was under review, and 3 cases were pending further action.

The data showed that headquarters discrimination complaints were based mainly on race, reprisal, gender, and age. The specific issues addressed in these complaints were nonselection for promotion, appraisal, and harassment. Similarly, in regional offices the most often cited bases for discrimination complaints were race, reprisal, and gender. The specific issues most cited in the regional complaints were nonselection for promotion, appraisal, harassment, and time and attendance. Table 3 lists the percentages of complaints by basis of complaint. Table 4 lists the percentages of complaints by issue of complaint.

EPA takes a long time to process complaints. Over the fiscal year 1995-2002 period, it took an average of 663 days from the time a complaint was filed until it was closed. A major contributing factor to this lengthy process was the time used to investigate complaints. Over the same 8-year period, the average time to complete an investigation was 465 days. EEOC regulations require EPA and other agencies to complete investigations within 180 days of receiving discrimination complaints unless the period is extended. In 2002, the average time for completed investigations was 427 days, compared with the 180-day standard. Discrimination complaint cases closed in 2002 took an average of 839 days to process.
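The processing-time averages above are straightforward to compute once reliable filing and closure dates exist for each case, which is one reason the data system problems discussed earlier matter. The following sketch is our illustration only, not EPA’s or EEOC’s methodology; all case records are hypothetical.

```python
# Illustrative sketch only: computing average complaint processing time
# (days from filing to closure). All case records are hypothetical.
from datetime import date

cases = [
    {"filed": date(1999, 3, 1), "closed": date(2001, 1, 15)},
    {"filed": date(2000, 6, 10), "closed": date(2002, 4, 2)},
    {"filed": date(2001, 2, 20), "closed": date(2002, 9, 30)},
]

def average_processing_days(records):
    """Average number of days between filing and closure."""
    spans = [(r["closed"] - r["filed"]).days for r in records]
    return sum(spans) / len(spans)

print(round(average_processing_days(cases)))  # average days, filing to closure
```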
When compared with the other 23 agencies that are required to comply with the CFO Act, EPA’s total number of days to process a complaint from filing to closing ranked fifth highest in 2002.

EPA is taking steps to improve data system reliability. It contracted with a company to procure an EEO data system and to train employees on how to use the new software program. This software (EEO-Net) is designed to automate data entry, case tracking, and reporting requirements. The procurement process began in February 2002, and it was originally estimated that the new system would be in place and fully operational in June 2002. An EPA official told us that the EEO-Net system became operational on January 15, 2003. OCR is depending on this new system to alleviate many of the inaccuracy and inconsistency problems with discrimination complaint data. Its implementation is also expected to permit identification of trends, to alert both regional and headquarters staff members to problem areas, and to serve as an early warning system. According to EPA officials, the new system is expected to automatically and accurately generate data for completing EEOC’s Annual Federal Equal Employment Opportunity Statistical Report of Discrimination Complaints. The Air Force has successfully used the EEO-Net software program for over 3 years for military personnel and is installing the program for use with its civilian workforce. Officials at the National Labor Relations Board, Broadcasting Board of Governors, Government Printing Office, and EEOC have all recently installed the system and are pleased with the results thus far.

As discussed previously, data in the old system were not accurately entered, updated, or maintained by EEO specialists. In an interim effort to resolve these data problems, OCR hired a person whose responsibilities include entering, updating, and maintaining the data. OCR is also developing new performance standards for EEO specialists that rate them on inputting and maintaining the data. The new performance standards are intended to ensure that the data problems do not recur. Specialists are to be held accountable for maintaining accurate discrimination complaint data as part of their assigned duties.

According to OCR officials, EPA has never adopted standard operating procedures for processing internal complaints of discrimination, but it developed draft procedures in July 2001. Although these procedures are in draft form, OCR’s staff uses them as guidance. EPA officials said they were waiting until the EEO-Net software was fully operational to finalize the standard operating procedures. The system became operational in January 2003, but as of May 2003, the procedures were still in draft form. The draft standard operating procedures provide detailed step-by-step instructions for OCR’s staff to follow, from when a complaint is filed through final resolution. For example, Section II, “Checklist for Preparing Correspondence,” includes instructions on when and how to prepare mailings related to discrimination complaints. Section IV of the procedures addresses the steps necessary for OCR to process individual complaints, including steps to follow upon complaint receipt, complaint acknowledgment, and request for the EEO Counselor’s Report, and all subsequent steps of the process up to the complaint’s resolution at the formal stage. The draft standard operating procedures also identify data that can be used by OCR for trend analysis and address the management and tracking of counselor assignments.
OCR’s staffing has increased from four to nine in the past 8 years, and the office plans to hire additional staff members. (See table 3.) EEOC regulations require that agencies provide sufficient resources to their EEO programs to ensure efficient and successful operation. EPA’s 2001 Federal Managers’ Financial Integrity Act report stated that EPA was unable to process complaints in a timely manner and identified this situation as a material weakness and an agency weakness. The most recent report states that OCR had hired additional staff members and made other changes, such as changing the contractors who conduct investigations, and that EPA now believes it can ensure the timely processing of discrimination complaints; the report recommends that this material weakness be closed.

OCR officials told us that additional staffing would help facilitate the timely processing of discrimination complaints. In June 2002, they said that they had two vacancy announcements out: one to recruit an additional GS-13 Equal Employment Specialist to process complaints and one for a GS-14 Senior Equal Employment Specialist to develop final agency decisions, prepare appeal briefs, and process complex complaint cases. OCR is currently planning to fill only the GS-14 position and, as of May 2003, the selection process was still under way. In addition, OCR embarked on a training effort in 2001 to increase the number of collateral-duty counselors. As a result, an additional 20 counselors were trained to serve as first points of contact for employees considering filing discrimination complaints. These counselors are not full-time; they perform counseling duties in addition to their other assigned duties. The EEO counselors’ responsibility is to ensure that complainants understand their rights and responsibilities under the EEO process. Specifically, the counselor must let complainants know that they can opt for precomplaint resolution through participation in ADR or EEO counseling. Counselors also determine the claim and bases raised by the potential complaint, determine the complainant’s timeliness in contacting the counselor, and advise the complainant of the right to file a formal complaint if ADR or counseling fails to resolve the dispute.

EPA has not processed complaints in a timely manner and has had a long-standing backlog of overdue cases. The backlog was caused in part by problems with contractors whose investigations did not meet the evidence standards outlined in EEOC regulations. According to OCR officials, some of the investigations performed by companies formerly used by the office failed to provide the adequate factual records required by EEOC regulations. As a result, investigations that lacked the needed facts had to be reassigned and redone, adding time to complaint processing. Because of these problems with incomplete and poorly done investigations, OCR terminated contracts with certain investigative firms. In June 2002, OCR contracted with a new company to conduct discrimination complaint investigations. An OCR official told us that the company has demonstrated its ability to perform thorough and complete investigations that meet EEOC’s standards. OCR now contracts with six companies to investigate complaints and is satisfied overall with the investigations performed.
Also, OCR’s draft standard operating procedures for processing complaints of discrimination require that, before starting an investigation, OCR provide each investigator a copy of its guidelines for conducting EEO investigations to ensure that investigators understand what is required of them. The office currently has a blanket purchase agreement in place to hire four additional companies to perform investigations. Because the contract started relatively recently, an OCR official said that OCR did not yet have enough statistical data to evaluate contractor effectiveness. However, OCR said that the situation regarding investigations was satisfactory.

In addition, EPA helped speed the adjudication of backlogged cases by creating a special task team in May 2001. The initial focus of the team’s efforts was on the completion of investigations and preparation of final agency decisions on backlogged complaints. Officials provided a final report that discussed the team’s actions and how its stated mission was accomplished. At the beginning of the team’s work, 139 discrimination complaints were identified as active with investigations not completed for 180 days or more as of June 1, 2001. The report said that 45 reports of investigation were completed and 17 more were drafted and under review; 18 final agency decisions were issued and an additional 11 were drafted and under review; 10 cases were settled; 9 cases were withdrawn or dismissed; and 27 complainants had requested EEOC hearings. Only 12 of the 139 complaints were still waiting for completion of an investigation.

In February 2002, OCR also selected a contractor to augment OCR’s staff by providing EEO counseling, performing EEO investigations, and writing draft agency decisions. All draft agency decisions written by the contractor are to be reviewed and revised, if necessary, by the Office of General Counsel. OCR officials said that OCR staff members are required to review draft decisions written by the contractor within 48 hours. EPA officials said that they hope this policy will help prevent discrimination complaint case backlogs from recurring. Moreover, OCR says it now works during the early stages of the complaint process to move discrimination complaints into the ADR process, as appropriate. If ADR is successful, it can obviate the need for investigations.

In the event that a manager or employee is formally found to have discriminated, EPA is supposed to determine on a case-by-case basis whether the individual should be disciplined. However, EPA does not have a process in place to review discrimination complaint settlements to determine whether any manager or employee has participated in improper conduct and should be disciplined. Agency officials said that settlements are no-fault agreements in which no one admits any wrongdoing, and that no process is in place to make such determinations. We recognize that EEO complaints can be settled without there having been discriminatory conduct involved in the case. For example, an employee who is not promoted may believe the reason was his or her race and file an EEO complaint on this basis. When the case is reviewed, the agency could find that while race was not a factor, the manager did not adhere to other requirements of the merit promotion system. As a result, the agency could settle the complaint by agreeing to recompete the promotion and to ensure that all rules are followed and that the complainant receives fair consideration in the recompetition.
However, the possibility that a settlement is unrelated to discriminatory conduct does not alter a basic fact: without a process to determine whether discrimination was involved, settlements that do involve discrimination may never be identified as such. EPA officials said that they give managers the opportunity to change their behavior through training rather than taking disciplinary action. For example, in 2001 senior agency officials expressed concerns about managers’ conduct and their compliance with Title VII of the Civil Rights Act of 1964, as amended. These concerns led to a contract with EEOC to conduct a 2-day mandatory training program for all 1,600 EPA managers in June 2002. EPA officials said that the training has improved managers’ interaction with employees. However, it is unclear whether the improved interaction will result in fewer discrimination complaint filings. Officials also said that the agency has EEO performance standards for Senior Executive Service managers. Managers are evaluated according to their efforts to support EEO and fairness as part of the process for determining who gets awards. In addition, since 2001 EPA has required all employees to sign statements acknowledging the agency’s zero-tolerance policy toward discrimination or harassment by managers, supervisors, or employees.

Accountability is a cornerstone of results-oriented management. Because EPA’s managers set the conditions and terms of work, they should be accountable for providing fair and equitable workplaces, free of discrimination and reprisal. If EPA’s managers are not held accountable for their actions in cases in which discrimination has occurred, employees may not have confidence in the agency’s EEO disciplinary process and may be unwilling to report cases of discrimination. Further, our past work has found that agencies that promote and achieve a diverse workplace attract and retain high-quality employees. For public organizations, this translates into effective delivery of essential services to communities with diverse needs. Leading organizations understand that they must support their employees in learning how to effectively interact with and manage people in a diverse workplace. Fostering an environment that is responsive to the needs of diverse groups of employees requires identifying opportunities to train managers in techniques that create a work environment that maximizes the ability of all employees to contribute fully to the organization’s mission. A high-performing agency maintains an inclusive workplace in which perceptions of unfairness are minimized and workplace disputes are resolved by fair and efficient means. One way to foster openness and trust among employees is to have systems in place that hold employees responsible for discriminatory actions.

Agriculture Process: In February 2003, EEOC issued a report on the Department of Agriculture’s (Agriculture) EEO program. In this report, EEOC applauded Agriculture for “holding managers accountable for their actions and disciplining them where appropriate.” Since January 1998, Agriculture has reviewed cases in which discrimination was found or in which there were settlement agreements to determine whether employees should be disciplined. The agency’s regulations state that managers, supervisors, and other employees are to be held accountable for discrimination, civil rights violations, and related misconduct, as well as for ensuring that Agriculture’s customers and employees are treated fairly and equitably.
Agriculture’s agencies are to take appropriate corrective or disciplinary action, such as reprimands, suspensions, reductions in grade and pay, or removal. Final decisions containing a finding of discrimination, along with settlement and conciliation agreements, are referred to the agency’s Human Resources Management Office for appropriate action. This office monitors corrective and disciplinary actions taken in EEO and program discrimination matters. As a result of this process, Agriculture has taken over 200 corrective and disciplinary actions against managers and other employees since 1998, including removals, suspensions, and letters of reprimand.

IRS Process: The Internal Revenue Service (IRS) offers another example of an agency process to review settled EEO complaints to assess whether employees should be held accountable. Since July 1998, IRS has been reviewing cases in which discrimination was found or in which there were settlement agreements to determine whether the discrimination was intentional. Where an employee has been found to have discriminated against another employee (or against a taxpayer or a taxpayer’s representative), the Internal Revenue Service Restructuring and Reform Act of 1998 provides that the individual be terminated (Pub. L. 105-206, Section 1203, July 22, 1998). Only the IRS Commissioner has the authority to reduce termination to a lesser penalty. If there is a finding of discrimination, a settlement agreement is reached, or EEO issues are raised during the negotiated grievance process, IRS’s Office of Labor Relations refers the matter to the National Director, EEO Diversity, Discrimination Complaint Review Unit. Local and headquarters EEO offices can also refer cases to the unit. This review is designed to alert management to any EEO-related misconduct regardless of whether an employee formally pursues a remedy. When it receives a case, the unit determines whether formal review and fact-finding are required before making a decision. If so, the case file is forwarded to the Department of the Treasury’s Inspector General for Tax Administration, with a copy of the allegation referral form sent to Labor Relations. Formal reviews are to be completed within 60 days. If the unit finds no potential violations, Labor Relations coordinates with the head of the involved office, and the office head is responsible for determining the appropriate administrative disposition. The unit also conducts a limited review of referred cases at the precomplaint stage; after a formal complaint, formal withdrawal, or lapsed case due to employee inaction; or if there was no finding of discrimination.

Besides not having a process to determine whether managers discriminated in settled cases, EPA does not have a process to track or routinely report data on disciplinary actions taken against managers for discrimination or other types of misconduct. Data of this nature are important because they can be a starting point for agency decision makers in understanding the nature and scope of workplace issues involving discrimination, reprisal, and other conflicts and problems, and they can help in developing strategies for dealing with those issues. Under the No FEAR Act, signed into law in May 2002, agencies are required to accumulate additional information about discrimination cases.
The provisions of this act take effect October 1, 2003, and will require EPA to begin tracking and accumulating data on disciplinary actions resulting from discrimination. Specifically, the act requires that federal agencies file annual reports with Congress detailing, among other things, the number of discrimination or whistleblower cases filed with them, how the cases are resolved, and the number of agency employees disciplined for discrimination, retaliation, or harassment. These data requirements should alert agencies and employees that they are accountable for their actions in cases involving discrimination, retaliation, or harassment. This legislation demonstrates Congress’s high level of interest in discouraging discriminatory conduct and reprisal at federal agencies and underscores the need for managers to be held accountable for such conduct.

EPA did not have accurate data on the numbers and types of discrimination complaints made by its employees, and this in turn made it difficult to discern trends in workplace conflicts, understand the sources of conflict, and plan corrective actions. These types of data are useful in helping to measure an agency’s success in adhering to merit system principles, treating its people fairly and equitably, and achieving a diverse and inclusive workforce. Having a data software system that can track cases and provide EEO managers with the information needed to discern trends and develop policies is critical. EPA is relying on its newly procured EEO data system to overcome its data accumulation and reporting problems. Moreover, the agency is relying on that system to provide the capability to track cases and identify trends that may indicate problem areas. This, in turn, underscores the importance of the new system’s effective operation.

EPA has never had standard operating procedures for EEO complaint processing and has been using draft procedures prepared in July 2001. The agency should finalize the draft procedures to help ensure that OCR staff members know what they are to do and that a uniform process is used nationwide.

EPA does not have a process to determine whether managers should be disciplined for their actions in settled EEO complaint cases. If agency employees have the impression that EPA’s discrimination complaint process does not discipline managers who engage in discriminatory conduct, employees may be less willing to participate in the process. Employees are less likely to file discrimination complaints if they perceive that there is no benefit from doing so or if they fear reprisal. A specific process that holds managers accountable for discriminatory conduct may enhance employee confidence in the EEO environment and demonstrate the agency’s commitment to providing a fair and discrimination-free environment.

We recommend that the EPA Administrator direct that OCR evaluate its new EEO software system to ensure that it has resulted in a reliable system for tracking cases and accumulating accountability data for EEOC. In addition, the Administrator should direct that the draft standard operating procedures for handling EEO complaints be finalized. The Administrator should also direct that a process be developed to assess every case in which discrimination is found or allegations of discrimination are settled to determine whether managers, or other employees, should be disciplined.

In a June 11, 2003, letter (see app. I), the Director of EPA’s Office of Civil Rights commented on a draft of this report.
EPA generally agreed with the report’s findings. EPA said that the report shows that the agency has made considerable progress in addressing the backlog of cases involving alleged discrimination and that it believes it has in place the procedures and resources to ensure that current and future complaints are processed in a timely manner. EPA’s comments did not mention our recommendation to evaluate its new EEO software system to ensure that it meets the agency’s needs for tracking cases and accumulating accountability data. The comments also did not address our second recommendation, about finalizing the standard operating procedures for handling EEO complaints, which have been in draft for 2 years and would be EPA’s first set of official procedures. As we discussed in the report, action on both of these recommendations is important to an effective EEO program at EPA.

Regarding the recommendation to establish a process to assess whether managers or other employees should be disciplined in cases in which discrimination is found or allegations are settled, EPA said that it would develop policies and procedures that will allow it to address effectively the issue of disciplinary action against any manager or employee found to have discriminated. This action, when completed, should address the part of the recommendation related to disciplinary action when discrimination has been found. However, it does not address the part of the recommendation dealing with the need to assess whether disciplinary action should be taken in cases in which allegations of discrimination are settled. As discussed above, a process that holds managers accountable for discriminatory conduct should enhance employee confidence in the EEO environment and demonstrate the agency’s commitment to providing a fair and discrimination-free environment. EPA also made several technical comments, which we incorporated in the report where appropriate.

As agreed with your offices, unless you publicly announce its contents earlier, we will make no further distribution of this report until 30 days after its date. At that time, we will send copies to the Administrator of EPA and to interested congressional committees and members. We will also make copies available to others upon request. In addition, the report will be made available at no charge on the GAO Web site at http://www.gao.gov. If you have questions, please contact me at (202) 512-6082 or rezendesv@gao.gov, or contact Thomas Dowdal, Assistant Director, at (202) 512-6588 or dowdalt@gao.gov. Jeffery Bass, Karin Fangman, and Anthony Lofaro made key contributions to this report.
Minority employees at EPA reported for a number of years that the agency had discriminated against them based on their race and retaliated against them for filing complaints. These issues were aired at hearings held by the House Committee on Science, at which EPA said it would take actions to ensure a fair and discrimination-free workplace. GAO was asked to review (1) the accuracy of EPA’s equal employment opportunity (EEO) data, (2) various issues about the processes used to resolve discrimination complaints, and (3) the disciplinary actions taken against managers who discriminate.

EPA had difficulty providing accurate EEO data because of a data system that the agency believes was unreliable and was further compromised by data entry problems. When GAO identified problems with the information EPA provided, the agency manually reconstructed data for fiscal years 1995 through 2002. The reconstructed data indicate that during this period 548 EPA employees filed 679 discrimination complaints, and the agency closed 588 complaints. Complaints were closed with 125 dismissals, 48 withdrawals, 178 settlements, 5 remands, and 222 agency decisions not supporting the complainant. GAO cannot attest to the accuracy of these numbers but believes they are indicative of the situation at EPA. EPA recently procured new software to facilitate accurate tracking and reporting of EEO information and believes the software will rectify the data problems.

EPA has never had official standard operating procedures for complaint processing, which are required by regulation. Rather, EPA said that complaints were processed under general guidance provided by the Equal Employment Opportunity Commission (EEOC) until draft procedures, prepared in July 2001, were put into use.

EPA has taken a long time to process discrimination complaints, with cases averaging 650 days from filing to closing over fiscal years 1995-2002. A major contributing factor was that investigations, which are supposed to be completed in 180 days, averaged a total of 465 days. The firms used by EPA failed to conduct thorough investigations, and their reports did not provide complete or factual accounts of the incidents leading to the complaints. As a result, investigations often had to be redone, adding to the amount of time needed to complete them. Over the last year, EPA has discontinued the use of these firms and contracted with new ones that it believes are doing a much better job. EPA has also increased its own staffing for EEO matters to try to reduce processing times.

EPA does not have a specific process for determining whether managers involved in discrimination complaints did in fact discriminate and, if so, whether they should be disciplined. EPA officials told us that they have relied on training to rectify and prevent discriminatory conduct. Other agencies have formal processes to evaluate each case in which discrimination is found or a complaint is settled to determine whether discipline is warranted. EPA will be required to collect and report the number of agency employees disciplined for discrimination or harassment under the provisions of the Notification and Federal Employee Antidiscrimination and Retaliation Act, effective in October 2003. A process like those in place at other agencies should also help EPA meet this requirement.
The federal government’s civilian real-property holdings include thousands of leased office buildings and warehouses across the country that cost billions of dollars annually to rent, operate, and maintain. As the federal government’s principal landlord, the General Services Administration (GSA) acquires, manages, and disposes of real property on behalf of many civilian federal tenants. In this role, GSA is responsible for executing, renewing, and terminating contracts for leased properties. As of fiscal year 2014, GSA leased 6,444 office and 415 warehouse spaces (totaling 187.6 million rentable square feet) from the private sector. However, some federal entities have independent statutory leasing authority, that is, the authority to lease space independently of GSA. Congress provides this authority in law through a federal entity’s enabling legislation or through annual appropriations acts. Figure 1 illustrates examples of the types of spaces that federal entities lease directly from private owners.

Within the executive branch, the Office of Management and Budget (OMB) and GSA provide leadership for the management of federal real property. As the chief management office for the executive branch, OMB is responsible for overseeing how agencies devise, implement, manage, and evaluate programs and policies. For real property management, OMB develops and provides direction to executive branch agencies and is responsible for reviewing their progress. GSA has two key leadership responsibilities related to real property management for the federal government. First, GSA’s Public Buildings Service functions as the federal government’s landlord, as described above. Second, GSA’s Office of Government-wide Policy is tasked, among other things, with identifying, evaluating, and promoting best practices to improve the efficiency of management processes.

To promote the efficient and economical use of federal government real property, in 2004 the President issued Executive Order 13327, establishing the Federal Real Property Council (FRPC), composed of senior management officials from specified executive branch departments and agencies covered by the Chief Financial Officers Act of 1990 (CFO Act), including GSA, and chaired by OMB. The executive order established the FRPC with the goals of developing guidance, facilitating the implementation of agencies’ asset-management plans, and serving as a clearinghouse for leading practices. The executive order also directed GSA, in consultation with the FRPC, to establish and maintain a single, comprehensive database describing the nature, use, and extent of all real property under the custody and control of executive branch agencies. To meet this directive, GSA’s Office of Government-wide Policy established the Federal Real Property Profile (FRPP) as a government-wide real property inventory database. It also provided guidance to the FRPC member agencies about how to annually report data on real property under their custody and control for inclusion in the FRPP. Since the formation of the FRPP, only the FRPC member agencies have been required to annually submit their real property information to the database. According to GSA officials and OMB staff, other federal entities may voluntarily submit data to the FRPP. In recent years, OMB has also undertaken several initiatives and issued guidance to federal entities to improve federal real-property management, specifically targeting offices and warehouses.
In May 2012, OMB issued a memorandum directing FRPC member agencies not to increase the size of their civilian real-estate inventory, stating that any increase in an agency’s total square footage of civilian real property must be offset through consolidation, co-location, or disposal of space from that agency’s inventory. In March 2013, OMB issued another memorandum establishing implementation procedures for this policy, called Freeze the Footprint. This memorandum clarified that agencies were not to increase the total square footage of their domestic office and warehouse inventory compared with a fiscal year 2012 FRPP baseline. Most recently, in March 2015, OMB issued its National Strategy for the Efficient Use of Real Property (National Strategy). The National Strategy employs a three-step policy framework to improve the cost-effectiveness and efficiency of the federal real property portfolio: (1) freeze growth in the portfolio; (2) measure the cost and utilization of real property assets to provide performance information and support more efficient use; and (3) reduce the size of the portfolio through asset consolidation, co-location, and disposal. To assist with the third step, OMB issued the Reduce the Footprint policy, which requires agencies to develop a Real Property Efficiency Plan that, among other things, describes an agency’s overall approach to managing real property and establishes reduction targets for office and warehouse space, disposal of owned buildings, and the adoption of design standards to optimize owned and leased domestic office space usage. Only FRPC members are required to participate in these government-wide real property reform efforts.

As a result of these efforts, we acknowledged in the February 2015 update to our High-Risk Series that the federal government has demonstrated a high level of leadership commitment to improving real-property data to support decision making and has made some progress in increasing its capacity to improve data reliability. In our previous assessment of the FRPP, we found that FRPP data can be used in a general sense to track assets and provide an overall perspective on FRPC members’ real property portfolios. However, we raised concerns about the quality of the FRPP data for some key variables, including utilization, condition, and annual operating costs. As such, we recommended that GSA develop and implement a plan to improve the FRPP so that the data collected are sufficiently complete, accurate, and consistent. Most recently, we found that certain key FRPP data elements, such as utilization and status, continue to be inconsistently collected and reported by agencies. We recommended that GSA make transparent through its FRPP documents how its mission to provide space to federal agencies affects the reporting of the utilization and status data elements in the FRPP. GSA has taken steps to improve the reliability of FRPP data, but those efforts are ongoing.

According to GSA officials and OMB staff, neither GSA nor OMB maintains a comprehensive list of federal government entities with independent leasing authority, and neither is required to do so. GSA prepared a partial list of 33 federal entities with independent leasing authority for its own purposes in 2009, but this list, according to GSA, was not intended to be a complete list of all entities.
The FRPP offers one possible means of determining which federal entities have independent leasing authority, as it has a data field that indicates whether a federal entity uses its own authority to lease real property. However, this information is incomplete because only agencies covered by the CFO Act, which are FRPC members, are required to annually submit their real property information to the FRPP. Federal entities outside FRPC membership can voluntarily submit real property data annually to the FRPP, according to GSA officials and OMB staff, but few do. We found that three non-FRPC member entities reported to the FRPP in fiscal year 2014 that they independently leased office and warehouse spaces. Other non-FRPC member entities have submitted data to the FRPP inconsistently. For example, the U.S. Postal Service had previously chosen to submit some data to the FRPP, but no longer does.

The population of federal entities with independent leasing authority can vary over time. As such, any list that is not regularly updated may not be comprehensive because it captures only a snapshot in time. For example, some federal entities may have acquired leasing authority since the list was initially compiled. Also, leasing authority is specific to each federal entity and may change over time. For example, legislation may be enacted to create a new federal entity or to change the nature of an existing entity’s leasing authority; laws may place conditions on the authority, such as specifying the location, type of space, or terms under which the entity can exercise it. A federal entity’s leasing authority can range from an agency-wide, general authority to acquire real property (e.g., the U.S. Postal Service and the Tennessee Valley Authority), to authority given to specific components of a department or agency (e.g., FAA within the Department of Transportation), to authority for specific types of property (e.g., the Commodity Credit Corporation within the Department of Agriculture has leasing authority for office and storage space), to authority for a specific time frame.

To obtain more complete and reliable information on federal entities with leasing authority, we administered a survey to 103 civilian executive branch agencies and other federal entities that met our selection criteria. Sixty of the entities self-identified as having independent authority to lease real property, of which 52 reported having the authority to lease domestic office or warehouse space. Of these, 37 federal entities reported that, as of October 1, 2015, they were using their authority to lease 944 offices (approximately 16.6 million rentable square feet of space, with an annual rent of $556 million) and 164 warehouses (approximately 3.3 million rentable square feet of space, with an annual rent of $39.8 million). The FAA alone leases 341 offices and 70 warehouses, which account for 36 percent and 43 percent, respectively, of the total numbers of independently leased offices and warehouses. The remaining 36 entities each use their authority to lease up to 91 offices and 36 warehouses. See table 6 and table 7 in appendix II for a full listing of the 52 federal entities that reported having authority to lease office and warehouse space independently, legal citations for their authority, and summary statistics about the offices and warehouses they leased. Of the 37 federal entities that reported using their leasing authority to lease office or warehouse space, 18 also reported using GSA to lease a portion of their space.
There are a variety of reasons why federal entities use GSA to lease on their behalf in certain circumstances and use their independent leasing authority in others. For example, NASA officials told us NASA is legally required to use GSA to lease office and warehouse space inside the National Capital Region, but it may choose to use its own authority elsewhere. In contrast, according to FAA officials, the majority of FAA's office spaces are leased independently, though FAA may use GSA when it does not have the staff resources available to administer the lease process or when private sector availability is limited in certain markets. They said that obtaining a lease using independent leasing authority can generally be completed more quickly than acquiring space through GSA. According to FAA officials, when faced with tighter time frames, FAA can fulfill space needs in as little as 4 months, whereas it can take over a year to lease through GSA. In our survey, 25 federal entities that are not members of the FRPC reported having independent authority to lease offices and warehouses. Of those entities, 19 reported using their authority to lease domestic offices and warehouses. Combined, these 19 non-FRPC member entities reported that they leased 243 offices and warehouses (approximately 8.3 million rentable square feet of space and an annual rent of $303.4 million) as of October 1, 2015. Table 1 provides a listing of the 25 federal entities that reported having independent leasing authority and are not members of the FRPC, along with the amounts of domestic office and warehouse space they lease. For more detailed information about these entities, see table 7 in appendix II. According to GSA's Office of Government-wide Policy, the FRPC's goals are that the FRPP database: leads to an increased level of agency accountability for asset management; allows comparing and benchmarking across various types of real property; and gives decision makers, including Congress, OMB, and federal entities, accurate and reliable data needed to make asset management decisions, including disposing of unneeded properties, in one comprehensive database. However, the FRPP's incomplete data set reduces its effectiveness as an oversight and accountability mechanism for entities with independent leasing authority. It is not a comprehensive database of all federal real property because, as noted previously, only the specified entities that are covered under the CFO Act and that are also the member agencies of the FRPC have been required to annually submit their real property data. Other entities with independent leasing authority may voluntarily report their data, according to GSA officials and OMB staff, but, as discussed, few do. As a result, the scope of the real property portfolios of all other entities with independent leasing authority is largely unknown. For example, only two of the 19 entities identified above that reported using their authority to lease domestic offices and warehouses—the Smithsonian Institution and the Tennessee Valley Authority—voluntarily reported data on their independently leased offices and warehouses to the FRPP in fiscal year 2014. As such, the remaining 17 entities' 172 independently leased offices and warehouses (approximately 6.6 million rentable square feet of space and an annual rent of $248.4 million) were not accounted for in the FRPP. The Standards for Internal Control in the Federal Government state that management should use quality information to achieve the entity's objectives.
To do so, management should design a process to identify the information requirements needed to achieve its objectives. One key step is to ensure that quality information is obtained. Quality information is appropriate, current, complete, accurate, accessible, and provided on a timely basis. Management should use quality information to make informed decisions and to evaluate the entity's performance in addressing key objectives and assessing risks. The gaps in FRPP's data limit the effectiveness of FRPP as a decision-making tool for policy leaders because the data are not a complete and accurate representation of the entirety of the federal government's real property holdings. According to OMB staff, the inclusion of data from federal entities not covered by the CFO Act would help the FRPP become a more comprehensive database, but the staff also noted that the majority of real property is owned or leased by CFO Act agencies. According to these staff, the federal government has limited resources to provide technical assistance and training to entities regarding submitting data to the FRPP; therefore, including them would need to be accomplished in an efficient way. For example, GSA officials and OMB staff said that if more federal entities started submitting data to FRPP, GSA might have to divert some attention from its current efforts in order to train those entities that have not contributed to the FRPP before. In our review, most of the federal entities that reported having independent leasing authority, other than the entities covered under the CFO Act, are members of the Small Agency Council. The Small Agency Council is a voluntary association of about 80 independent federal entities, generally with fewer than 6,000 employees each, that represents the entities' collective management interests. The Small Agency Council provides these smaller federal entities a line of communication with key decision makers, including OMB. GSA officials who manage the FRPP said that the Small Agency Council may be able to help coordinate its members' involvement in the FRPP. For example, the Small Agency Council could provide technical assistance to help its members collect and submit their real-property data to the FRPP and could facilitate the process on behalf of GSA and OMB. In our sample, the lease rates of most of the 37 independent leases we selected for review were less costly than or comparable to matched GSA and private sector rates. However, these results varied by region: selected independent leases in the National Capital Region were generally less costly than or comparable to matched private sector lease rates, but outside the National Capital Region, independent leases were more likely to be above the private sector market rate. While our selection of 37 leases cannot be generalized to the universe of all independent leases, these leases provide examples of how independent leases compare with GSA and private sector leases. We reviewed 37 selected independent leases across seven federal entities. Of these leases, 14 (38 percent) had rates that were less costly than matched GSA leases, and 11 (30 percent) had comparable rates. The remaining 12 leases (32 percent) had rates that cost more than matched GSA leases. Many selected independent leases were at or below matched GSA rates in the National Capital Region as well as in our other selected metropolitan areas. (See table 2.) Further, three of the four federal entities with more than two leases had most of their lease rates comparable to or less costly than GSA rates.
(See table 3 and fig. 2.) Of the leases we analyzed, more independent leases were less expensive than matched GSA leases (14) than were more expensive (12). Based on our analysis and interviews, we identified several possible factors that may explain why some of the independent leases we analyzed were less expensive than matched GSA leases: Aspects unique to GSA leasing: Officials from GSA and two other federal entities said that aspects specific to GSA's leasing process itself may contribute to the higher lease rates. Specifically, GSA officials stated that GSA uses standardized lease documents that include clauses that can be more rigorous than those in leases provided by private sector landlords. For example, GSA leases could include clauses with higher—and thus possibly more expensive—energy conservation, security, and seismic safety requirements. The 37 selected independent leases we reviewed include a spectrum of different lease types, including leases that were provided by the landlord. GSA officials noted that GSA-provided leases shift more responsibility and risk to the landlord, and these requirements in GSA leases may increase lease rates. For example, private sector officials told us that GSA leases allow the government unilateral rights to substitute tenants, which increases landlord uncertainty. This uncertainty may result in less competition and higher rental payments. In addition, we previously found that GSA's leasing process is lengthy and that in some cases the process can take up to 8 years. This long process can result in potential landlords dropping out of the competition for GSA tenants, limiting competition and increasing rates. Tenant improvement allowances: We found tenant improvement allowances in about 64 percent of the matched GSA leases but in only about 43 percent of the independent leases we analyzed. In a recent report, we also found that nearly 60 percent of over 4,000 GSA leases included tenant improvement allowances. GSA regional officials told us that tenants typically choose to amortize at least some of these tenant improvements over the term of the lease and ask landlords to assume the risk—GSA does so on behalf of its tenant agencies—of customizing a space according to specific requirements. Because landlords that lease to the federal government often assume this responsibility and obtain the resources required to construct, operate, and maintain real property over the course of its lifecycle, federal entities then pay private-sector interest rates as they pay for their improvements over the firm term of their GSA lease. Further, those independent leases that did include tenant improvements averaged $2.93 per square foot per year over the amortization period, while matched GSA leases averaged $4.01 ($1.08, or about 37 percent, more), and this can contribute to more costly leases for GSA. In one instance, an analysis of an independent lease and its matched GSA lease (the leases have similar term lengths and approximate sizes and are in nearby locations) showed that GSA paid nearly $400,000 in tenant improvements, while its matched independent lease had no tenant improvement costs. Officials from two entities, including GSA, stated that GSA leases often include tenant improvement allowances.
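To illustrate the amortization mechanics described above, the following sketch computes the level annual per-square-foot cost of a tenant improvement allowance financed over a lease's firm term. The allowance, interest rate, and term shown are hypothetical illustrations, not figures from our analysis.

def annual_ti_cost_per_sq_ft(allowance_psf, annual_rate, firm_term_years):
    """Level annual payment per square foot that amortizes a tenant
    improvement allowance over the firm term at the given rate."""
    if annual_rate == 0:
        return allowance_psf / firm_term_years
    return allowance_psf * annual_rate / (1 - (1 + annual_rate) ** -firm_term_years)

# Hypothetical example: a $30-per-square-foot allowance amortized over a
# 10-year firm term at an assumed 8 percent private-sector rate adds
# about $4.47 per square foot to each year's rent.
print(round(annual_ti_cost_per_sq_ft(30.0, 0.08, 10), 2))  # 4.47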
Periods of free rent: The independent leases we analyzed had periods of free rent built into them more frequently than the matched GSA leases did. These periods of free rent reduce the average lease rate over the term of the lease. According to GSA officials, landlords typically prefer to provide periods free of rent rather than decrease the rate charged per square foot. These officials said landlords want to keep that rate as high as possible for when their property is advertised to other potential tenants. Our analysis of the selected independent leases and GSA leases showed that only 2 of 56 (4 percent) matched GSA leases had periods of free rent, while 14 of 37 (38 percent) selected independent leases did. On average, these 14 independent leases had periods of free rent for about 9 percent of the total duration of their tenancy, and the majority of these free-rent periods fell within the range of 4 to 6 percent of the total duration of the lease. These periods of free rent can have a large effect on total rent paid—for example, in one independent lease, the federal entity had a period of free rent for the first 5 months of its 10-year tenancy (4 percent of the lease's total duration), resulting in about $2.8 million less paid over the term of the lease. Real estate brokers: Officials from one federal entity with independent leasing authority said that their use of private sector real-estate brokers with knowledge of particular market areas may have contributed to lower leasing rates. This entity's officials said they often use brokers with robust knowledge of local markets to find the best properties that also meet their entity's requirements. Additionally, officials from one entity said that when there are multiple offers, their broker would use the lowest offer as leverage to negotiate for lower rates with other potential landlords. As we previously reported, GSA does use private sector brokers for some leases, although those brokers are national in scope. GSA officials also told us that these brokers are limited in the extent to which they can use one offer as leverage to solicit lower offers from other bidders. Lastly, it is important to note that for both GSA and independent leases there are costs paid by the tenant outside of rent. For example, federal tenants pay GSA a fee for its services related to the leased space, and we found in January 2016 that GSA's fee is between 5 and 7 percent. In contrast, an entity using independent leasing authority reported that there are administrative costs, such as paying its staff to perform various duties associated with the lease acquisition process. Most independent leases we reviewed (27 of 37) had rates that were comparable to or below matched private sector rates. (See fig. 3.) Across the different regions, however, there was variation. We looked at 19 leases in the National Capital Region and found that 16 (84 percent) were comparable to or less costly than matched private-sector lease rates. Only 3 (16 percent) of the leases in the National Capital Region were more costly. In contrast, of the 18 leases we looked at in the other three metropolitan areas, 7 (39 percent) were more costly than matched private sector rates. (See table 4.) Outside of the National Capital Region, private sector real-estate professionals from a real estate firm said that a lower supply of office space coupled with the federal government's smaller market share may result in higher lease rates. These professionals said that the federal government has the largest market share in the National Capital Region and can therefore get favorable rates in that market.
These professionals added that the region's large supply of offices facilitates competition between landlords, which can result in more favorable lease rates for the federal government. GSA officials said that in other markets there is sometimes no suitable office space available. In several cases, a complete lack of office space required the government to lease retail space instead. Retail space can be more expensive than office space, and leasing retail space did result in higher rental payments in some instances. Additionally, GSA officials stated that although the government is highly reliable with rental payments and is often seen as a desirable tenant, landlords outside the National Capital Region might be uneasy about and unfamiliar with working with the federal government. In a January 2016 report, we found that GSA was also able to secure leases in the National Capital Region generally at or below market rates. We previously found that policies and procedures should establish expectations for strategically planning acquisitions and managing the acquisition process. Failing to address these principles can contribute to missed opportunities to achieve savings, reduce administrative burdens, and improve acquisition outcomes. Federal internal control standards also state that documentation of transactions enables management to control operations and make decisions that can help the organization run efficiently. We reviewed the Federal Management Regulation, GSA policies and procedures, and other applicable documents to develop a list of leading practices that all federal entities should incorporate into their real property leasing functions to help ensure that mission needs are met in a cost-effective and transparent manner. (See fig. 4.) In addition to leading practices, federal entities must follow all applicable laws related to real property management. For example, the recording statute requires that federal entities record the full amount of their contractual liabilities, including for leases, against funds available at the time the contract is executed. We previously found that two federal entities with independent leasing authority—the Commodity Futures Trading Commission and the Securities and Exchange Commission—did not fully comply with the recording statute in the way they recorded lease obligations against funds available at the time the leases were executed. Violations of the recording statute such as these can also result in Antideficiency Act violations if lease obligations exceed available budget authority at the time the lease is executed. When the FRPC was created, it was envisioned as an interagency collaboration of senior management officials who would develop guidance, facilitate the members' overall management of federal real property, and serve as a clearinghouse for leading practices. As noted previously, the FRPC is composed of senior management officials from the specified agencies covered by the CFO Act, including GSA, and is chaired by OMB. OMB staff told us that FRPC currently meets on a monthly basis and provides a forum to discuss, among other things, current real-property management policies and their implementation, member agencies' leading practices related to real property management, and technical issues associated with data reporting.
OMB staff said that the FRPC has been instrumental in improving the management of federal real property and in serving as a forum for disseminating related guidance on government-wide initiatives, such as the National Strategy for the Efficient Use of Real Property (National Strategy) and Reduce the Footprint. Our body of work has shown the importance and value of interagency collaboration. Federal entities can enhance and sustain their coordinated efforts by engaging in key practices, such as defining and articulating a common outcome and developing mechanisms to monitor, evaluate, and report on results. Successful collaboration also benefits from certain key features, such as including all relevant participants. However, there are currently limited opportunities for non-FRPC member entities to participate in federal coordination on real property management issues and in the sharing of leading practices that occurs in the FRPC. According to OMB staff, increasing the membership of FRPC to include all executive branch federal entities would essentially double the size of the group, which would make it more difficult to manage. OMB staff said that although their focus has been on the current FRPC member entities, which lease and hold the majority of federal real property, any land-holding federal entity would likely benefit from the collaboration and leading practices shared through FRPC. Currently, the guidance, resources, and leading practices shared during FRPC meetings are not publicly available, but they may be requested by non-FRPC member entities. As previously mentioned, non-FRPC members are also not required to participate in the National Strategy and Reduce the Footprint initiatives; however, OMB staff said that they would support and assist any non-FRPC member federal entity in complying with the policies, if asked. The Small Agency Council may also be in a good position to efficiently coordinate with smaller federal entities and share relevant leading practices, such as those shared with FRPC member entities. An official from the Small Agency Council said that such a coordinating role would fit within the Council's mission. Six of our eight selected federal entities had policies that generally aligned with leading government practices; two entities did not have any documented leasing policies. The two entities without leasing policies (NCUA and PBGC) are non-FRPC member entities. The six entities with leasing policies (five of which are FRPC member agencies; the sixth, FDIC, is not) either established their own policies or refer to the Federal Management Regulation or GSA policies, which include, among other things, space utilization standards, market research requirements, preferences for full competition, and criteria for evaluating leases as operating or capital leases. The following bullets illustrate how the policies of these six federal entities addressed each of the leading practices. Assess needs: All of the leasing policies we reviewed contained some element of assessing needs, including varied space-utilization targets. Plan ahead: All of the leasing policies we reviewed contained an element of acquisition planning, including a requirement for establishing clear criteria for evaluating lease offers. Ensure best value: Some entities we reviewed had more guidance regarding advertising than others.
For example, FAA policy references specific forms of advertising to consider, whereas USCG policy lists advertising as an example of the responsibilities of the real estate specialist but does not specify how advertising should occur. NASA, NOAA, and USPTO's leasing policies did not articulate strategies for advertising real-property leasing opportunities. Analyze and document the budget effects: Five of the six policies we reviewed specifically mentioned evaluating leases as either operating leases or capital leases in accordance with OMB Circular A-11. Some budget scoring policies were more detailed than others: for example, FAA policy includes a worksheet to score its leases, while FDIC's policy did not explicitly require that the lease's budget-scoring evaluation be documented. Although most of our selected entities had established policies consistent with leading government practices, we found numerous instances where individual lease files lacked evidence to support that the leading practices were actually used. Federal internal control standards also state that documenting transactions enables management to control operations and make decisions that can help the organization run efficiently. To evaluate the extent that entities' leasing practices aligned with leading government practices, we analyzed 30 selected lease files from 6 selected agencies. We looked for documentation of 10 sub-practices within the 4 leading practices. (See the list of sub-practices in figure 5.) Evaluating lease file documentation across these entities with independent leasing authority is inherently challenging because each entity may have its own specific requirements for what should be documented in a lease file and for what length of time. As such, we were only able to evaluate what was provided to us, which may not fully reflect all the steps the entity took to execute the lease. Accordingly, if there was documented evidence that all of the sub-practices were included in the lease file, we had reasonable assurance that the four leading practices were implemented. However, we found that the extent to which the lease files contained documented evidence of the sub-practices varied. For example, the vast majority of our 30 selected lease files lacked evidence of (1) advertisement of the need for space (88 percent); (2) determining whether the lease qualifies as an operating lease or a capital lease (83 percent); (3) documenting factors used to evaluate offers (77 percent); and (4) documenting the time frame for acquiring space (77 percent). However, most of our selected lease files contained some documented evidence of specifying the geographical area for needed space (80 percent) and conducting market research (53 percent). (See fig. 5.) The extent to which lease files documented alignment with the leading practices varied by entity, but none of the lease files contained evidence of full alignment with all the leading practices. For three of the six entities, all of the lease files were in partial alignment with leading practices. Two entities had some lease files that were in partial alignment and others that were not in alignment with leading practices. We were unable to determine whether the one selected USPTO lease file was in alignment with leading practices because the entity was unable to provide the required documentation. (See table 5.)
Without documented evidence in the lease files that all of the leading practices and sub-practices were used in the acquisition of leased space, it is difficult to determine whether the federal entities performed the leading practices that would help them achieve the best lease rate in a competitive and transparent manner. All selected entities we reviewed leased more office space per employee, on average, than GSA's recommended target. According to GSA officials, GSA recommends that federal entities allocate approximately 150 rentable square feet per employee for office space. The 30 selected leases we reviewed averaged more than double the GSA-recommended target per employee. Figure 6 compares the average rentable square feet per employee for our selected leases (grouped by entity) to the GSA target. In most of the 30 selected leases we reviewed, more office space was leased per employee, on average, than the GSA-recommended target of 150 rentable square feet per employee. Twenty-eight of the 30 selected office leases analyzed for space allocation exceeded the GSA target, and two offices met the target. Both of the offices that met GSA's recommended target were FAA offices. Space allocations greater than GSA's recommended target do not necessarily equate to larger offices. For example, we visited all of our selected independently leased offices and found many vacant office spaces, which can inflate the per-employee space allocation. Figure 7 illustrates some of the vacant spaces we observed at the selected lease sites. NCUA officials told us that the number of employees at one of its locations is lower than it was when the lease was signed because of a staff reorganization. Additionally, an FDIC official told us that the sizes of the offices at one location were likely larger than the entity's design standards because FDIC accepted the space as it was, without significantly reconfiguring it. Officials from FAA and FDIC also described actions they are taking to increase space utilization. For example, FAA officials told us that the staff at one of our selected office locations was recently reduced from 13 to 5. In response to this downsizing, FAA officials said they are planning to reduce the office space by returning some space to the landlord. Officials at FDIC told us they had to temporarily increase the amount of space at one location from four floors to six floors when they brought on additional staff a few years ago in response to the financial crisis. In March 2015, after its staffing levels had decreased, FDIC exercised its right to terminate the lease on four of the six floors it was leasing and currently occupies two floors of that building. In addition, as part of the 2015 National Strategy, FRPC member entities are required to, among other things, establish space reduction targets for domestic office and warehouse space using FRPP data and issue entity-specific maximums for usable square feet per workstation for leased domestic office space. OMB staff said that FRPC has created an effective forum for promoting more efficient space-use standards that are key to the success of the National Strategy. However, the National Strategy does not extend to non-FRPC members. Though not required to do so, non-FRPC member entities may also benefit from the leading practices shared at the FRPC to modify their internal policies and meet utilization targets for the space they lease.
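To make the utilization measure concrete, the following sketch applies the calculation we used—rentable square feet divided by the number of assigned employees—against GSA's recommended target of 150 rentable square feet per employee. The office figures below are hypothetical.

GSA_TARGET_RSF_PER_EMPLOYEE = 150

def space_utilization_rate(rentable_sq_ft, employees):
    """Rentable square feet of office space per assigned employee."""
    return rentable_sq_ft / employees

# Hypothetical office: 9,000 rentable square feet with 30 assigned
# employees yields 300 rentable square feet per employee, or double
# the GSA-recommended target.
rate = space_utilization_rate(9_000, 30)
print(rate, rate / GSA_TARGET_RSF_PER_EMPLOYEE)  # 300.0 2.0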
Having complete and accurate data on the nature, use, and extent of federal real property assets is important for making informed decisions and for identifying and addressing challenges. Currently, there is no mechanism to track federal entities with independent leasing authority and their real property assets, resulting in an information gap that can lead to a lack of oversight and accountability. This is particularly true for the 25 federal entities identified through our survey that are outside the FRPC's membership. FRPP was created to be a single, comprehensive database of the federal government's real property assets, but it cannot achieve that purpose without data from all federal entities, including those with independent leasing authority. Currently, most of the 25 federal entities that are outside the FRPC's membership do not report their real property data to FRPP. FRPC members coordinate efforts and share leading practices, and OMB staff said that the FRPC has been critical to improving real property management since its creation through executive order in 2004. However, FRPC's membership has remained limited to agencies covered by the CFO Act. We found that, among the selected entities we reviewed, the FRPC member entities were more likely than non-member entities to have leasing policies that aligned with leading practices. Increasing FRPC participation would allow all federal entities to benefit from the collaboration and sharing of leading practices and would increase the completeness of the FRPP. However, despite the benefits to any federal entity with a real property program, OMB staff noted that these federal entities represent a small overall share of the federal government's portfolio and that there would be administrative challenges associated with increasing the membership of the FRPC. OMB staff added that increasing participation would need to be done efficiently so that the FRPC and FRPP remain manageable and the effort does not draw from other critical efforts. The Small Agency Council already coordinates with OMB on other policy matters and may offer an efficient way to increase involvement in FRPC and FRPP. To increase the completeness of information on the federal government's real property holdings and improve the coordination among federal entities that lease real property, we recommend that the Deputy Director of OMB—as chair of the FRPC—establish efficient methods for (1) including data from non-FRPC member entities in the FRPP and (2) increasing collaboration between FRPC member and non-member entities, including sharing leading real-property management practices. We provided a draft of this report to OMB for review and comment. OMB concurred with our two recommendations and provided technical comments, which we incorporated where appropriate. We also provided a draft of this report for review and comment to GSA and our eight selected federal entities: FAA, FDIC, NASA, NCUA, NOAA, PBGC, USCG, and USPTO. FAA, FDIC, NASA, NOAA, and USCG did not have any comments on our draft report. GSA provided technical and clarifying comments on the findings of our draft report, which we incorporated where appropriate. GSA officials stated that they did not have enough information to validate our comparison of the 37 independent leases with matched GSA leases. However, GSA officials provided a number of potential reasons that may explain the differences in GSA and independent lease rates that we observed.
These reasons included, among others, relatively higher energy conservation, security, and seismic safety requirements in GSA leases. We included a discussion of these requirements and their impact on lease rates in our report. GSA also observed that a key difference between independent leases and GSA leases is that GSA supplies its own federal lease contract documents in lease transactions. GSA noted that using standardized federal lease documents helps GSA more accurately compare the costs of different offers, helps to satisfy budgetary scorekeeping criteria, and helps ensure compliance with various requirements of federal law and regulation. Some federal entities with independent leasing authority supply their own lease documents. In our report, however, we stated that federal entities with independent leasing authority used a spectrum of lease types, including leases provided by the private sector landlord. GSA also provided an updated recommended target for office space utilization per employee of 150 rentable square feet that is in line with government-wide initiatives, such as the National Strategy and Reduce the Footprint, which are aimed at improving the efficiency of federal space. We incorporated changes throughout our report to reflect this updated target. NCUA did not have any technical comments, but it provided a letter with additional information on NCUA's office space requirements, which is included in appendix III. NCUA noted that it is revising its leasing policies and plans to incorporate the leading government leasing practices provided in our report, as well as leading practices regarding space efficiency per employee, as appropriate. PBGC provided one technical comment, which we incorporated. USPTO provided technical comments in a letter, which is included in appendix IV. USPTO disagreed with our finding that the selected USPTO lease we reviewed was not in alignment with leading practices. We determined that a lease file was not in alignment when we were not provided with evidence that the leading practices were being met. USPTO stated that the selected independent lease project was an anomaly because it was done in conjunction with a much larger GSA space acquisition and, as a result, the required documents were commingled with GSA documents. In its letter, USPTO asserted that leading practices were followed for the selected USPTO lease; however, USPTO was not able to provide us with additional supporting documentation, noting that on June 13, 2016, it had requested that GSA undertake a search for relevant documentation. Without any supporting documentation, we were unable to determine whether the USPTO lease file aligned with leading practices, and we reflected this in table 5 of the report. We are sending copies of this report to the appropriate congressional committees and the Director of OMB. In addition, the report will be available at no charge on GAO's website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or wised@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V.
Our objectives were to determine (1) what is known about which federal entities have independent leasing authority and their use of this authority to lease office and warehouse space, (2) how selected independent leases compare to General Services Administration (GSA) and private sector leases in terms of cost, and (3) to what extent selected federal entities with independent leasing authority have leasing policies and practices that align with leading government leasing practices. To determine what is known about which federal entities have independent leasing authority and use this authority to lease office and warehouse space, we administered a survey to 103 civilian federal entities. Federal entities may have independent leasing authority to lease types of real property other than buildings such as offices and warehouses—for example, land and other structures. There are also other building classifications that these entities may have authority to lease independently, such as hospitals, laboratories, and prisons. For the purposes of our review, however, we focused on domestic offices and warehouses, which are the two primary types of federal real property that GSA generally leases and which are the most practical to compare across federal entities. Our survey universe included all federal entities identified in a prior GAO report on federally created entities—such as executive departments, other executive branch entities, government corporations, and other federally established organizations in the executive branch—that received an average of over $20 million in annual appropriations from fiscal years 2005 through 2008. We administered our survey and collected responses from October 2015 to April 2016 and received a response rate of 100 percent. To inform our survey, we conducted pretests with three of our selected federal entities with different-sized leasing portfolios. The survey was a Microsoft Word form that was sent to designated contacts at each of the federal entities. The survey was used first to identify whether a federal entity reported having independent leasing authority. If the federal entity reported that it had independent leasing authority, it was asked to complete additional survey questions to identify the authorizing legislation or appropriations act that granted it the authority to lease domestic office and warehouse space independently. The survey also asked the federal entity to provide additional descriptive information about the amount of domestic office and warehouse space it independently leased and the associated annual rent cost as of October 1, 2015. The majority of the information collected through the survey has been compiled and reported in appendix II. To assess how independent leases compare with GSA and private sector lease costs, we collected and analyzed lease data from selected federal entities that reported having independent leasing authority, and we contracted with a professional real-estate services firm to compare independent leases to GSA and private sector lease rates. The scope of this review includes information on independent leases, GSA leases, and private sector leases commencing in calendar years 2001 through 2015. We initially collected 43 independent leases from eight selected civilian federal entities with offices and warehouses located in four different U.S. metropolitan regions: Atlanta, Georgia; Los Angeles, California; Miami, Florida; and Washington, D.C.
We selected these four metropolitan regions because of the availability of independently leased office and warehouse spaces. The eight civilian federal entities were the Federal Aviation Administration (FAA) within the Department of Transportation, the Federal Deposit Insurance Corporation (FDIC), the National Aeronautics and Space Administration (NASA), the National Credit Union Administration (NCUA), the National Oceanic and Atmospheric Administration (NOAA) within the Department of Commerce, the Pension Benefit Guaranty Corporation (PBGC), the U.S. Coast Guard (USCG) within the Department of Homeland Security, and the U.S. Patent and Trademark Office (USPTO) within the Department of Commerce. We selected these federal entities using a variety of considerations based on professional judgment, such as the type of entity, number of properties, square footage, and rent paid, among other factors. The selected leases encompass all independent leases present in each region from all eight entities except for selected FAA properties in the National Capital Region. We omitted FAA properties at Dulles International Airport, Reagan National Airport, and Baltimore Washington International Airport because of the relatively large representation of FAA leases already in our selection (15 of the 43 leases were from FAA) and resource limitations. While our selection of 43 leases from eight federal entities cannot be generalized to the universe of all independent leases, these leases serve as case studies and provide examples of how independent leases compare with GSA and private sector leases. In addition to reviewing the leases and additional supporting documentation from our selected federal entities, we also interviewed knowledgeable officials at the respective entities regarding the history and details of each individual lease. Of these 43 independent leases collected from the eight entities, we compared a subset of 37 leases from seven of the federal entities against GSA and private sector leases. We excluded four leases on the basis that they did not require the tenant to pay any rent, and we excluded two additional leases because leases of comparable building finish quality could not be identified in their respective markets. We extracted specific cost-affecting data elements identified by the contractor from each of the 37 leases to the extent they were available within the lease documents, as applicable. Specifically, we extracted data for the following parameters: building type (i.e., office or warehouse); lease address; rentable square footage of the leased space; total square footage of the building; soft and/or firm lease term periods; annual rent cost; the lease's start date; any rental abatement periods; any renewal options; lease holdover rates; the tenant's ability to sublet (assignment); the tenant's share of operating expenses; and the availability of parking spaces and their associated cost. Other parameters of the leases that had implications for cost—such as free shuttle services, the tenant's right of first refusal, or the tenant's ability to reduce its square footage—were also recorded, if present. One team member extracted the above parameters for each of the 37 leases, and the information was independently reviewed by another team member to ensure consistency and accuracy.
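As an illustration only—not the contractor's actual tooling—the extracted parameters above could be organized in a simple record like the following; the field names are our assumptions.

from dataclasses import dataclass, field

@dataclass
class LeaseRecord:
    """One row of the cost-affecting parameters extracted per lease."""
    building_type: str                # "office" or "warehouse"
    address: str
    rentable_sq_ft: float             # rentable square footage of the leased space
    building_total_sq_ft: float
    annual_rent: float
    start_year: int
    firm_term_months: int
    soft_term_months: int = 0
    abatement_months: int = 0         # rental abatement (free rent) periods
    renewal_options: int = 0
    holdover_rate: float = 0.0
    may_sublet: bool = False          # assignment rights
    operating_expense_share: float = 0.0
    parking_spaces: int = 0
    parking_cost: float = 0.0
    other_cost_terms: list = field(default_factory=list)  # e.g., shuttle service, right of first refusal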
The contractor used GSA's lease data from 2001 through 2016. The contractor reviewed the data provided by GSA for inconsistencies, and we discussed the data in interviews with knowledgeable federal entity officials to assess the appropriateness of the data's use. We also reviewed the contractor's report, including the steps it took to assess the reliability of the GSA data used and the types of analysis conducted, and found the data to be sufficiently reliable for the purpose of comparing independent lease rates to GSA lease rates. The contractor subsequently performed the following steps to find comparable GSA lease matches for each of the 37 independent leases. For each independent lease, the contractor: filtered the potential GSA lease matches for only those that shared the same building class and building type (i.e., office or warehouse); filtered leases for those that started in the same year as, or one year before or after, the independent lease's starting year; filtered the remaining GSA leases for only those leases in the same metropolitan region, opting for leases in the same submarket, if available; and filtered the results so that only GSA leases within 30 to 50 percent of the square footage of the independent lease remained. In instances in which there were not sufficient GSA leases within that range of square feet, the contractor appraised each potential comparable GSA lease on an individual basis and leveraged its staff's professional discretion to determine whether these potential GSA spaces could have been considered when the independent lease was being negotiated. This process resulted in each of the 37 selected independent leases being matched to at least one GSA lease, with up to five matches in some cases. The contractor then analyzed the matched leases' rates. In order to compare leases with different terms and sizes, the contractor calculated the net present value of the rent per square foot over the entirety of each lease's term. This net present value analysis used the Office of Management and Budget's (OMB) nominal 10-year discount rate of 2.9 percent for calendar year 2016. In instances where there was only one matched GSA lease, the contractor computed the direct percentage difference between the two leases' rents per square foot. In instances where there was more than one sufficiently comparable GSA lease, the contractor used the weighted average of the GSA leases' net present values of rent per square foot to calculate the percentage difference. For this analysis, the contractor determined that a 10 percent variation above or below the independent lease's rate was the range within which the GSA lease(s) were considered equivalent to the independent lease's rate. We allowed this range to account for variations caused by the specific circumstances and unique features of each lease transaction. Otherwise, we considered independent leases beyond the 10 percent comparable range to be either more costly or less costly than their matched leases. Rent for all the leases included the shell rent, the amortized tenant-improvement costs—both general and custom—and the operating costs paid by the tenant. GSA's fee of 5 to 7 percent that it charges to tenant federal entities for its services related to the leased space—which we identified in a prior report—was not included in this analysis of rental rates.
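The following sketch illustrates the core of this comparison under our reading of the method described above—discounting each year's rent per square foot at the 2.9 percent rate, weighting multiple GSA matches, and applying the 10 percent comparability band. The rents and weights shown are hypothetical.

DISCOUNT_RATE = 0.029   # OMB nominal 10-year rate for calendar year 2016
COMPARABLE_BAND = 0.10  # differences within +/-10 percent treated as equivalent

def npv_rent_psf(annual_rents_psf):
    """Net present value of a lease's annual rents per square foot."""
    return sum(rent / (1 + DISCOUNT_RATE) ** year
               for year, rent in enumerate(annual_rents_psf, start=1))

def classify(independent_npv, gsa_matches):
    """gsa_matches: list of (npv_psf, weight) pairs; a single match
    reduces to the direct percentage difference."""
    total_weight = sum(weight for _, weight in gsa_matches)
    gsa_npv = sum(npv * weight for npv, weight in gsa_matches) / total_weight
    difference = (independent_npv - gsa_npv) / gsa_npv
    if abs(difference) <= COMPARABLE_BAND:
        return "comparable"
    return "more costly" if difference > 0 else "less costly"

# Hypothetical example: a flat $40-per-square-foot independent lease
# against two GSA matches at $45 and $47 over the same 5-year term.
independent = npv_rent_psf([40.0] * 5)
matches = [(npv_rent_psf([45.0] * 5), 2), (npv_rent_psf([47.0] * 5), 1)]
print(classify(independent, matches))  # "less costly" (about 12 percent below)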
For the comparison of independent leases to private sector leases, the contractor collected the following data on private sector lease markets: private-sector lease rate data for the four markets, collected from quarterly and annual market reports that included rental rates from national and regional brokerage companies from 2001 through 2015; and, for the operating expenses of office buildings, line-item breakdowns of central business districts' and regional submarkets' expenses in their respective markets from the Institute of Real Estate Management's database. To generally assess the appropriateness of these private-sector data, the contractor reviewed brokerage reports from several sources for each market and conducted semi-structured interviews with 35 nationwide private-sector real-estate brokers, which addressed topics such as typical term lengths, market practices, and the parties responsible for various expenses and service fees. In addition, we reviewed the sources of the broker market reports and clarified analytical steps with the contractor. As a result of these steps, we found the private sector lease data to be sufficiently reliable for the purpose of comparing independent lease rates to private sector lease rates. To compare the performance of the independent leases with the market data acquired, the contractor: identified private sector lease rates from broker reports by city and submarket to compare with the base year independent lease rates, and developed separate city and submarket rates by building class—that is, Class A and Class B offices—for each year of the analysis; used the Institute of Real Estate Management's database to identify the average operating expenses in each annual Class A or B submarket's rental rate; used broker reports to identify tenant improvement costs; matched the appropriate annual Class A or B submarket rental rate to each of the independent leases; and compared the first year of each independent lease's rent per square foot to the same year's submarket rate and calculated the percentage difference. The contractor determined that a 10 percent variation above or below the independent lease's rate was the range within which the private sector's annual Class A or Class B submarket lease rate was considered equivalent to the independent lease's rate.
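A similar sketch, under our assumptions about how the submarket rate was assembled, shows the first-year private sector comparison: a full submarket rate built from the base rate, average operating expenses, and tenant improvement costs, judged with the same 10 percent band. All dollar figures below are hypothetical.

def compare_to_submarket(independent_first_year_psf, submarket_base_psf,
                         avg_operating_expense_psf, ti_psf, band=0.10):
    """Classify an independent lease's first-year rent per square foot
    against a submarket rate assembled from its components."""
    market_psf = submarket_base_psf + avg_operating_expense_psf + ti_psf
    difference = (independent_first_year_psf - market_psf) / market_psf
    if abs(difference) <= band:
        return "comparable"
    return "more costly" if difference > 0 else "less costly"

# Hypothetical Class B submarket: $28 base + $9 operating expenses +
# $2 tenant improvements = $39 full rate; a $36 independent lease
# falls within the band.
print(compare_to_submarket(36.0, 28.0, 9.0, 2.0))  # "comparable"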
To evaluate the extent that selected entities' leasing policies aligned with leading government leasing practices, we first had to establish federal lease acquisition leading practices that could apply broadly to entities with various leasing authorities. We developed a list of leading practices that could be used by all federal entities to help ensure that real-property leasing acquisitions meet mission needs in a cost-effective and transparent manner. To develop our list of leading practices, we reviewed leasing acquisition practices contained in the General Services Acquisition Regulation, GSA's Leasing Desk Guide, the Federal Management Regulation, OMB Circular A-11, and prior GAO reports. We identified practices common among these sources that dealt with the portion of the acquisition cycle that starts with identifying a need for space and ends with selecting the offer that meets the entity's needs at the best value. The four leading practices we developed were: assess needs, plan ahead, ensure best value, and analyze and document the budget effects of the lease. (See fig. 4.) Once the list was developed, we shared it with GSA officials and incorporated their comments. In defining these leading practices, we identified 10 types of documentation (which we refer to as "sub-practices") that may be included in a lease file to provide reasonable assurance that the leading practice was used in acquiring the respective leased property. (See fig. 5.) To determine the extent that our selected entities' policies aligned with our list of leading practices, we reviewed the real property policies provided to us by each entity and documented where there was evidence that leading practices were incorporated into the policies. The evidence was aggregated in a spreadsheet and verified by another team member. To determine the extent that selected lease files aligned with leading government leasing practices, we reviewed all of the documentation provided by the entities related to 3 warehouse leases and 27 office leases in our four selected metropolitan areas. In reviewing the lease files, we documented when there was evidence of the leading practices. We created an abstract of each of the selected leases and noted where, if at all, evidence of leading practices was contained within the lease files. Each of the abstracts was then reviewed by another team member, and any discrepancies were reconciled. Thus, each lease file was reviewed by two different team members who came to the same decision with regard to the extent that the lease file contained evidence of the leading practices. For the purpose of this analysis, we considered a lease file to be in alignment with leading practices if there was sufficient documented evidence of the sub-practices in the lease file. We determined the lease file to be in partial alignment if there was some evidence in the file that the sub-practice was implemented, but the evidence was incomplete, insufficient, or not entirely aligned with the leading practice. We determined the lease file not to be in alignment if there was no evidence of the sub-practice being implemented in the provided documentation. To evaluate the independent leases' space utilization rates, we collected two pieces of information for each lease. First, we asked officials from our selected federal entities for the number of employees assigned to the selected offices during the first quarter of fiscal year 2016. Second, we determined the total rentable square feet per lease. We obtained this information from each lease document that the federal entities provided to us; if the square footage was not in a lease, we asked knowledgeable federal entity officials. We divided the number of rentable square feet per lease by the number of employees to determine the space utilization rate. We then compared each lease's space utilization rate with GSA's recommended space utilization target of 150 rentable square feet per employee for office space. We did not evaluate the per-employee space utilization of warehouses in our analysis because of the fundamentally different purpose of warehouses as compared to office space—warehouses are primarily intended for storage. We also did not evaluate the utilization of spaces where there were special arrangements in which no rent was paid. In addition to reviewing the leases and obtaining employee counts from our selected federal entities, we also interviewed knowledgeable officials at the respective selected entities regarding the purposes of each of the buildings and visited each office or warehouse space as part of our review.
As a result of these visits, we excluded two additional buildings from this analysis because a large percentage of each building's space was not designed for typical office use—one PBGC office housed a large continuity-of-operations spare office space, while a NASA office housed a large computer server facility. In total, we reviewed 30 selected leases for this analysis. Lastly, we interviewed relevant officials from GSA, OMB, and the Small Agency Council. We conducted this performance audit from June 2015 to July 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In our survey, 52 federal entities reported having independent authority to lease domestic office and warehouse spaces. Of these, 27 entities fell within the specified member agencies of the Federal Real Property Council (FRPC), which are agencies covered under the Chief Financial Officers Act of 1990. (See table 6.) The remaining 25 were federal entities that were not FRPC member entities and also reported having independent authority to lease domestic office and warehouse spaces. (See table 7.) In addition to the individual named above, Keith Cunningham, Assistant Director; Timothy Carr; Catherine Kim; Terence Lam; Hannah Laufe; Steve Rabinowitz; Malika Rice; Kelly Rubin; Sean Standley; Michelle Weathers; and Crystal Wesco made key contributions to this report.
GSA leases real property on behalf of many federal tenants, but some federal entities have statutory independent leasing authority. GAO was asked to review federal entities with independent leasing authority. This report examines (1) what is known about which federal entities have independent leasing authority and their use of this authority; (2) how selected independent leases compare to GSA and private sector leases in terms of cost; and (3) to what extent selected entities have leasing policies and practices that align with leading government practices. GAO conducted a survey of 103 federal entities identified in previous GAO work; selected eight entities for their diversity in size and mission and visited 37 leased office and warehouse locations; analyzed leases and lease files for the 37 locations; reviewed applicable laws, policies, and guidance; and interviewed GSA, OMB, and officials from the selected entities. There is no comprehensive list of federal entities with independent leasing authority. The Federal Real Property Council (FRPC), chaired by the Office of Management and Budget (OMB), was established in 2004 through executive order to coordinate and share leading practices in real property management among federal agencies covered by the Chief Financial Officers Act of 1990. The General Services Administration (GSA) was directed to create a database intended to be a comprehensive inventory of federal facilities, which resulted in the Federal Real Property Profile (FRPP). However, federal entities that are not members of the FRPC are not required to submit data to the FRPP, and few do so. Of the 103 federal entities that GAO surveyed, 52 reported having independent authority to lease office and warehouse space. As of October 1, 2015, these 52 entities leased 944 domestic offices and 164 warehouses. Twenty-five of those entities are not members of the FRPC and therefore are not required to submit their real property data to the FRPP, despite leasing 243 offices and warehouses. As such, the FRPP's incomplete data set reduces its effectiveness as an oversight and accountability mechanism for entities with independent leasing authority. GAO's review of the costs of 37 selected independent leases found that the rates of most were less costly than or comparable to matched GSA leases. When independent leases had lower costs, this may be attributable in part to (1) GSA's use of standardized lease documents that include clauses with higher energy conservation, security, and seismic requirements and (2) independent leases having fewer space modifications and more periods of free rent, and private sector real-estate professionals negotiating with potential owners on the entities' behalf. Most of the independent leases were also less costly than or comparable to matched private sector leases, particularly in the National Capital Region. GAO reviewed the extent to which eight selected federal entities had policies that incorporated leading government leasing practices and found that six had policies that generally conformed to these practices. However, none of the lease files contained evidence that the practices were consistently followed. For example, 88 percent of the entities' lease files lacked evidence of ensuring best value by documenting advertisements to seek bids to fill their space needs, and 77 percent lacked evidence that entities effectively planned ahead by documenting the factors they used to evaluate lease offers.
In addition, the leases GAO analyzed for space use averaged more than double GSA's recommended target of 150 rentable square feet per employee for office space. Federal entities may be better able to conform to leading practices and meet utilization targets for the space they lease if they can benefit from the coordination and leading real-property management practices shared at the FRPC. GAO recommends that OMB establish efficient methods to (1) include data from non-FRPC members in the FRPP and (2) increase collaboration between FRPC member and non-member entities. OMB concurred with both recommendations.
To address the extent to which CMS implemented control procedures over contract actions, we focused on contracts that were generally subject to the FAR (i.e., FAR-based), which represented about $2.5 billion, or about 70 percent, of total obligations awarded in fiscal year 2008. The FAR is the governmentwide regulation containing the rules, standards, and requirements for the award, administration, and termination of government contracts. Based on the standards for internal control, FAR requirements, and agency policies, we identified and evaluated 11 key internal control procedures over contract actions, ranging from ensuring contractors had adequate accounting systems prior to the use of a cost reimbursement contract to certifying invoices for payment. Contract actions include new contract awards and modifications to existing contracts. We conducted our tests on a statistically random sample of 102 FAR-based contract actions CMS made in fiscal year 2008 and projected the results of our statistical sample conservatively by reporting the lower bound of our two-sided, 95 percent confidence interval. We tested a variety of contract actions including a range of dollars obligated, different contract types (fixed price, cost reimbursement, etc.), and the types of goods and services procured. The actions in the sample ranged from a $1,000 firm-fixed price contract for newspapers to a $17.5 million modification of an information technology contract valued at over $500 million. For each contract action in the sample, we determined if the 11 key internal control procedures were implemented by reviewing the contract file supporting the action and, where applicable, by obtaining additional information from the contracting officer or specialist or senior acquisition management. We also tested the reliability of the data contained in CMS’s two acquisition databases. To address the extent to which CMS established a strong control environment for contract management, we obtained and reviewed documentation regarding contract closeout, acquisition planning, and other management information and interviewed officials in the Office of Acquisition and Grants Management (OAGM) about its contract management processes. We also evaluated the extent to which CMS had addressed recommendations we made in our 2007 report. We used the internal control standards as a basis for our evaluation of CMS’s contract management control environment. Appendix I of our October 2009 report provides additional details of our scope and methodology. This testimony is based on our October 2009 performance audit, which was conducted from July 2008 to September 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Except for certain Medicare claims processing contracts, CMS contracts are generally required to be awarded and administered in accordance with general government procurement laws and regulations such as the FAR; the Health and Human Services Acquisition Regulations (HHSAR); the Cost Accounting Standards (CAS); and the terms of the contract. 
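Because several of the findings that follow are lower-bound projections from the 102-action sample described above, a small illustration of the underlying arithmetic may be helpful. The sketch below is a minimal, hypothetical example assuming the widely used Wilson score interval; the report does not say which interval method GAO actually applied, and the deficiency count shown is invented for illustration. Only the sample size of 102 comes from the text.

# Minimal sketch (not GAO's actual computation): lower bound of a two-sided
# 95 percent confidence interval for a sampled proportion, using the Wilson
# score method. The count of deficient actions (93) is hypothetical; only the
# sample size (102) comes from the report.
import math

def wilson_lower_bound(successes: int, n: int, z: float = 1.96) -> float:
    """Lower bound of the two-sided 95% Wilson score interval for a proportion."""
    p_hat = successes / n
    denom = 1 + z ** 2 / n
    center = p_hat + z ** 2 / (2 * n)
    margin = z * math.sqrt(p_hat * (1 - p_hat) / n + z ** 2 / (4 * n ** 2))
    return (center - margin) / denom

# If, say, 93 of 102 sampled contract actions had at least one control lapse,
# the conservative (lower-bound) projection to the population would be:
print(f"{wilson_lower_bound(93, 102):.1%}")  # 84.1% with this method

Reporting the lower bound rather than the point estimate (93/102, or about 91 percent in this hypothetical) is what makes the projection conservative: the true population rate is at least the stated figure with high confidence.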
Since 1998, CMS’s obligations to fiscal intermediaries, carriers, and Medicare Administrative Contractors (contractors that primarily process Medicare claims) have decreased approximately 16 percent. In contrast, obligations for other-than-claims processing contract activities, such as the 1-800 help line, information technology and financial management initiatives, and program management and consulting services, have increased 466 percent. These trends may be explained in part by recent changes to the Medicare program, including the movement of functions, such as the help line, data centers, and certain financial management activities, from the fiscal intermediaries and carriers to specialized contractors. MMA required CMS to transition its Medicare claims processing contracts, which generally did not follow the FAR, to the FAR environment through the award of contracts to Medicare Administrative Contractors. CMS projected that the transition, referred to as Medicare contracting reform, would produce administrative cost savings due to the effects of competition and contract consolidation as well as produce Medicare trust fund savings due to a reduction in the amount of improper benefit payments. Additionally, the transition would subject millions of dollars of CMS acquisitions to the rules, standards, and requirements for the award, administration, and termination of government contracts in the FAR. Obligations to the new Medicare Administrative Contractors were first made in fiscal year 2007. CMS is required to complete Medicare contracting reform by 2011. As of September 1, 2009, 19 contracts had been awarded to Medicare Administrative Contractors, totaling about $1 billion in obligations. The Standards for Internal Control in the Federal Government provide the overall framework for establishing and maintaining internal control and for identifying and addressing areas at greatest risk of fraud, waste, abuse, and mismanagement. These standards provide that—to be effective—an entity’s management should establish both a supportive overall control environment and specific control activities directed at carrying out its objectives. As such, an entity’s management should establish and maintain an environment that sets a positive and supportive attitude towards control and conscientious management. A positive control environment provides discipline and structure as well as the climate supportive of quality internal control, and includes an assessment of the risks the agency faces from both external and internal sources. Control activities are the policies, procedures, techniques, and mechanisms that enforce management’s directives and help ensure that actions are taken to address risks. The standards further provide that information should be recorded and communicated to management and oversight officials in a form and within a time frame that enables them to carry out their responsibilities. Finally, an entity should have internal control monitoring activities in place to assess the quality of performance over time and ensure that the findings of audits and other reviews are promptly resolved. Control activities include both preventive and detective controls. Preventive controls—such as invoice review prior to payment—are controls designed to prevent errors, improper payments, or waste, while detective controls—such as incurred cost audits—are designed to identify errors or improper payments after the payment is made. 
A sound system of internal control contains a balance of preventive and detective controls that is appropriate for the agency's operations. While detective controls are beneficial in that they identify funds that may have been inappropriately paid and should be returned to the government, preventive controls such as accounting system reviews and invoice reviews help reduce the risk of improper payments or waste before they occur. A key concept in the standards is that control activities selected for implementation be cost beneficial; generally, it is more effective and efficient to prevent improper payments than to detect and recover them after they are made. A control activity can be preventive, detective, or both, depending on when the control occurs in the contract life cycle. Additional detailed background information is available in our related report, GAO-10-60.

Our October 2009 report identified pervasive deficiencies in internal control over contracting and payments to contractors. Specifically, as a result of our work, we estimated that at least 84.3 percent of FAR-based contract actions made by CMS in fiscal year 2008 contained at least one instance in which 1 of 11 key controls was not adequately implemented. Not only were the internal control deficiencies widespread, but many contract actions also had more than one deficiency. We estimated that at least 37.2 percent of FAR-based contract actions made in fiscal year 2008 had three or more instances in which a key control was not adequately implemented. The internal control deficiencies occurred throughout the contracting process and increased the risk of improper payments or waste. These deficiencies were due in part to a lack of agency-specific policies and procedures to ensure that FAR requirements and other control objectives were met. CMS also did not take appropriate steps to ensure that existing policies were properly implemented or maintain adequate documentation in its contract files. Further, CMS's Contract Review Board process had not been properly or effectively implemented to help ensure proper contract award actions. These internal control deficiencies are a manifestation of CMS's weak overall control environment, which is discussed later. Additional detailed information on our testing of key internal controls is available in our October 2009 report.

The high percentage of deficiencies indicates a serious failure of control procedures over FAR-based acquisitions, thereby creating a heightened risk of improper payments or waste. Highlights of the control deficiencies we noted include the following.

We estimated that at least 46.0 percent of fiscal year 2008 CMS contract actions did not meet the FAR requirements applicable to the specific contract type awarded. For example, we found that CMS used cost reimbursement contracts without first ensuring that the contractor had an adequate accounting system. According to the FAR, a cost reimbursement contract may be used only when the contractor's accounting system is adequate for determining costs applicable to the contract. To illustrate, of the contract awards in our sample, we found nine cases in which cost reimbursement contracts were used without first ensuring that the contractor had an adequate accounting system. In addition to these nine cases, during our review of contract modifications we observed another six cases in which cost reimbursement contracts were used even though CMS was aware that the contractor's accounting system was inadequate at the time of award.
In one instance, the contracting officer was aware that a contractor had an inadequate accounting system resulting from numerous instances of noncompliance with applicable Cost Accounting Standards. Using a cost reimbursement contract when a contractor does not have an adequate accounting system hinders the government's ability to fulfill its oversight duties throughout the contract life cycle and increases both the risk of improper payments and the risk that billed costs cannot be substantiated during an audit.

We estimated that for at least 40.4 percent of fiscal year 2008 contract actions, CMS did not have sufficient support for provisional indirect cost rates, nor did it identify instances when a contractor billed rates higher than the rates approved for use. Provisional indirect cost rates provide agencies with a mechanism to determine whether the indirect costs billed on invoices are reasonable for the services provided until final indirect cost rates can be established, generally at the end of the contractor's fiscal year. When an agency does not maintain adequate support for provisional indirect cost rates, it increases its risk of making improper payments.

We estimated that for at least 52.6 percent of fiscal year 2008 contract actions, CMS did not have support for final indirect cost rates or support for the prompt request of an audit of indirect costs. The FAR states that final indirect cost rates, which are based on a contractor's actual indirect costs incurred during a given fiscal year, shall be used in reimbursing indirect costs under cost reimbursement contracts. The amounts a contractor billed using provisional indirect cost rates are adjusted annually to reflect final indirect cost rates, thereby providing a mechanism for the government to ensure in a timely manner that indirect costs are allowable and allocable to the contract. CMS officials told us that they generally adjust for final indirect cost rates during contract closeout at the end of contract performance rather than annually, mainly because of the cost and effort the adjustment takes. However, CMS did not promptly close out its contracts and had not made progress in reducing the backlog of contracts eligible for closeout. Specifically, in 2007, we reported that CMS's backlog was 1,300 contracts, of which 407 were overdue for closeout as of September 30, 2007. This backlog continued to increase, and CMS officials stated that as of July 29, 2009, the total backlog of contracts eligible for closeout was 1,611, with 594 overdue based on FAR timing standards. Not annually adjusting for final indirect cost rates increases the risk that CMS is paying costs that are not allowable or allocable to the contract. Furthermore, putting off this control activity until the end of contract performance increases the risk of overpaying for indirect costs during contract performance and may make identification or recovery of any unallowable costs during contract closeout more difficult due to the passage of time.

We estimated that for at least 54.9 percent of fiscal year 2008 contract actions, CMS did not promptly perform or request an audit of direct costs. Similar to audits of indirect costs, audits of direct costs allow the government to verify that the costs billed by the contractor were allowable, reasonable, and allocable to the contract. Not annually auditing direct costs increases the risk that CMS is paying costs that are not allowable or allocable to the contract.
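To make the provisional-to-final indirect cost rate adjustment discussed above concrete, the following sketch shows the basic arithmetic of an annual true-up. The FAR mechanism is as the report describes, but every figure and variable name below is invented for illustration.

# Hypothetical sketch of the annual indirect cost rate true-up implied by the
# FAR mechanism described above. All figures are invented for illustration.
direct_cost_base = 2_000_000   # hypothetical direct costs billed during the year
provisional_rate = 0.35        # hypothetical provisional indirect cost rate
final_rate = 0.31              # hypothetical final rate based on actual costs

billed_indirect = direct_cost_base * provisional_rate    # $700,000 billed
allowed_indirect = direct_cost_base * final_rate         # $620,000 allowable
overbilled = billed_indirect - allowed_indirect          # $80,000 to recover

print(f"Indirect costs to recover: ${overbilled:,.0f}")

Deferring this adjustment to contract closeout, as CMS did, lets any such overbilled amounts accumulate and age across the life of the contract, which is the recovery risk the report describes.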
We estimated that for at least 59.0 percent of fiscal year 2008 contract actions, the project officer did not always certify the invoices. CMS's Acquisition Policy Notice 16-01 requires the project officer to review each contractor invoice and recommend payment approval or disapproval to the contracting officer. This review is to determine, among other things, whether the expenditure rate is commensurate with technical progress and whether all direct cost elements, including subcontracts, travel, and equipment, are appropriate. We noted in our 2007 report that CMS used negative certification—a process whereby it paid contractor invoices without knowing whether they were reviewed and approved—in order to ensure invoices were paid in a timely fashion. In October 2009 we reported that negative certification continued to be CMS's policy for processing contractor invoices for payment. This approach, however, significantly reduces the incentive for contracting officers, specialists, and project officers to review an invoice prior to payment. For example, in one case, although a contractor submitted over 100 invoices for fiscal year 2008, only 8 were certified by the project officer. The total value of the contract through January 2009 was about $64 million. In addition, based on a cursory review of the fiscal year 2008 invoices submitted for payment, we found instances in which the contracting officer or specialist did not identify items that were inconsistent with the terms of the contract or acquisition regulations. For example, we found two instances in which the contractor billed, and CMS paid, for items generally disallowed by the HHSAR. Reviewing invoices prior to payment is a preventive control that may identify unallowable billings, especially on cost reimbursement and time-and-materials invoices, before the invoices are paid. CMS increases its risk of improper payments when it does not properly review and approve invoices prior to payment.

The control deficiencies we identified in the statistical sample discussed in our October 2009 report stemmed from a weak overall control environment. CMS's control environment was characterized by the lack of (1) strategic planning to identify necessary staffing and funding; (2) reliable data for effectively carrying out contract management responsibilities; and (3) follow-up to track, investigate, and resolve contract audit and evaluation findings for purposes of cost recovery and future award decisions. A positive control environment sets the tone for the overall quality of an entity's internal control and provides the foundation for an entity to effectively manage contracts and payments to contractors. Without a strong control environment, the control deficiencies we identified will likely persist. Following is a summary of the weaknesses we found in CMS's overall control environment.

Limited analysis of contract management workforce and related funding needs. OAGM management had not analyzed its contract management workforce and related funding needs through a comprehensive, strategic acquisition workforce plan. Such a plan is critical to help manage the increasing acquisition workload and meet contracting oversight needs. We reported in November 2007 that staff resources allocated to contract oversight had not kept pace with the increase in CMS contract awards. In our 2009 report, we found that a similar trend continued into 2008.
While the obligated amount of contract awards had increased 71 percent since 1998, OAGM staffing resources—its number of full-time equivalents (FTEs)—had increased 26 percent. This trend presents a major challenge to contract award and administration personnel, who must deal with a significantly increased workload without additional support and resources. In addition, according to its staff and management, OAGM faced challenges in meeting the various audit requirements necessary to ensure adequate oversight of contracts that pose more risk to the government, specifically cost reimbursement contracts, as well as in performing the activities required of a cognizant federal agency (CFA). Although officials told us they could use more audit funding, we found that OAGM management had yet to determine what an appropriate funding level should be. Without knowing which contractors needed additional CFA oversight, CMS did not have reliable information on the number of audits and reviews that must be performed annually or the depth and complexity of those audits. Without this key information, CMS could not estimate an adequate level of needed audit funding. The risks of not performing CFA duties are increased by the fact that other federal agencies that use the same contractors rely on the oversight and monitoring work of the CFA. A shortage of financial and human resources creates an environment that introduces vulnerabilities into the contracting process, hinders management's ability to sustain an effective overall control environment, and ultimately increases risk in the contracting process.

Lack of reliable contract management data. Although CMS had generally reliable information on the basic attributes of each contract action, such as vendor name and obligation amount, CMS lacked reliable management information on other key aspects of its FAR-based contracting operations. For example, in our October 2009 report we identified acquisition data errors related to the number of certain contract types awarded, the extent of competition achieved, and total contract value. Standards for internal control provide that, for an agency to manage its operations, it must have relevant, reliable, and timely information on the extent and nature of those operations, including both operational and financial data, and that such information should be recorded and communicated to management and others within the agency who need it, in a form and within a time frame that enables them to carry out their internal control and operational responsibilities. The acquisition data errors were due in part to a lack of sufficient quality assurance activities over the data entered into the acquisition databases. Without accurate data, CMS program managers did not have adequate information to identify, monitor, and correct or mitigate areas that posed a high risk of improper payments or waste.

Lack of follow-up to resolve contract audit and evaluation findings. CMS did not track, investigate, and resolve contract audit and evaluation findings for purposes of cost recovery and future award decisions. Tracking audit and evaluation findings strengthens the control environment in part because it can help assure management that the agency's objectives are being met through the efficient and effective use of the agency's resources. It can also help management determine whether the entity is complying with applicable acquisition laws and regulations.
Contract audits and evaluations can add significant value to an organization's oversight and accountability structure, but only if management ensures that the results of these audits and evaluations are promptly investigated and resolved. For example, in an audit report dated September 30, 2008, the Defense Contract Audit Agency questioned approximately $2.1 million of costs that CMS paid to a contractor in fiscal year 2006. As discussed in our October 2009 report, OAGM management confirmed that no action had been taken at that time to investigate and recover the challenged costs.

As we reported in October 2009, CMS management had not taken substantial actions to address our 2007 recommendations to improve internal control in the contracting process. Only two of our nine 2007 recommendations had been fully addressed. Table 1 summarizes our assessment of the status of CMS's actions to address our recommendations. In addition to reaffirming the seven substantially unresolved 2007 recommendations, our October 2009 report included 10 recommendations to further improve oversight and strengthen CMS's control environment. Specifically, we made recommendations for additional procedures or plans to address the following 10 areas:

- document compliance with FAR requirements for different contract types;
- document provisional indirect cost rates in the contract file;
- specify what constitutes timely performance of (or request for) audits of contractors' billed costs;
- specify circumstances for the use and content of negotiation memorandums, including any required secondary reviews;
- specify Contract Review Board (CRB) documentation, including resolution of issues identified during CRB reviews;
- conduct periodic reviews of contract files to ensure invoices were properly reviewed by both the project officer and the contracting officer or specialist;
- develop a comprehensive strategic acquisition workforce plan, with resource needs to fulfill FAR requirements for comprehensive oversight, including CFA duties;
- revise the verification and validation plan to require that all relevant acquisition data errors be corrected and their resolution documented;
- develop procedures for tracking contract audit requests and the resolution of audit findings; and
- develop procedures that clearly assign roles and responsibilities for the timely fulfillment of CFA duties.

In commenting on a draft of our October 2009 report, CMS and the Department of Health and Human Services (HHS) agreed with each of our 10 new recommendations and described steps planned to address them. CMS also stated that the recommendations would serve as a catalyst for improvements to the internal controls for its contracting function. However, CMS expressed concerns about our assessment of key internal controls and disagreed with our conclusions on the status of its actions to address our November 2007 recommendations. CMS stated its belief that "virtually all" of the errors we identified in our statistical sample related to "perceived documentation deficiencies." CMS also expressed concern that a reasonable amount of time had not yet elapsed since the issuance of our November 2007 report to allow for corrective actions to take place. However, as discussed in greater detail in the response to agency comments in our October 2009 report, nearly 2 years had elapsed between our November 2007 and October 2009 reports, and CMS had made little progress in addressing the recommendations from our November 2007 report.
Further, a significant number of our October 2009 report findings, including weaknesses in the control environment, were based on observations and interviews with OAGM officials and reviews of related documentation such as policies and strategic plans. Finally, the deficiencies we identified negatively affect the key controls intended to help ensure compliance with agency acquisition regulations and the FAR.

In conclusion, Madam Chairman, while we have not updated the status of any CMS actions to address our October 2009 findings and recommendations, the extent to which control weaknesses in CMS's contracting activities continue raises questions as to whether CMS management has established an appropriate "tone at the top" to effectively manage these key activities. Until CMS management addresses our previous recommendations in this area and takes action to address the additional deficiencies identified in our October 2009 report, its contracting activities will continue to pose significant risk of improper payments, waste, and mismanagement. Further, the deficiencies we identified are likely to be exacerbated by the rise in obligations for non-claims-processing contract awards as well as by CMS's extensive reliance on contractors to help achieve its mission objectives. It is imperative that CMS address its serious contract-level control deficiencies and act on our recommendations to improve its overall control environment, or CMS will continue to place billions of taxpayer dollars at risk of fraudulent or otherwise improper contract payments. We commend the Subcommittee for its continuing oversight and leadership in this important area and believe that hearings such as the one being held today will be critical to ensuring that CMS's continuing contract management weaknesses are resolved without further delay and that overall risks to the government are substantially reduced.

Madam Chairman and Members of the Subcommittee, this concludes my prepared statement. I would be happy to answer any questions that you may have at this time.

For further information regarding this testimony, please contact Kay L. Daly at (202) 512-9095 or dalykl@gao.gov. In addition, contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals who made key contributions to this testimony are Marcia Carlsen and Phil McIntyre (Assistant Directors), Sharon Byrd, Richard Cambosos, Francine DelVecchio, Abe Dymond, John Lopez, Ron Schwenn, Omar Torres, Ruth Walk, and Danietta Williams.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
In November 2007, GAO reported significant deficiencies in internal control over certain contracts the Centers for Medicare and Medicaid Services (CMS) awarded under the Federal Acquisition Regulation (FAR). This Subcommittee and others in Congress asked GAO to perform an in-depth review of CMS's contract management practices. This testimony is based on GAO's October 2009 report on these issues and summarizes GAO's findings on the extent to which CMS (1) implemented effective control procedures over contract actions, (2) established a strong contract management control environment, and (3) implemented GAO's 2007 recommendations. GAO used a statistical random sample of 2008 CMS contract actions to assess CMS internal control procedures. The results were projected to the population of 2008 CMS contract actions. GAO reviewed contract file documentation and interviewed senior acquisition management officials. GAO reported in October 2009 that pervasive deficiencies in CMS contract management internal control increased the risk of improper payments or waste. Specifically, based on a statistical random sample of 2008 CMS contract actions, GAO estimated that at least 84.3 percent of fiscal year 2008 contract actions contained at least one instance where a key control was not adequately implemented. For example, CMS used cost reimbursement contracts without first ensuring that the contractor had an adequate accounting system, as required by the FAR. These deficiencies were due in part to a lack of agency-specific policies and procedures to help ensure proper contracting expenditures. These control deficiencies stemmed from a weak overall control environment characterized primarily by inadequate strategic planning for staffing and funding resources. CMS also did not accurately capture data on the nature and extent of its contracting, hindering CMS's ability to manage its acquisition function by identifying areas of risk. Finally, CMS did not track, investigate, and resolve contract audit and evaluation findings for purposes of cost recovery and future award decisions. A positive control environment sets the tone for the overall quality of internal control and provides the foundation for effective contract management. Without a strong control environment, the specific control deficiencies GAO identified will likely persist. As of the date of GAO's October 2009 report, CMS had not substantially addressed seven of the nine recommendations made by GAO in 2007 to improve internal control over contracting and payments to contractors. To the extent that CMS has continuing weaknesses in contracting activities, it will continue to put billions of taxpayer dollars at risk of improper payments or waste.
IRS has two major programs to collect tax debts: telephone collection and field collection. If taxpayers become delinquent (that is, do not pay their taxes after being notified of amounts owed), IRS staff assigned to the telephone collection program may attempt collection over the phone or in writing. According to IRS officials, IRS collection staff who make phone calls have not been initiating many calls to ask taxpayers to pay their tax debts but rather have been responding to phone calls from taxpayers about mailed tax due notices. If more in-depth collection action or analysis of the taxpayer's ability to pay is required, telephone collection staff may refer the case to field collection, where staff may visit delinquent taxpayers at their homes or businesses as well as contact them by telephone and mail. Under certain circumstances, the telephone or field staff are authorized to initiate enforced collection action, such as recording liens on taxpayer property and sending notices to levy taxpayer wages, bank accounts, and other financial assets held by third parties. Field staff also can be authorized to seize other assets owned by the taxpayer to satisfy the tax debt.

As we have previously reported, in recent years IRS has deferred collection action on billions of dollars of delinquent tax debt, and IRS collection program performance indicators have declined. By the end of fiscal year 2003, IRS's inventory of tax debt with some collection potential was $120 billion (up from $112 billion in the previous year). As we reported in May 2002, from fiscal years 1996 through 2001, IRS had almost universal declines in collection performance, including declines in coverage of workload, cases closed, direct staff time used, productivity, and dollars of unpaid taxes collected. Although IRS's collection workload declined, the number of collection cases closed declined more rapidly, increasing the gap between the number of cases assigned for collection action and the number of cases closed each year (see fig. 2 in app. I). As a result, in March 1999, IRS started deferring collection action on billions of dollars in delinquencies. By the end of fiscal year 2002, IRS had deferred collection action on about $15 billion and, as of May 2003, was deferring action on about one of every three collection cases. Furthermore, IRS's collection staffing declined overall from 1996 to 2003 (see fig. 3 in app. I) despite IRS's efforts to increase collection staffing in its budget requests since 2001.

As we previously reported, IRS officials have said that collection staffing declines and delays in hiring have been caused by increased workload in other essential operations (such as processing returns, issuing refunds, and answering taxpayer mail), other priorities (such as taxpayer service), and unbudgeted cost increases (such as rent and pay increases). According to statements by the previous and current IRS commissioners, IRS's growing workload has outpaced its resources. The former IRS Commissioner's report to the IRS Oversight Board in September 2002 made a case for additional staff to check tax compliance and collect taxes owed. The Commissioner recognized that IRS needed to improve the productive use of its current resources but also cited a need for an annual 2 percent staffing increase over 5 years to help reverse the trends. According to the Commissioner, IRS would require 5,450 new full-time collection staff.
IRS officials said that the PCA program proposal was undertaken because it is unlikely that IRS will receive funding adequate to handle the growing collection workload. Since current law requires IRS to collect tax debts, legislation has been proposed to authorize IRS to use PCAs to collect simpler tax debts under defined activities, including locating taxpayers, requesting full payment of the tax debt or offering taxpayers an installment agreement if full payment cannot be made, and obtaining financial information from taxpayers. Given the limited authorities proposed for PCAs, IRS would refer those cases that are simplest to collect and have no need for IRS enforcement action, including cases in which (1) taxpayers filed a tax return showing taxes due but have not paid them and (2) taxpayers made three or more voluntary payments to satisfy an additional tax assessed by IRS but have stopped making payments.

In 1996, Congress directed IRS to test the use of PCAs, earmarking $13 million for that purpose. IRS canceled the pilot project in 1997, in part because it resulted in significantly lower amounts of collections and contacted significantly fewer taxpayers than expected (about 14,000 of 153,000 taxpayers). IRS reported that, through January 1997, the program accounted for about $3.1 million in collections and about $4.1 million in expenses ($3.1 million in design, start-up, and administrative expenses, and about $1 million in PCA payments). IRS also reported lost opportunity costs of about $17 million because IRS collection staff shifted from collecting taxes to helping with the pilot.

The current proposal to use PCAs differs from the 1996 pilot test in several significant ways. First, PCAs under the current proposal will actually try to resolve collection cases within certain guidelines; in the 1996 test, PCAs only contacted taxpayers to remind them of their outstanding tax debt and suggest payment options. Second, PCAs under the current proposal will be paid a percentage of the dollars they help collect from a revolving fund of all PCA collections; in the 1996 test, PCAs were paid a fixed fee for such actions as successfully locating and contacting taxpayers, even if payments were not received. Third, IRS will electronically transmit cases and data about the taxpayer and taxes owed to PCAs; in 1996, IRS's computers were not set up to transmit the cases and data to PCAs electronically. For the current proposal, IRS intends to develop the capability to make secure transmissions to PCAs and protect confidentiality.

To identify the critical success factors for contracting with PCAs for tax debt collection, we used multiple sources. We reviewed three of our reports on leading practices in contracting and interviewed our staff who review government contracting. We also interviewed parties with experience in contracting for government debt collection, including both tax and non-tax debt, to identify any factors common to both debt types.
Specifically, we interviewed officials from 11 state revenue departments that, according to officials from the Federation of Tax Administrators (FTA), represented a mix of experience—in aspects such as amount of resources and PCA roles—in contracting with PCAs for tax debt collection and provided examples of program practices in such areas as case selection and use of performance data; the Department of the Treasury's Financial Management Service and the Department of Education, two federal agencies with large-scale, non-tax debt collection contracting; and the three PCA firms that IRS selected as subject matter experts to assist in drafting the provisions of a contract for PCA collection services. To help corroborate the factors that others identified, we interviewed officials from the IRS office that is developing the proposed PCA program, the IRS Office of the Taxpayer Advocate, and the National Treasury Employees Union, which represents IRS employees.

To summarize and categorize the critical success factors identified, we grouped together the similar factors that were most frequently cited by officials with experience in government debt collection contracting. We first grouped factors associated with the start of a program and with a maturing program into two broad time-oriented factors, including topics we identified as implicit in the interviews and documents cited above. Between these two time-oriented factors, we categorized three other factors according to the broad topics that were most frequently cited. To validate our summarization and categorization, we asked for comments on our draft list of critical success factors from those whom we had consulted to identify the factors, as well as from officials at four additional PCA firms that, according to interviewed officials from two state revenue departments and the two federal agencies, had experience in government debt collection. In commenting on the draft list of factors, some officials stressed certain factors more than others or elaborated on selected factors or subfactors, but they generally did not suggest factors beyond those encompassed in our draft list. We made changes based on their comments where appropriate.

To determine whether IRS has addressed the critical success factors in developing the PCA contracting program and, if not, what is left to be done, we interviewed IRS program officials. We analyzed program documents, including the draft PCA contract as outlined in IRS's Request for Quotes (RFQ) and the Office of Management and Budget (OMB) Form E-300 budgetary document that describes goals and plans for the program. We did not attempt to analyze how well or to what extent IRS addressed the factors, or whether IRS made the right decisions on issues such as the program goals or measures.

To determine whether, if IRS receives authority to use PCAs, it will do a study that will enable policymakers to judge whether contracting with PCAs is the best use of federal funds to achieve IRS's collection objectives, we interviewed IRS program officials. We reviewed any studies IRS had done to compare the use of PCAs with other strategies and assessed IRS's intended approach for any future studies. We also applied our knowledge of how to study the cost-effectiveness of options to meet a desired result or benefit. We did our work from June 2003 through March 2004 in accordance with generally accepted government auditing standards.
Our work identified and validated five broad factors that are critical to the success of a proposed program for contracting with PCAs to collect tax debt. A general description of each critical success factor follows:

- Results orientation involves establishing expectations, measures, and desired results for the program.
- Agency resources involve obtaining and deploying various resources.
- Workload involves ensuring that the appropriate cases and case information are provided to PCAs.
- Taxpayer issues involve ensuring that taxpayer privacy and other rights are protected.
- Evaluation involves monitoring performance and collecting data to assess the performance of PCAs and the overall program.

As figure 1 illustrates, these are considered "success" factors because each one, if adequately addressed, can help ensure that the PCA program achieves desired results, such as in collecting tax debts. Although addressing all factors during program design and implementation does not guarantee success, doing so could improve the chances. Table 1 further describes the critical success factors by showing the related subfactors that we identified and validated.

IRS has taken steps to address the critical success factors and has developed a project plan to help finish addressing the factors if Congress authorizes the use of PCAs. Officials recognize that much work remains to sufficiently address each factor, which they estimate will take 18 to 24 months after any legislation passes. Table 2 shows examples of the key actions taken to address the critical success factors and the major tasks remaining. The discussion after table 2 elaborates on some of these major tasks. IRS officials are aware of the major tasks that must be completed to address the critical success factors and implement the PCA program, and in discussing their intent to address them, they elaborated on some of these tasks.

Under "results orientation," IRS is aware that it has to clarify its goal for how much it expects to collect. IRS originally estimated that the PCA program would result in $9 billion in tax collections and produce $7.2 billion in net revenue over 10 years. The Department of the Treasury estimated that $1.5 billion in net revenue would be produced over 10 years. IRS officials said the differences arise because each estimate was done differently. IRS acknowledged that its original estimate may be too high and is reworking it in light of the Treasury estimate.

Under "workload," IRS officials said that they are aware of the importance of selecting the right cases to send to PCAs for collection and plan to use consumer credit history data on delinquent taxpayers to identify those who would be more likely to pay if contacted. IRS officials said that the new case selection system will extend beyond selecting cases for PCAs and that the experience and knowledge IRS gains would contribute to IRS's broader modernization program for using data to improve how IRS does collection work. For example, IRS officials said that, in the future, the case selection data might be used to help determine which collection method—such as sending notices, using PCAs, or making in-person contact—might be more effective in attempting collection from a given taxpayer.

Under "evaluation," IRS officials said that they were aware that they had not developed plans or dates for evaluating the program to assess how well the PCA program achieves its results.
IRS officials said that developing the evaluation was premature given the other work needed to develop the program and the lack of legislative authority. IRS officials said they intend to start developing the evaluation plan after they receive this authority and to finish it before sending cases to PCAs. Evaluation plans developed before program implementation increase the likelihood that the necessary data and resources for proper evaluation will be available when needed. Many of the factors involve the development of an information system, and testing of the information systems being developed for the PCA program is an important task left to do. Our interviews with IRS officials and our reviews of IRS documents indicate that IRS plans to test the information systems to be used in the PCA program.

IRS officials informed us that they have slowed development of the program due to funding constraints and uncertainty over whether and when legislation will pass to authorize contracts with PCAs. Because IRS's fiscal year 2004 budget was not passed until January 2004, IRS officials said that IRS slowed work on the PCA program beginning in September 2003. These officials said that, because of various budgetary procedures, the appropriated funds were not released to the PCA program until March 2004. However, the officials explained that IRS, intending to be fiscally prudent, is delaying spending of the funds until passage of the legislation appears to be more imminent. IRS officials stated that if legislation to authorize the program is not passed during 2004, IRS eventually would suspend work on developing the program. These officials said that they have been balancing and managing their existing funds and the timing of their work given that the authorizing legislation might not pass. If this legislation passes, IRS officials said that they would need another 18 to 24 months to complete the many remaining tasks, as shown in table 2. IRS officials said that, if Congress passes authorizing legislation in summer 2004, the estimated date for starting to send cases to PCAs is July 2006.

Although IRS officials intend to study the relative performance of PCAs and IRS employees in collecting delinquent taxes, the study approach under initial consideration would provide policymakers limited information with which to judge whether and when the PCA strategy is the best use of resources. The tentative design idea—comparing PCA and IRS performance on similar types of simple cases that would be sent to PCAs—does not recognize that, in IRS officials' view, assigning employees to these cases would not be their best use given the need to work on other, higher-priority cases.

Among other issues concerning the proposed use of PCAs, policymakers and others have questioned whether using PCAs to collect tax debts is more efficient or effective than having IRS employees do so. During consideration of IRS's proposal, some members of Congress questioned whether IRS could collect the taxes that IRS plans to assign to PCAs at less cost or whether IRS would be able to collect a higher portion of the taxes that are due. During hearings, some witnesses raised similar concerns. IRS officials have said that IRS employees might be more effective than PCAs in collecting delinquent taxes because IRS employees have greater powers to enforce collections. These powers (such as tax liens and wage levies) may enable IRS employees to collect a higher portion of the taxes from the same types of cases on which PCAs would work.
IRS officials said that the proposal to use PCAs to collect simpler tax debts was not based on a judgment that PCAs would necessarily be more efficient or effective in collecting delinquent tax debt. Rather, the proposal was based on a judgment that Congress was unlikely to approve a substantial increase in IRS's budget to fund additional staff for the collection function. Officials believed that the growing inventory of tax debts was not a good signal to taxpayers about the importance of complying with their tax obligations. Given constraints in hiring staff, IRS officials said that using PCAs was the only practical means available to begin working significantly more collection cases that otherwise would not be worked due to IRS staffing constraints.

Although this policy judgment served as the rationale behind the PCA proposal, in March 2004, IRS provided us with projections of revenues and federal government costs for the proposed PCA program compared with projections for an alternative approach under which IRS would hire additional staff to work the same volume of the selected types of cases on which the PCAs would work. According to the analysis, PCAs would generate $4.6 in revenue for every dollar in cost, and IRS employees would generate $4.1. We did not review the data and assumptions that underlie these revenue and cost projections because the comparison that IRS constructed did not address the relevant economic question for policymakers seeking to reduce the backlog of uncollected taxes: what is the least costly approach for reaching a certain revenue collection goal? IRS's analysis did not examine other feasible approaches that IRS might be able to use, if given additional resources, to collect the same amount of revenue that the PCAs would bring in, but at lower cost.

Assuming IRS receives authority to use PCAs, IRS officials said they would design a study to compare the performance of PCAs with that of IRS employees. However, the study approach under initial consideration would provide policymakers limited information to help determine whether the use of PCAs as currently proposed is the best use of federal resources to collect tax debts. IRS's approach might show whether PCAs or IRS employees are best at working certain types of collection cases, but it would not show whether the use of PCAs as planned would be the best use of resources to deal with the overall collection workload. IRS officials said that although they believe they should conduct a study that compares PCA results to results achieved by IRS employees, they have not designed such a study. They expect to design the study after authorization to use PCAs is enacted and before sending cases to PCAs. Although the study approach will evolve, officials said that they are considering selecting a sample of the same type of simpler cases that will be sent to PCAs and having such cases also sent to a group of IRS telephone collection employees. The results generated by these IRS employees and by PCAs would be compared to see which option is more effective; how effectiveness would be defined and measured would be determined in designing the study. This potential design would help answer the relatively narrow—but important—question of whether and when PCAs or IRS employees are a better choice for working the specific types of cases to be sent to PCAs.
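The limits of the revenue-per-cost-dollar comparison above can be illustrated with simple arithmetic. In the sketch below, the $4.6 and $4.1 figures come from IRS's projections as reported; the $1 billion collection goal is hypothetical and chosen only for illustration.

# Illustrative arithmetic only. Revenue generated per dollar of cost implies
# the cost of reaching a fixed revenue goal under each approach; the $1 billion
# goal is hypothetical, while the ratios come from IRS's projections.
revenue_goal = 1_000_000_000  # hypothetical collection goal

for approach, revenue_per_cost_dollar in [("PCAs", 4.6), ("IRS employees", 4.1)]:
    cost = revenue_goal / revenue_per_cost_dollar
    print(f"{approach}: about ${cost:,.0f} to reach the goal")

# As the report argues, this ratio alone cannot identify the least costly
# approach overall, because other feasible collection strategies that might
# reach the same goal at lower cost were never examined.

Under these hypothetical numbers, PCAs would need about $217 million and IRS employees about $244 million to reach the same $1 billion goal, but that two-way comparison says nothing about unexamined alternatives, which is the report's central objection.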
However, IRS officials told us that using IRS employees on these simpler cases would be less productive than assigning them to work on a different mix of collection cases. These officials said that the simpler cases IRS plans to assign to PCAs are generally not the cases that IRS would assign to any additional collection employees, if hired. IRS employees would work on more complex cases that fit their skills and enforcement powers and that have a higher priority due to such factors as the type and amount of tax debt or the length of the delinquency.

Generally, federal officials are responsible for ensuring that they are carrying out their responsibilities as efficiently and effectively as possible. Various federal and IRS guidance reinforces this responsibility. For example, according to OMB Circular A-123, "the proper stewardship of Federal resources is a fundamental responsibility of agency managers and staff. Federal employees must ensure that government resources are used efficiently and effectively to achieve intended program results." OMB Circular A-94 states that agencies should have a plan for periodic, results-oriented studies of program effectiveness to, among other purposes, help determine whether the anticipated benefits and costs have been realized and whether program corrections are needed. IRS guidance states that, in selecting among courses of action, IRS managers should determine which is the most realistic and most cost-effective. Further, IRS has adopted a critical job responsibility for its managers that specifies their responsibility to achieve goals by leveraging available resources to maximize efficiency and produce high-quality results.

A study that focuses on the least costly approach to collecting a desired amount of tax debts would be more in line with federal guidance than the study that officials anticipate performing. Such a study would be more likely to answer the broader question of how IRS can be most efficient and effective in achieving its collection goals. One alternative design might entail comparing the results of using PCAs to the results of using the same amount of funds to be paid to PCAs in an unconstrained manner that IRS determines to be the most effective overall way of achieving its collection goals. Determining the most effective and efficient overall way of achieving collection goals would undoubtedly require some judgment. However, because IRS is developing a new case selection model for its own use, after some experience is gained both with using PCAs and with new IRS case selection processes, IRS should have better data to use in determining the best way of achieving its collection goals. If using PCAs as expected under the current proposal meets IRS's collection goals at less cost than the best unconstrained alternative, policymakers could be comfortable with continuing their use. If not, policymakers would have information available to consider whether changes in the use of PCAs would be appropriate.

Regardless of the approach chosen, IRS would have to address several challenges in designing a study to compare the use of PCAs and IRS employees. For instance, contracting for PCA assistance may provide flexibility over hiring additional IRS staff. To recruit, select, and train new staff, IRS could need many months or more, and, if experienced staff assist in training newly hired staff, the experienced staff would not be able to handle normal workloads.
Further, if the collection workload were to decrease, IRS may be able to reduce contract commitments more rapidly than it could reassign and, if needed, retrain IRS staff. To some extent, the study would have to account for similar types of direct and opportunity costs to hire, train, assign, and release employees of the PCA contractor. Accounting for these and other factors raises challenges to the design of a comparative study. Because IRS would not assign cases to PCAs for collection until 2006, it will have time to take these challenges into account and to better ensure that its study would be useful to policymakers. Further, in designing the study, IRS would have time to identify the data that would be needed for the study and to develop systems or processes for collecting those data.

IRS has an inventory of over $100 billion in tax debts that have some potential for being collected. In recent years, IRS has deferred collection action on billions of dollars of debt because it lacked collection staff to do the work. The growth in the backlog of unpaid taxes poses a risk to our voluntary tax system, particularly as IRS has fallen further behind in pursuing existing as well as new tax debt cases. We have placed the collection of unpaid taxes on our high-risk list since 1990 due to the potential revenue losses and the threat to voluntary compliance with our tax laws. Accordingly, we believe that effective steps need to be taken to improve the collection of these unpaid taxes. Because we did not analyze available options in this review, we are not taking a position on whether the use of PCAs is a preferable option. However, doing nothing more than has been done recently is not preferable. The compliance signals sent to taxpayers by the backlog of delinquent tax debts are not appropriate. When the majority of taxpayers in phone contact with IRS are those responding to written IRS notices, taxpayers and practitioners may conclude that failing to respond to IRS is an effective tactic for avoiding tax responsibilities.

If Congress does authorize PCA use, IRS's planning and preparations to address the critical success factors for PCA contracting provide greater assurance that the PCA program is heading in the right direction to meet its goals and achieve desired results. Nevertheless, much work and many challenges remain in addressing the critical success factors and helping to maximize the likelihood that a PCA program would be successful. Although IRS did an analysis suggesting that using PCAs may be a somewhat more efficient means to collect certain types of delinquent debts, that analysis was not done in a manner that informs policymakers whether the proposed use of PCAs is the least costly option to achieve IRS's collection goals. Further, given the lack of experience in using PCAs to collect tax debts, key assumptions are untested. Accordingly, if Congress authorizes the use of PCAs, Congress and IRS would benefit from a study that uses the experience gained with PCAs, and by IRS itself in using new case selection processes, to better determine whether and how the use of PCAs fits into an overall collection strategy designed to collect delinquent taxes most effectively and efficiently.
Although IRS officials have preliminary plans to conduct a study comparing the use of PCAs and IRS employees working the same types of cases, this study design would not help policymakers in Congress and the executive branch judge whether using PCAs as currently proposed is the best use of scarce federal resources. If Congress authorizes the use of PCAs, the IRS Commissioner should ensure, as soon as practical after experience is gained with PCAs, that a study is completed comparing the use of PCAs to a collection strategy that officials determine to be the most effective and efficient overall way of achieving collection goals. The Commissioner of Internal Revenue provided written comments on a draft of this report in a letter dated May 14, 2004 (see app. III). In the letter, the Commissioner said that our findings would help IRS focus its PCA program development efforts on those areas most critical to the success of the program if Congress authorizes IRS's use of PCAs. He agreed that IRS had taken actions to address the critical success factors we identified and acknowledged that significant actions remain to be completed, referring to several key PCA program project plan steps that have not been finished. In response to our recommendation that, if Congress authorizes IRS's use of PCAs, IRS conduct a study that compares the use of PCAs to a collection strategy that officials determine to be the most effective and efficient overall way of achieving collection goals, the Commissioner agreed that IRS would need to analyze the PCA program to determine its effectiveness and impact on the overall collection of delinquent taxes. He said that the detailed design for evaluating the PCA program will include a study to ensure that IRS is making the most effective and cost-efficient use of the total resources available. We are also sending copies to the Secretary of the Treasury, the Commissioner of Internal Revenue, the Director, Office of Management and Budget, and other interested parties. We will make copies available to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. This report was prepared under the direction of Thomas D. Short, Assistant Director. Appendix IV also lists major contributors to this report. If you have any questions about this report, contact me at brostekm@gao.gov or Tom Short at shortt@gao.gov, or either of us at (202) 512-9110. Figure 2 shows the annual gap between the number of cases assigned to field and telephone collections and the number of delinquent accounts worked to closure (excluding accounts for which collection workload was deferred), expressed as a percentage of the number of cases assigned. The following appendix provides some detail on various IRS actions to address the critical success factors. Critical Success Factor—Results Orientation: IRS envisions that the PCA program will meet the following goals: increase the collection of tax debts by $9.2 billion; increase the closure of tax debt cases by 17 million taxpayers; reduce the tax debt backlog; and increase taxpayer satisfaction by 12.5 percent. To motivate PCAs to achieve these results, IRS is devising a balanced set of measures, the "balanced scorecard," and a related performance-based compensation system. The performance scores on these measures also are to be used in determining financial bonuses and future case allocations to PCAs.
Specifically, PCAs with above-average performance scores are to be eligible for monetary bonuses if they meet minimum thresholds for five of the six performance measures. Also, each PCA's performance score is to be translated into a value used to determine its proportionate allocation of cases for the next quarter (a simplified sketch of this mechanism follows the list of measures below). IRS's intent is that the balanced scorecard will ensure that collection efforts are balanced appropriately among providing quality service; ensuring adherence to taxpayer rights; and complying with IRS policies, procedures, and regulations. The performance measures are to include the following:

Collection effectiveness: Dollars collected as a percentage of dollars assigned to be collected over the contract period.

Case resolution: Resolving assigned cases through immediate payment of the tax debts or installment payments over 3 years, identification of bankrupt or deceased taxpayers, or identification of hardships that affect the taxpayers' ability to pay.

Taxpayer satisfaction: Measured through random surveys of taxpayers on the accuracy and quality of actions taken by PCA employees and their adherence to various standards, and through taxpayer complaints.

PCA employee satisfaction: Measured through surveys of employees and their retention rates.

Work quality: Measured through audits of PCA cases and telephone monitoring of interactions with taxpayers.

Validated taxpayer complaints: Financial penalties are to be assessed and points subtracted from PCA performance scores if taxpayer complaints are validated.
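A minimal sketch of how such a score-based bonus and allocation mechanism could work appears below. The "five of six thresholds" rule and the proportionate allocation are from the report; the measure names, threshold values, composite scores, and case counts are invented for illustration, since IRS's actual weights and formulas were still under development.

```python
# Hypothetical sketch of the balanced-scorecard mechanics described above.
# All names and numbers here are illustrative assumptions.

def bonus_eligible(measure_scores, thresholds, composite, average_composite):
    """A PCA with an above-average composite score is bonus-eligible if it
    meets the minimum threshold on at least five of the six measures."""
    met = sum(1 for m, t in thresholds.items() if measure_scores[m] >= t)
    return composite > average_composite and met >= 5

def allocate_cases(composite_scores, total_cases):
    """Allocate next quarter's cases in proportion to composite scores."""
    total = sum(composite_scores.values())
    return {pca: round(total_cases * score / total)
            for pca, score in composite_scores.items()}

thresholds = {"collection": 70, "resolution": 70, "taxpayer_sat": 80,
              "employee_sat": 60, "quality": 85, "complaints": 90}
pca_a = {"collection": 75, "resolution": 72, "taxpayer_sat": 82,
         "employee_sat": 55, "quality": 88, "complaints": 95}

composites = {"PCA-A": 88.0, "PCA-B": 72.0, "PCA-C": 60.0}
avg = sum(composites.values()) / len(composites)

print(bonus_eligible(pca_a, thresholds, composites["PCA-A"], avg))
# True: PCA-A is above average and meets 5 of the 6 thresholds
print(allocate_cases(composites, 10_000))
# {'PCA-A': 4000, 'PCA-B': 3273, 'PCA-C': 2727}
```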
Critical Success Factor—Agency Resources: IRS has set up an infrastructure to administer the PCA program, oversee PCA contractors, and work on cases referred back to IRS from PCAs, and it has identified initial staffing needs for the program. IRS has estimated that 100 full-time equivalent (FTE) positions will be needed to initially staff the three elements of the program: 30 FTEs to administer the program and conduct oversight, and 70 FTEs to work on the cases referred back to IRS from the first round of PCAs selected to work on cases. As IRS learns about its staffing needs and sends cases to more PCAs over time, it plans to adjust its staffing accordingly. IRS has informed PCAs that the number of cases they receive over a set time period is to be based on their performance scores against the balanced measures. IRS plans to oversee the assigned workload to ensure that PCAs work on the full range of simpler cases. To motivate PCAs to do so, IRS plans to measure, among other things, the extent to which PCAs resolve the cases sent to them, including those that PCAs refer back to IRS without resolving the tax debt. IRS also is working on systems to help it identify the best cases to send to PCAs and to help it transmit and manage those cases. Critical Success Factor—Taxpayer Issues: IRS has drafted provisions to ensure that PCAs know that they must treat taxpayers properly and are aware of the consequences of not doing so. Proper treatment of taxpayers is one of the performance measures used to determine the performance score on which PCAs' monetary bonuses and case allocations are based. The following are examples of the draft provisions on proper taxpayer treatment. PCAs shall comply with all applicable federal and state laws, and the principal federal statutes and regulations currently governing collection activities are to be followed. Further, IRS plans to monitor PCA collection activities and treatment of taxpayers; any behavior that does not conform with the cited federal and state laws and regulations will be considered a breach of contract. IRS has informed PCAs that it will conduct customer satisfaction surveys and that customer satisfaction is one of the key components of the balanced scorecard used to determine financial bonuses and future case allocations. IRS plans to require that PCAs inform taxpayers, orally and in writing, how to report improper treatment by PCA employees to IRS. IRS has established preliminary plans for monitoring and measuring PCA performance through such means as conducting site visits and compensating PCAs according to their performance as reflected in the balanced measures scorecard. However, IRS has deferred doing much work on evaluating overall program performance, given the other work that had to be done and the resources that were available. In addition to those named above, Evan Gilman, Ronald Jones, John Lesser, Cheryl Peterson, and Jim Wozny made key contributions to this report.
Congress is considering legislation to authorize IRS to contract with private collection agencies (PCA) and to pay them out of the tax revenue that they collect. Some have expressed concerns that this proposal might be unsuccessful or inefficient, or might result in taxpayers being mistreated or having their private tax information compromised. This report discusses (1) the critical success factors for contracting with PCAs for tax debt collection; (2) IRS's actions to address these factors in developing the PCA program and the actions left to be done; and (3) whether IRS, if it receives the authority to use PCAs, plans to conduct a study that will help policymakers judge whether PCAs are the best use of funds to meet IRS's collection objectives. Based on our analysis of information from various parties, including officials from selected state revenue departments and federal agencies that use PCAs, five factors are critical to the success of a PCA collection program. Together, these factors increase the chances for success and help the program achieve desired results. Although its work is incomplete, IRS has taken actions to address these factors. For example, IRS has been developing (1) program performance measures and goals, (2) plans for a computer system to transmit data to PCAs, (3) a method to select cases for PCAs, and (4) contract provisions to govern data security and PCAs' interactions with taxpayers. IRS officials recognize that major development work remains and have plans to finish it. Officials said they would suspend work if PCA authorizing legislation is not passed during 2004. If legislation passes, officials estimated that it would take 18 to 24 months to send the first cases to PCAs. Aware of concerns about the efficiency of using PCAs, IRS intends to study the relative performance of PCAs and IRS employees in collecting tax debts after gaining some experience with PCAs. However, the initial idea for a study would provide limited information to judge whether or when the PCA approach is the best use of resources. The tentative idea, comparing PCA and IRS performance on the same types of simpler cases to be sent to PCAs, does not recognize that IRS officials believe that using IRS employees on such cases would not be the best use of staff. Federal guidance emphasizes efficiently and effectively using resources to achieve results and identifying the most realistic and cost-effective program option. Experience gained in using PCAs and a new IRS case selection process would help officials design such a study.
Most of the diseases treated by stem cell transplantation involve abnormalities of the blood, metabolic, or immune systems. These diseases include several forms of cancer as well as certain nonmalignant diseases. They strike all races, although one racial group or another may have a higher incidence rate for a particular disease. Not all patients with diseases that may be cured by stem cell transplants necessarily pursue them. Depending on a number of donor and patient characteristics, about 10 to 50 percent of patients are alive 5 years after transplants. The patients who do not survive may succumb either to their diseases or to the consequences of transplantation. Because of these low survival rates, some patients and physicians may be reluctant to select this stressful treatment under most or all circumstances. For most of the diseases involved, other therapies are available that may be less invasive, carry lower risk, or be the medically preferred initial treatment. Nevertheless, some of these diseases are best treated by stem cell transplantation, either initially or after other treatments have failed. Prior to stem cell transplantation, the patient's bone marrow and, consequently, immune system are destroyed with radiation or chemotherapy. The patient's bloodstream is then infused with healthy stem cells from a donor. Healthy stem cells can be therapeutic because they can develop into all the components of blood, including those needed to replace the patient's immune system. In an "autologous" transplant, these cells come from the patient's own marrow. In a "syngeneic" transplant, the cells come from an identical twin. For many diseases, the most common type of transplant is an "allogeneic" transplant, which uses stem cells from a genetically compatible donor. Although bone marrow was initially the only source of stem cells for transplantation, in recent years two other sources, umbilical cord blood and peripheral blood stem cells (PBSC), have also been used. In 2001, 1,215 of the transplants facilitated by NMDP (70 percent) involved marrow, 42 (2 percent) involved cord blood, and 491 (28 percent) involved PBSC. Umbilical cord blood is collected from the placenta and umbilical cord of a newborn and then preserved in a cord blood bank until needed by a matched patient. The number of stem cells typically obtained from cord blood is relatively small but is often adequate for pediatric patients. For cord blood, donation occurs when the blood is banked, not when it is used. The Registry began an umbilical cord blood stem cell program in 1998. Stem cells from peripheral blood may be obtained in numbers sufficient for transplantation when the donor is treated with a drug that causes the cells to leave the marrow and enter the bloodstream, where they can be extracted by apheresis, a process in which the stem cells are removed and the remaining components of the blood are returned to the donor. A donor matched to a patient may be asked to donate either bone marrow or PBSC, depending on the preference of the patient's physician. The Registry has offered PBSC to patients since 1999. In addition to such common determinants of treatment success as patient age and disease severity, the outcome of a transplant depends on the degree of match between donor and patient with respect to particular blood cell proteins—the human leukocyte antigens (HLA)—that are part of a person's genetic makeup.
Each person has three primary pairs of these antigens (one set of three inherited from each parent) that play a major role in the compatibility of a transplant. A matched donor is defined as one for whom each of these six antigens is of the same HLA type as the patient's. If a matched donor cannot be found, then a donor with certain types of mismatch may be used, depending on the transplant center's preferences, although usually with poorer results. In general, the more closely related two people are, the more likely it is that their HLA will match. At one extreme, identical twins always match and, in fact, match on all antigens, not just the six ordinarily focused upon. At the other extreme, members of separate racial groups are relatively unlikely to match one another. Full siblings can provide a six-out-of-six match, resulting in what is called an "HLA-identical sibling transplant," but only about 30 to 40 percent of patients can be expected to have a matched sibling donor. As a result, unrelated donors with matched HLA are sought from the registries in which their HLA type has been recorded. The definition of a match has been refined over time as scientific understanding of HLA increases. HLA are being typed more precisely, so more types of HLA can now be distinguished. Thus, some of today's matches may be judged as mismatches in the future because better matches are possible. This increasing refinement does not mean, however, that finding a suitable match for transplantation is inevitably becoming more difficult. Some kinds of mismatch may be less dangerous than others. As a result, as research continues, there may be fewer matches by today's standards, but relatively harmless mismatches will be recognized as such and used. Further, there is evidence that cord blood may not require as exact an HLA match as is usually sought. In support of the Registry, NMDP manages a worldwide network of more than 400 organizations: donor centers, recruitment groups, contract laboratories where tissue is typed, apheresis centers, cord blood banks, collection centers where marrow is harvested, blood sample repositories, and transplant centers. More than half of these organizations are donor centers (91) or transplant centers (149). The relationship of these network components to NMDP varies. Some, such as the recruitment groups, were designed to be parts of the network and work with NMDP, whereas others, such as the transplant centers, exist separately from the network and function independently of NMDP except where specified by contract. The NMDP network includes donor centers and other organizations in foreign countries. The foreign donor centers merge their files with the Registry, contributing more than one million donors. These centers are required to comply with NMDP policies, program standards, and other criteria, although the fees for recruiting donors and other financial incentives and payments that go to U.S. centers are not paid to foreign centers. NMDP has also signed cooperative agreements with national registries in 13 foreign countries. Although certain data on donors recruited into these registries are not entered into the Registry's computer system, these foreign registries will search their donor files on behalf of a U.S. patient searching the Registry. In addition, 6 foreign apheresis centers, 18 foreign bone marrow collection centers, and 36 foreign transplant centers are affiliated with the Registry. NMDP's affiliations with foreign donor and transplant centers result in its facilitation of both foreign-to-U.S.
and U.S.-to-foreign donations. The existence of these international affiliations with the Registry does not prevent U.S. transplant centers from obtaining stem cells through foreign registries directly, that is, without going through Registry channels. Even domestically, the Registry is not a monopoly; other U.S. registries also maintain lists of donors, conduct searches for stem cells, or perform both of these functions. These other registries, however, are relatively small; often specialize in donors from particular racial or ethnic groups; and are private, with no national requirements. The Registry serves two groups of people: donors and patients. The Registry's donor centers and recruitment groups recruit donors, who are then managed by the donor centers. The Registry pays these centers and groups for signing up donors. In view of the past underrepresentation of minorities in the Registry, NMDP has initiated several recruitment efforts to increase its racial and ethnic diversity. For example, it provides free or low-cost minority-specific educational materials to donor centers and recruitment groups. Probably the most important aspects of managing donors are maintaining their commitment to donation so that they are locatable and willing to donate when their stem cells are requested, keeping records of how to contact them, and dropping from the list any individuals who are too old or no longer able or willing to donate. A patient's first contact with the Registry occurs when his or her physician or a transplant center conducts a free, preliminary search of the Registry for stem cell donors and cord blood units. The preliminary search, which takes about 24 hours, produces a list of donors and cord blood units that are potentially suitable for that patient. However, many patients for whom such searches are conducted are not good candidates for stem cell transplants. For example, some searches may be conducted for patients who are too sick for transplantation or who are good candidates for less invasive therapies. If the physician and patient decide to continue a search for an unrelated donor (or unrelated cord blood) on the Registry, then more information about the matching stem cells is required and a formal search is begun. Only a physician affiliated with a transplant center in the NMDP network may conduct a formal search of the Registry. The Registry bills the transplant center a one-time activation fee of $600. It also bills the center for the cost of the four or five testing components of the search process, each of which costs more than $100. Since several donors may have to be tested before one is selected for the patient, these component charges may be made repeatedly, resulting in a search costing thousands of dollars to the transplant center, and more to the patient when the center adds its markups (the sketch following this paragraph illustrates the arithmetic). Relatively few insurance plans pay for searches; however, plans often pay for the actual transplantation, including the procurement of stem cells. The details of the formal search and the subsequent steps in the process possibly leading to transplantation depend on the additional information needed; the results of laboratory tests; and the kind of stem cells sought, whether stored blood from an umbilical cord or blood or marrow from a living donor.
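To put the search charges described above in rough dollar terms, the sketch below totals a hypothetical formal search. The $600 activation fee and the repetition of per-donor testing components are from the report; the individual component costs (each "more than $100") and the number of donors tested are illustrative assumptions.

```python
# Rough illustration of Registry search charges to a transplant center.
# The activation fee and the repeated per-donor component charges reflect
# the fee structure described above; the specific component amounts and
# donor counts are assumed for illustration.

ACTIVATION_FEE = 600
COMPONENT_COSTS = [125, 150, 110, 140]   # assumed per-donor testing charges

def search_cost(donors_tested: int) -> int:
    """Total billed when testing is repeated for each donor evaluated."""
    return ACTIVATION_FEE + donors_tested * sum(COMPONENT_COSTS)

for n in (1, 5, 10):
    print(f"{n:2d} donors tested: ${search_cost(n):,}")
# 1 donor: $1,125;  5 donors: $3,225;  10 donors: $5,850
```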
If a suitable donor or suitable cord blood unit is found, and if other requirements in the process toward transplantation are fulfilled, then either (1) the marrow is harvested from the donor at a collection center, (2) PBSC are collected from the donor at an apheresis center, or (3) the cord blood is shipped from a cord blood bank. The stem cells are transported to the transplant center, often by courier. The final step is infusing the selected marrow, PBSC, or cord blood into the patient's bloodstream. The entire process—from the initiation of the formal search to the transplant (infusion)—typically requires many months and sometimes more than 1 year. However, some patients cannot wait this long for transplants because their medical conditions are deteriorating. During the search process, NMDP offers patient advocacy services through two channels. Its Office of Patient Advocacy (OPA) provides several services, including education, support, case management intervention, financial assistance, and special advocacy projects. For example, OPA publishes the Transplant Center Access Directory, a patient guide listing all transplant centers in the NMDP network. The directory describes each center's HLA matching criteria and lists the diseases each typically treats with unrelated donor marrow transplants. The directory also provides information on comparable search charges and risk-adjusted patient survival data. In addition to the services provided through OPA, NMDP requires that each transplant center have a patient advocate on staff. The patient advocate must be familiar with the center's transplant program and with issues of unrelated donor stem cell transplantation and must not be a member of the transplant team. A 1996 review by the Office of Inspector General (OIG) raised concerns about donor center costs and performance. Before the review, NMDP used two methods to finance donor centers. NMDP paid for services at some donor centers through cost-based contracts for direct expenses, such as labor and fringe benefits and donor expenses. Other donor centers received payments from NMDP for specified activities, such as donor recruitment and donor search activities. The OIG recommended that the Health Resources and Services Administration (HRSA) and NMDP develop a payment approach for all donor centers that more directly linked funding to performance and emphasized the recruitment and retention of donors, particularly donors from racial and ethnic minority groups. Further, the OIG recommended that HRSA and NMDP develop procedures to monitor the performance of donor centers and other organizations in the NMDP network. The program's recruitment efforts have apparently increased the number of donors on the Registry since 1998 for all racial and ethnic groups, and the theoretical probability of finding a match has increased steadily over the life of the Registry. By 2001, the number of donors from each minority group on the Registry had grown by at least 30 percent and was either greater than or no more than 2 percentage points below its representation in the general population. However, when viewed as a percentage of each group's population, African Americans and Hispanics are still substantially underrepresented. For all racial and ethnic groups, the theoretical probability of finding a match has grown as the Registry size has increased, but equal access to a match may not be attainable.
Differences among racial and ethnic groups in the rarity and variability of the genes responsible for compatibility in transplants may mean that the Registry cannot achieve equal probability for all groups. Further, the goal of equal access to a match conflicts to some extent with attempts to maximize the overall numbers of matches and transplants for the Registry. The size of the Registry has increased by 36 percent since 1998, and no minority group's numbers increased by less than 30 percent. NMDP's efforts to recruit minorities may have substantially increased the number of donors from these populations. Percentage increases for minorities ranged from 30 percent for Native Americans to 53 percent for Hispanics. Caucasian donors increased 28 percent. (See table 1.) The multiple-race category had the largest increase, 123 percent, but this may result in part from an increase in the use of that category by those to whom it applies, rather than solely from an increase in the availability of donors of that group. The total of more than 1,000,000 minority donors listed in 2001 contrasts with the approximately 80,000 we reported in 1992. As can be seen in table 1, by 2001, the proportions of both African Americans and Hispanics on the Registry were within 2 percentage points of their proportions in the 2000 U.S. population. The proportions of other minorities on the Registry were either approximately equal to or exceeded their proportions in the population. While the differences between Registry and population levels of representation for African Americans and Hispanics reflect improved representation of these groups, the 2-percentage-point differences still indicate a substantial underrepresentation in comparison with their proportions in the U.S. population. Specifically, in 1992, the proportions of African Americans and Hispanics, both at 4 percent of the Registry, were 8 and 5 percentage points lower, respectively, than their proportions in the U.S. population (which were 12 and 9 percent, respectively). This translated to a 67 percent underrepresentation for African Americans and a 56 percent underrepresentation for Hispanics. The current 2-percentage-point differences on the Registry for these groups translate to a 17 percent underrepresentation for African Americans and a 15 percent underrepresentation for Hispanics. For all racial and ethnic groups, the theoretical probability of a patient's finding at least one matched donor has increased every year since 1988 but has leveled off somewhat since 1998. The increase in theoretical probability represents significant progress in raising the likelihood of a match. It reflects the inclusion in the Registry of the most common genetic types over the period when the Registry was small and new and recruitment efforts were beginning. The leveling off likely reflects the fact that, for all groups, after years of recruitment activity, improvement now occurs mainly when rare types are added. (See fig. 1.) Nevertheless, the theoretical probability of finding a match varies by race, ranging in 2001 from under 60 percent for African Americans to over 80 percent for Caucasians. This probability has always been higher for Caucasian patients than for patients in any minority group, in part, perhaps, because of Caucasians' greater numbers and level of representation on the Registry. The theoretical probability of finding a matched donor has been lowest for African American patients.
This is because, in addition to their smaller numbers and lower level of representation on the Registry, their rarer and more varied HLA combinations make matching harder. Because of genetic differences among racial and ethnic groups, there is reason to believe that patients from some minority groups, notably African Americans, may never have the same probability of finding matches, and therefore of access to transplants, as Caucasian patients, regardless of the efforts made to recruit them. Any patient is more likely to find a match in his or her own racial and ethnic group than in another group, so patient matching rates depend, to some extent, on the number of people in the patient's group on the Registry. All minorities are at a disadvantage for this reason. Further, some minority groups, such as African Americans, are known to have more rare and more varied HLA combinations than do Caucasians. The likelihood of finding a match from among a group of racially or ethnically defined donors declines with the rarity and number of possible genetic types found among the members of that group. In addition to these factors related to finding a match, other factors may contribute to differences in access to a transplant. Some of these depend on the characteristics of those who volunteer for the Registry. For example, donors from different groups may differ in their tendency to be available (locatable, willing, and physically able) when called upon to actually donate. Other possible factors involve the attitudes, health, medical care, resources, and preferences of the patients. Patients of different groups may differ in their tendency to engage the health care system at all, to seek help early enough in their illnesses, or to search the Registry as opposed to pursuing other options. It may be possible to effect changes in these factors, thereby moving closer to the goal of equal opportunity for all racial and ethnic groups. However, not only is the goal of equal access to transplants for all groups difficult to attain, but it also may conflict with the statutory goal of maximizing the number of patients who find a match and thereby maximizing the number of transplants facilitated. Recruiting donors with the rare HLA combinations that may be needed for minorities is difficult. Large numbers of donors must be recruited and retained in the Registry in order to identify and add each rare genetic type to the donor pool, so the cost of recruiting such donors, that is, the incremental cost of adding these rare genetic types to the donor pool, is large. Thus, devoting many resources to the pursuit of a small number of rare genetic types may divert resources from other efforts, such as recruiting Caucasians and other groups with more common genetic types, which might more readily increase the number of matches. Because of the difficulty encountered in finding matches for minority patients, NMDP engages in a number of initiatives to increase the Registry's diversity. It conducts outreach, recruitment, and educational efforts directed toward minorities. In addition, NMDP has initiated a program to pay the full costs of HLA tissue typing for minority donors. Although the difficulty in finding matches for minority patients may be unavoidable, it may be mitigated somewhat by the efforts of the Registry to increase the number of donors on whom it has complete HLA typing. The vast majority of actual donations are obtained from donors whose HLA is fully typed.
When only these donors are considered, each minority constitutes a larger portion of the Registry than its representation in the population. Therefore, because access to a match depends, for the most part, on the fully typed donors on the Registry, access for minorities may be somewhat better than might be assumed by looking at the Registry as a whole. Although the exact number of patients in need of transplants from unrelated donors is not known, the number of patients utilizing the Registry to search for matches is about one-third of the estimated number of patients in need of unrelated donor transplants. About one-tenth of the number of patients estimated to be in need of unrelated donor transplants obtain transplants facilitated by NMDP. These figures suggest that the Registry may be underutilized for both searching and facilitating transplants. Physicians for approximately 15,000 U.S. patients requested preliminary searches of the Registry from 1997 through 2000. This number represents 34 percent of the 44,740 U.S. patients estimated to be in need of stem cell transplants from unrelated donors in that 4-year period. About 4,000, or 27 percent, of the patients whose physicians searched the Registry eventually received transplants facilitated by NMDP. However, a significant proportion of searches were not completed because stem cells were obtained from donors or organizations without the involvement of NMDP. From 1997 through 2000, physicians carried out preliminary searches for 34 percent of the number of U.S. patients estimated to be in need of transplantation from unrelated donors at any time during that period. The number of transplants facilitated by NMDP for all U.S. patients was 9 percent of the number estimated to be in need. The precise number of patients in need of unrelated donor transplants is not known. However, there is a greater than 10-to-1 ratio between the number of such patients estimated to be in need and the number of transplants facilitated by NMDP. This suggests that the Registry may be underutilized, as many more U.S. patients may need unrelated donor transplants than obtain them through the Registry. The ratio of the number of preliminary searches to the number of patients in need varied by race and ethnicity. Among specific racial and ethnic groups, the percentage of preliminary searches was highest for Caucasian patients (35 percent) and lowest for Hispanic patients (24 percent) and Native American patients (24 percent). (See table 2.) We do not know why these apparent disparities in search rates exist. About one-fifth of the number of patients estimated to be in need formally searched the Registry (9,623 out of 44,740). Less than one-tenth of those estimated to be in need ultimately received NMDP-facilitated transplants. The numbers and percentages of preliminary searches that progressed to formal searches from 1997 through 2000 are presented by racial and ethnic group in table 2. The overall rate of progression from preliminary to formal search is 63 percent. Further, 4,056 of the 15,231 U.S. patients (27 percent) for whom preliminary searches were conducted from 1997 through 2000 eventually received NMDP-facilitated transplants. This number corresponds to 9 percent of the number of patients estimated to be in need of unrelated transplants during that period.
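The percentages in this discussion follow directly from the counts given; the short sketch below simply recomputes them from the report's own figures (no new data or assumptions).

```python
# Quick check of the utilization percentages above, using only figures
# stated in the report for 1997 through 2000.

in_need     = 44_740   # estimated U.S. patients needing unrelated transplants
preliminary = 15_231   # patients with preliminary searches
formal      =  9_623   # patients with formal searches
transplants =  4_056   # NMDP-facilitated transplants

print(f"preliminary / in need:     {preliminary / in_need:.0%}")      # 34%
print(f"formal / in need:          {formal / in_need:.0%}")           # 22%
print(f"formal / preliminary:      {formal / preliminary:.0%}")       # 63%
print(f"transplants / preliminary: {transplants / preliminary:.0%}")  # 27%
print(f"transplants / in need:     {transplants / in_need:.0%}")      # 9%
```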
Reasons for cancellation of preliminary searches or formal searches vary. Although clinical reasons, such as a change in medical condition, are the most commonly cited explanations for cancellation of both preliminary and formal searches, another relatively frequent reason is that stem cells are obtained from a provider other than NMDP, such as a related donor or another registry. (See tables 3 and 4.) We do not know the proportion of these cases that used a related donor, and some cases may not have been able to find a potential match at NMDP. However, it is likely that in at least some of these cases, NMDP might have facilitated a transplant if the patient's transplant center had not selected another registry to provide the stem cells, thus representing another kind of possible underutilization of NMDP. Lack of donor availability (not finding any potential matches) and financial reasons are not commonly cited as reasons for cancellation of either kind of search, although it is possible that patients with limited financial resources or insurance may not be encouraged to make preliminary searches. Several factors may influence a decision to obtain stem cells from a provider outside the NMDP network, including the source of stem cells preferred by the physician, the costs involved, and the timeliness of the response. Outside providers may need to be used when the physician sees cord blood as a viable alternative to bone marrow or PBSC, because some cord blood banks do not list their cord blood units with NMDP. Search and procurement costs can also be a factor. Administrators of transplant centers that have done non-NMDP-affiliated transplants told us that other registries charge less for searches than NMDP does. For example, we were told that only a few other registries worldwide charge a search activation fee in addition to their charges for the specific medical procedures needed to confirm that a particular donor is healthy and matched to the patient. In addition, the cost of stem cell procurement through NMDP tends to be higher. One transplant center director told us that the center pays about $13,000 for stem cells obtained directly from overseas registries and about $21,000 for NMDP stem cells. However, even when NMDP is not paid for a formal search or for stem cells, it may still have been utilized. An official at NMDP informed us that it is possible for a transplant center to determine, on the basis of a preliminary search, the NMDP-affiliated registry at which a foreign (but not domestic) potential match is registered and to contact the foreign registry directly to obtain the stem cells. Moreover, that official stated that some transplant centers may do this regularly. Thus, although NMDP may not be recorded as having facilitated the transplants that result, its role in helping to locate donors in such cases means that its utilization is somewhat greater than the record suggests. Timeliness can be another factor. A few center administrators mentioned that NMDP takes longer to provide stem cells than do other registries. For example, one administrator told us that the time it takes to obtain a donor sample for testing at the transplant center—an important component of the overall search process—can be a week longer for NMDP than for a foreign registry, depending on whether NMDP judges the search to be urgent. Waiting this additional week can be frustrating for those at the transplant center who are anxious to determine whether they have a confirmed match or will have to continue searching.
Another director told us that stem cells from non-NMDP providers are more likely than stem cells from NMDP to be received by the date the transplant center requests them. NMDP has attempted to shorten its time from formal search initiation to transplant and reports that its median time has decreased from 4.8 months in 1992 through 1993 to 3.7 months in 2000. The optimal time frames for patients vary. Some searches may not be urgent, but NMDP has shown that it is possible to complete urgent searches in less than a month and reports that it expects to begin offering urgent searches as an option to transplant centers. Organizations that participate in the NMDP network generally comply with the standards and procedures it has established. To encourage adherence, NMDP uses various mechanisms to monitor compliance and performance. These include site visits, the Continuous Process Improvement (CPI) program, and incident reports, as well as a financial incentive system designed to improve the performance of donor centers. The results of the selected site visits, analysis of CPI measures, and incident report summaries we reviewed show that the organizations in the NMDP network generally adhere to NMDP's standards and procedures. In general, NMDP ensures compliance by taking action against noncompliant organizations. (See app. II for examples of how NMDP uses these systems to achieve compliance with respect to selected activities.) In 2001, NMDP required 24 donor and transplant centers to take corrective actions because they did not meet its standards. The incentive system encourages compliance by linking donor center reimbursement to performance. NMDP uses several mechanisms to encourage the compliance and performance of the participating organizations in its network. NMDP staff members conduct site visits to donor centers to monitor the centers' compliance with NMDP's standards and procedures and to provide feedback about the results. It also employs the CPI program to assess and provide feedback at donor, transplant, and bone marrow collection centers. Further, NMDP monitors incident reports from donor, transplant, and collection centers and may take corrective action, including, in serious cases, suspension or termination. According to NMDP officials, NMDP staff members conduct site visits at donor centers approximately every 2 years to assess donor center compliance with program standards and procedures. NMDP staff members review the organization of the program (such as its support and staffing structure), recruitment activities (such as performance against goals and donor drive compliance), donor management activities (such as management of patient-related donor search requests, confidentiality procedures, and records management), and billing and reimbursement to determine adherence to NMDP's standards and procedures. They also compare performance against goals for various recruitment activities. Upon completion of these visits, NMDP staff members discuss the results with the center staff and provide a summary report. Centers that are noncompliant are advised of the problems and are required to submit corrective action plans to NMDP that address the problems. Our review of donor center site visit reports indicates that the reports identified problems and the corrective actions required of the centers to meet NMDP criteria.
Since 1998, NMDP has conducted additional site visits at transplant centers to verify the accuracy of the data that the transplant centers submit electronically to NMDP. NMDP staff members compare the data from the centers' records with the data in NMDP's computer system. During these visits, NMDP staff members may also review other activities, such as the signing of patient consent forms. The site visits are scheduled for each transplant center every 4 years. NMDP plans to issue its first annual report on the results of the first cycle of site visits in September 2002. NMDP monitors the operations and performance of its centers through the CPI program. The program includes nine goals to increase the efficiency of key activities in the search and donation process and measures performance against these goals. For example, at donor centers, NMDP measures the timeliness of registering new donors, resolving search-related requests, and processing requests for HLA blood typing. At transplant centers, NMDP measures the time it takes to resolve and report confirmatory testing results. NMDP also monitors post-transplant data submission through CPI. These outcome data are used in research studies to analyze outcomes for donors and patients. NMDP also monitors the accuracy and timeliness with which donor and transplant centers submit donor and patient blood samples to NMDP's research repository. NMDP provides regular feedback to donor and transplant centers concerning their performance on CPI measures. For example, each center receives a monthly report summarizing the results of its activities, along with those of all other centers, in the previous month. The reports allow centers to analyze how consistently they perform and to compare their results with those of other centers in the network. NMDP also conducts a year-end analysis to provide feedback to centers. Through its CPI program, NMDP monitors whether organizations in its network meet goals for timeliness and may recommend corrective actions for centers that do not meet these goals. A year-end analysis of the CPI program shows that during 2001 almost half of donor centers (44 of 91) met all nine CPI goals for the search process. In addition, 20 more donor centers met eight of the nine goals, and 9 others met seven of the nine goals. According to NMDP, the remaining 18 donor centers (20 percent), which met six or fewer goals, were the focus of technical assistance to improve their performance. Our analysis shows that 5 of the 91 donor centers (5 percent) were placed on review or probation for failing to meet CPI goals in 2001. Our analysis also shows that NMDP placed 18 of the 129 transplant centers (14 percent) on probation. Eight of these were placed on probation for failure to meet CPI goals for the search process, seven for failure to meet CPI measures concerned with timely submission of recipient follow-up information, and three for problems related to the accuracy and timeliness of submissions of donor and patient research blood samples. NMDP supplements these activities with incident reports, which are written accounts of deviations from policies and standards, categorized by the nature of the deviation into categories including, but not limited to, confidentiality concerns, customer service, and product transport. NMDP uses incident reports to track deviations from its standards by recording the specifics of incidents. NMDP staff members follow up on and investigate incidents.
In addition, an NMDP committee reviews a summary report of incidents twice a year to identify developing trends that may affect an individual center or the entire network. Since NMDP reviews center participation annually, the committee may follow up on deviations from NMDP's standards or take action such as probation, suspension, or termination during the reapplication process. We reviewed a summary of incidents categorized by type of problem and the corrective actions taken to resolve them. For example, one incident involved an operating room staff member administering blood other than the donor's own blood, which was available for that purpose, during a marrow harvest. NMDP monitored an investigation at the hospital to ensure that the problem would be addressed. To improve the operation of its donor centers, NMDP ties their reimbursement to their performance. In 1997, NMDP instituted a new reimbursement system that links payment to performance on CPI goals for all donor centers. NMDP pays donor centers a fee for each activity to recruit donors for the Registry, such as signing up donors, typing their tissues, maintaining their files, and other activities related to confirming that the donors identified as potential matches for a searching patient actually match and are medically cleared for donation. NMDP pays each donor center a recruitment fee of $28 for every minority donor and $10 for every Caucasian donor recruited, up to the number specified in the center's recruitment goal. NMDP establishes annual recruitment goals for each donor center based on the demographics of the local population. Donor centers that do not register a specific percentage of newly recruited donors within a certain period incur financial penalties. For example, the CPI goal for registering new donors is to register at least 85 percent of them within 35 days of the date on which they volunteer; NMDP would reduce the total recruitment fee it pays to a donor center that registers less than 85 percent of new donors within this time frame (a simplified sketch of this payment scheme follows). NMDP data show that in May 2001, 98 percent of all donor centers met this goal. In addition, NMDP pays incentives to donor centers for retaining donors at various points in the donation process.
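A minimal sketch of this reimbursement scheme follows. The $28 and $10 fees, the goal caps, and the 85-percent-within-35-days registration goal are from the report; the 10 percent fee reduction and the example counts are illustrative assumptions, since the report does not state the size of the penalty.

```python
# Hypothetical sketch of the CPI-linked recruitment reimbursement described
# above. The penalty rate and the example figures are assumptions.

MINORITY_FEE, CAUCASIAN_FEE = 28, 10
ON_TIME_GOAL = 0.85           # share of new donors registered within 35 days
ASSUMED_PENALTY = 0.10        # assumed fee reduction when the goal is missed

def recruitment_payment(minority, caucasian, goal_minority, goal_caucasian,
                        on_time_share):
    """Fees are paid only up to the center's recruitment goals; the total
    is reduced if too few new donors are registered within 35 days."""
    fee = (min(minority, goal_minority) * MINORITY_FEE
           + min(caucasian, goal_caucasian) * CAUCASIAN_FEE)
    if on_time_share < ON_TIME_GOAL:
        fee *= 1 - ASSUMED_PENALTY
    return fee

# A center recruiting 400 minority and 900 Caucasian donors against goals
# of 350 and 1,000, with 80 percent registered on time:
print(recruitment_payment(400, 900, 350, 1000, 0.80))   # 16920.0
```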
In spite of progress in recruiting minority donors, racial and ethnic disparities in the Registry remain, due in part to differences in the genetic variability within groups. Thus, differences among racial and ethnic groups in the probability of obtaining transplants will likely continue. Many patients in need of transplants may not search the Registry; many of those who do search never obtain transplants; and for some who do, the transplants are not facilitated by NMDP. Although NMDP enhances the quality of its network by actively monitoring the compliance and performance of the component organizations, it has not attained the level of utilization that might be expected. In its written comments on a draft of this report, HRSA stated that the report provides an accurate and helpful overview of the status of the National Bone Marrow Donor Registry. HRSA agreed that recruitment of donors cannot be the sole strategy for improving access to unrelated donor transplants for minority patients or those with unusual antigens, and cited the need for other efforts to supplement recruitment activities. However, HRSA noted that the Registry consists of two distinct groups of donors, those who are fully HLA typed and those who are less than fully typed. Since the vast majority of actual donors are selected from the fully typed portion, minority racial and ethnic groups make up a larger proportion of this pool than of the U.S. population. We have noted in the report that, because of this, access for minorities may be somewhat better than might be assumed by looking at the Registry as a whole. With regard to underutilization of the Registry, HRSA agreed that many patients who could benefit from unrelated donor transplants never consult the Registry or do so too late in the course of their illnesses. HRSA suggested a slightly modified method for estimating the number of patients in need. We modified table 2 in accordance with its suggestions but note that both approaches produce virtually identical estimates of overall utilization. (See app. I.) Finally, HRSA noted that many factors affect the time required to complete a search of the Registry. While searches frequently take many months and the median search time has decreased, NMDP has completed medically urgent searches in less than a month, on a pilot basis, and reports that it expects to begin offering urgent searches as an option to transplant centers. We have revised the report to include this clarification. HRSA also provided technical comments, which we incorporated as appropriate. HRSA's comments are reprinted in appendix III. We are sending this report to the Administrator of HRSA, the NMDP Chief Executive Officer, and other interested persons. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff members have any questions about this report, please call me at (202) 512-7119. Key contributors to this assignment are listed in appendix IV. Registry utilization is the extent to which patients in need of unrelated stem cell transplants search the Registry or obtain NMDP-facilitated transplants. In determining utilization, it is necessary to use indirect methods to calculate the number of patients in need because it is impossible to determine this number directly. For example, although we may be able to obtain data on the number of patients who have been diagnosed with certain blood and immune system diseases, we are unable to determine the number for whom stem cell transplants are the best treatment. One measure of the utilization of the Registry is the extent to which patients in need search the Registry; maximum possible utilization would be indicated if the number of U.S. patients conducting preliminary searches were approximately equal to the estimated number of patients needing unrelated donor transplants. A second measure is the extent to which the number of patients obtaining transplants facilitated by the Registry is as high as it could be. The method we used to assess the two aspects of utilization—searching the Registry and obtaining an NMDP-facilitated transplant—is also used by NMDP. It involves estimating the number of patients in need of unrelated donor transplants by using data on the number of HLA-identical sibling transplants obtained from the International Bone Marrow Transplant Registry (IBMTR). This method and two alternative methods that NMDP also uses to assess utilization by U.S. patients, one based on the number of preliminary searches conducted and the other based on the incidence of disease, are described here.
For the years 1997 through 2000, we estimated the number of Caucasian patients in need of unrelated donor transplants based on the average annual number of Caucasian HLA-identical sibling transplants performed during those years. To obtain this estimate, we multiplied the number of HLA-identical sibling transplants for Caucasians by the number of patients of that group that genetic theory predicts—on the basis of the average number of children born to the women of that group—are in need of unrelated donor transplants for every Caucasian HLA-identical sibling transplant in the United States. The average number of children born to Caucasian women over a lifetime during the years 1989 through 1995 was 1.7925. Subtracting the individual who is in need of a transplant gives n = 0.7925 as the number of siblings available to be transplant donors. The likelihood of a match between two siblings is 25 percent because each child inherits one-half of each parent's HLA genes, resulting in a one-in-four chance that two siblings have the same HLA genes. Therefore, the probability that no sibling HLA-identically matches the one in need is P = (0.75)^n. For a Caucasian patient, P = (0.75)^0.7925 = 0.796134. The number of patients in need of unrelated stem cell transplants is equal to the number of sibling donor transplants multiplied by P/(1 − P). Therefore, for every HLA-identical sibling transplant recorded for a Caucasian patient, there will be 0.796134/(1 − 0.796134) = 3.90518 patients in need of unrelated donor transplants. Because there were 7,920 sibling transplants performed for Caucasian patients from 1997 through 2000, we estimate that 3.90518 × 7,920 = 30,929 Caucasian patients were in need of stem cell transplants during that period. The estimates for other racial and ethnic groups are presented in table 2. Because minorities generally have less access to health care and may therefore have less access to sibling transplants specifically, these estimates were obtained by assuming that each minority group's need for unrelated donor transplants is proportional to the Caucasian group's need; specifically, each was obtained by multiplying the number of persons in the minority group by the proportion of Caucasians in need of unrelated donor transplants. This approach implicitly assumes that differences across groups in fertility rates are of negligible importance in computing the numbers of patients in need of unrelated donor transplants. An alternative approach assumes that minorities and Caucasians have equal access to HLA-identical sibling transplants. Based on this assumption, this approach derives the needs of minorities for unrelated donor transplants directly from their observed numbers of HLA-identical sibling transplants. In doing so, it allows for the possibility that each group has its own disease incidence rates and that the differences among groups in their relative levels of sibling donations reflect these rates, not differences in access. (See table 5.) This approach, while utilizing somewhat different assumptions from the method above, produces a virtually identical estimate of the underutilization of the Registry (10 percent versus 9 percent).
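The first method's arithmetic can be reproduced in a few lines; the sketch below recomputes the report's own figures for Caucasian patients, using only the values stated above.

```python
# Reproducing the method-1 arithmetic above; no new data or assumptions
# are introduced.

fertility = 1.7925          # average children per Caucasian woman, 1989-95
n = fertility - 1           # siblings available as potential donors
p_no_match = 0.75 ** n      # P = (0.75)^n: chance that no sibling matches

# For every HLA-identical sibling transplant, P/(1 - P) patients need an
# unrelated donor transplant.
ratio = p_no_match / (1 - p_no_match)

sibling_transplants = 7_920  # Caucasian sibling transplants, 1997-2000
in_need = ratio * sibling_transplants

print(f"P       = {p_no_match:.6f}")   # 0.796134
print(f"ratio   = {ratio:.4f}")        # 3.9052 (the report shows 3.90518)
print(f"in need = {in_need:,.0f}")     # 30,929
```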
The second method used by NMDP to assess Registry utilization is based simply on the annual number of patients conducting preliminary searches. To use this method, one must assume that this number directly represents those in need of unrelated donor transplants. One cannot use this number to assess the extent to which those in need search the Registry, since it is itself the number of patients searching. However, one can assess the extent to which those in need obtain NMDP-facilitated transplants by considering the annual percentage of patient searches that result in NMDP-facilitated transplants. This method yields an estimate that 27 percent of the patients searching obtain NMDP-facilitated transplants. (See table 5.) Although this approach has been used by NMDP as a way of assessing utilization, officials at NMDP observe that its validity is limited by the freedom with which patients can choose whether to search. These officials point out that preliminary searches are performed for some patients who are not good candidates for transplant and that other patients who should submit preliminary searches probably do not. Because of the lack of correspondence between the number of patients in need and the number performing preliminary searches, this estimate is not likely to be as accurate as the other two. The third method used by NMDP is based on an estimate of the annual number of U.S. patients newly diagnosed from 1997 through 2000 with selected diseases for which unrelated stem cell transplants might be beneficial. The estimated number of potential recipients for each disease is obtained from disease incidence estimates, with adjustments for the likelihood that (1) the patient is young enough to benefit from transplantation, (2) disease severity is not so great as to make transplantation futile, and (3) an HLA-identical sibling donor is available, thereby making an unrelated donor transplant unnecessary. The ratio of the annual number of NMDP-facilitated transplants for U.S. patients diagnosed with these selected diseases during this period to the estimated number of new U.S. patients with the diseases is used to assess utilization. (See table 6.) The ratio, for all patients with the selected diseases, corresponds to an estimated percentage of candidates obtaining transplants—10 percent—that is very close to the estimate obtained by the first method. The validity of this third method is constrained by the limited number of diseases for which data are available.
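To make the incidence-based method concrete, the sketch below applies the three adjustments described above to a raw incidence figure. All of the numeric inputs are hypothetical placeholders chosen only so the example runs; the report does not give the actual adjustment factors NMDP uses.

    # A sketch of the incidence-based method. Each fraction reduces raw
    # incidence by one of the likelihoods described in the text. All
    # numbers here are hypothetical placeholders, not NMDP's figures.
    def transplant_candidates(new_diagnoses, frac_young_enough,
                              frac_not_too_severe, frac_no_sibling_donor):
        return (new_diagnoses * frac_young_enough
                * frac_not_too_severe * frac_no_sibling_donor)

    # Example: 2,000 hypothetical new diagnoses, of which 60 percent are
    # young enough, 80 percent are not too severe, and 70 percent lack
    # an HLA-identical sibling donor.
    candidates = transplant_candidates(2000, 0.60, 0.80, 0.70)  # about 672

    # Utilization is the ratio of NMDP-facilitated transplants to candidates.
    print(f"{70 / candidates:.0%}")   # prints 10%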
NMDP requires that the organizations participating in its network comply with its standards and procedures. This appendix discusses how NMDP ensures network organizations' compliance with standards and procedures for obtaining the informed consent of donors and patients, donor selection criteria, confidentiality of records, collection and transportation of marrow, laboratory standards, and maintenance of donor files in the Registry. At each stage of the search process, NMDP requires donors to sign informed consent statements for procedures performed at the donor and transplant centers. A volunteer must sign an informed consent form before being listed as a donor on the Registry and also before the collection of blood for initial and follow-up testing, infectious disease testing, and participation in research. In addition, consent must be obtained before notifying the transplant center that a donor is willing to proceed to marrow donation and before the administration of anesthesia. Consent must also be obtained before collecting blood specimens for research and before any proposed procedure for which the donor has not previously given consent. According to NMDP officials, during each donor center site visit, NMDP staff members review about 35 randomly selected donor files, checking that each donor has signed all appropriate consent forms for the stages of the recruitment and search process the donor has completed. According to an NMDP official, since NMDP began performing site visits in 1998, missing or unsigned donor consent forms have occurred in only a few cases, indicating that a high level of compliance has been achieved. The exact number of missing consent forms is not readily available because cumulative data are not permanently stored. Transplant centers are responsible for obtaining informed consent from each transplant patient, for collecting research blood samples that are sent to the NMDP repository, and for submitting baseline and follow-up data to the Registry. Some of the centers have separate consent forms specifically for the research samples and clinical data, whereas others incorporate consent for the research samples and clinical data into the informed consent document the patient signs for the transplant. NMDP is currently collecting information on how transplant centers are handling the informed consent process for the research samples and clinical data submitted to NMDP. This information will be analyzed, and NMDP will evaluate whether changes in policies or procedures should be made to the consent process for obtaining NMDP data and research blood samples. In order to be considered for stem cell donation, donors must be aged 18 through 60 and in good health. Individuals with serious illness or those who are significantly overweight are disqualified. The donor must provide a medical history and acknowledge in writing that the history is accurate. Pertinent donor medical information is evaluated for acceptance or deferral according to NMDP medical eligibility standards and criteria set by the medical director at the local donor center. NMDP monitors whether registered donors have filled out the appropriate medical history questionnaires, but NMDP does not store cumulative data on the number of missing medical history questionnaires. During each donor center site visit, NMDP staff members check a random sample of health history questionnaires. However, NMDP is limited in how it monitors the donor selection process. Although NMDP tracks the number of donors who are unavailable for medical reasons, it cannot determine whether an unavailable donor's medical condition was preexisting, and therefore should have been caught in the health screening at the time the donor volunteered, or whether the donor's health changed during the period between registration and a request for testing prior to donation. NMDP requires that each participating donor center have a system for safeguarding donor confidentiality. The Registry identifies donors by code number only. Donor centers maintain donor identity and location information and limit access to this information by using locked file cabinets and locked rooms. NMDP also requires that each participating transplant center have a system of confidentiality in place to protect the privacy of patients. It provides that transplant patient identification should not appear in papers or publications and that the patient's name and location should not be disclosed to the donor(s). Organizations responsible for marrow collection and transport must meet certain participation criteria in order to be affiliated with NMDP.
Among other things, participating cord blood banks must be accredited and licensed or registered by the Food and Drug Administration for collection of autologous blood. Marrow collection centers must provide emergency and intensive care services and must be accredited by the Joint Commission on Accreditation of Healthcare Organizations. In addition, each collection center must have a licensed medical director, an experienced marrow collection team that regularly collects bone marrow, and a designated site for management of collection activities. NMDP has established standards to ensure the proper collection and transportation of marrow. These require that bone marrow collection centers have experienced personnel to collect marrow and adequate resources to support collection and management activities. In addition, NMDP requires that collection centers maintain written standard operating procedures and policies for collecting, testing, labeling, and transporting marrow. Laboratories responsible for HLA tissue typing must meet certain criteria in order to be affiliated with NMDP. Participating HLA typing laboratories must be accredited by the American Society for Histocompatibility and Immunogenetics (ASHI) or the European Foundation for Immunogenetics for the techniques required by NMDP. Laboratories must also comply with all state and federal regulations, including the Clinical Laboratory Improvement Amendments of 1988 (or their non-U.S. equivalent), for infectious disease testing, blood typing, red cell antibody screening, and other tests required by NMDP. As part of NMDP's quality control program, participating laboratories must type blind samples provided by NMDP. The laboratories must maintain monthly error rates of no more than 1.5 percent. If a laboratory fails to meet quality control and quality assurance standards established by ASHI or NMDP, NMDP may require that laboratory to submit a corrective action plan. After the period allowed for corrective action, the laboratory's contract with NMDP may be terminated if it still does not meet the standards. From February 2000 through April 2002, NMDP suspended five laboratories responsible for HLA tissue typing. The suspensions ranged from 1 to 9 weeks and were related to electronic communication problems, overdue samples, and poor turnaround time. NMDP's central database is updated when new donors are recruited and when information on existing donors changes or donors are deleted from the Registry. Information about newly recruited donors includes donor identification numbers, demographic data, and the donors' HLA types. According to NMDP procedures, domestic donor centers submit data on donors daily to NMDP's central database. The following staff members made important contributions to this work: Donna Bulvin, Charles Davenport, Donald Keller, Kelly Klemstine, Behn Miller, and Roseanne Price.
More than 30,000 people are diagnosed annually with leukemia or other blood, metabolic, or immune system disorders, and many of them may die without a transplant of stem cells from bone marrow or another source. When a patient needs a transplant of donated stem cells and no genetically compatible related donor is available, the National Bone Marrow Donor Registry may help the patient search for compatible stem cells from unrelated donors. The National Bone Marrow Registry Reauthorization Act of 1998 required, among other things, that the Registry carry out a donor recruitment program giving priority to minority and underrepresented donor populations, ensure the efficiency of its operations, and verify compliance with standards by organizations that participate in the Registry. From 1998, when the National Bone Marrow Registry Reauthorization Act was enacted, through 2001, the number of stem cell donors on the Registry increased for all racial and ethnic groups. Although the exact number of patients in need of transplants is not known, estimates suggest that about one-third of them use the Registry to search for donors. The organizations that are involved in transplantation and participate in the National Marrow Donor Program (NMDP) network generally adhere to NMDP's standards and procedures. In 2001, NMDP required 24 centers to take corrective actions because they did not meet its standards.
The growth in state prekindergarten programs has occurred for various reasons, but three frequently cited reasons are (1) evidence of the importance of early childhood to later development, (2) the high rate of labor force participation by mothers of young children, and (3) increased concern over school readiness and subsequent achievement. Much has been discovered showing that children can learn more, and at an earlier age, than previously believed. The early childhood years are commonly portrayed as formative. Between the first day of life and the first day of kindergarten, development proceeds at a pace exceeding that of any subsequent stage of life. Children from birth to age five engage in making sense of the world on many levels: language, human interactions, counting and quantification, spatial reasoning, physical causality, problem solving, and categorization. Since the 1960s, the percentage of women in the labor force has increased dramatically. In 1960, about 36 percent of women participated in the labor force, and by 2000 this figure had increased to 58 percent. Moreover, in 2003, about 69 percent of women with children aged three to five (but none younger) were in the labor force. This high rate of labor force participation has resulted in more children being enrolled in preschool programs of varying quality and has placed pressure on schools to provide before- and after-school programs. To improve educational achievement for all children and reduce failure in lower grades, many states and school districts are placing a greater emphasis on the school readiness of younger children. For nearly a quarter century, many states have developed or expanded their investment in prekindergarten programs to increase the likelihood of children's success in school. Prior to 1970, only 7 states funded preschool programs; by 1988, 28 states had programs, and total spending was $190 million. Such programs generally targeted economically and educationally disadvantaged children. While most state-sponsored prekindergarten programs continue to serve such children, a few states are in the process of expanding their programs to include all four-year-olds, regardless of family income. According to the National Institute for Early Education Research (NIEER), 40 states (and Washington, D.C.) had some form of state-sponsored prekindergarten program in the 2001-02 school year and enrolled over 700,000 children, mostly four-year-olds. While states spent more than $2.4 billion for prekindergarten during the 2001-02 school year, 10 states accounted for over 80 percent of this amount. Generally, prekindergarten programs aimed to serve four-year-olds, but according to NIEER estimates, most states served less than one-fifth of all their four-year-olds (see fig. 1). NIEER estimated that about 80 percent of children served by state prekindergarten programs were four-year-olds. During the 2001-02 school year, only two states (Georgia and Oklahoma) enrolled more than 50 percent of their four-year-olds in a state-sponsored prekindergarten program. Ten states (Alaska, Idaho, Indiana, Mississippi, Montana, New Hampshire, North Dakota, South Dakota, Utah, and Wyoming) had not initiated prekindergarten programs. The NIEER study also reported on characteristics describing the quality of states' prekindergarten programs.
Table 1 provides information on NIEER's findings related to certain program characteristics, benchmarks, and the number of state programs meeting the benchmarks that NIEER associated with quality prekindergarten programs. Among the 10 largest state prekindergarten programs, most met the benchmarks for class size, family support services, staff-child ratio, and teacher qualifications, and they were equally divided with respect to comprehensive curriculum standards. State-sponsored prekindergarten programs are expanding alongside existing programs for young children, including Head Start, Title I, and private child care programs. Head Start is a targeted program that mostly serves children from low-income families. Administered at the federal level by the Department of Health and Human Services (HHS), Head Start is implemented in local communities through grantees. These grantees include community action agencies, school systems, for-profit and nonprofit organizations, other government agencies, and tribal governments or associations. HHS reported that Head Start served just over 900,000 children nationwide during the 2003 fiscal year, most of them aged three and four; Head Start was funded at about $6.7 billion, or about $7,366 per child. In addition to administering Head Start, the federal government provides some limited support for early education programs through Title I. Administered by the Department of Education (Education), Title I is the single largest federal investment in elementary and secondary education. Its primary purpose is to help local education agencies and schools improve the teaching and learning of children who are failing, or are most at risk of failing, to meet challenging academic standards. In support of that goal, Education reported that Title I was funded at about $11.7 billion during the 2003 fiscal year. Nearly 15 million students were supported by Title I funds, and of these, about 2 percent (an estimated 313,000) were enrolled in prekindergarten programs during the 1999-2000 school year. The Child Care and Development Fund (CCDF) is the principal federal program that supports child care for low-income families. CCDF is administered by HHS, and each state receives an annual allocation that is used to subsidize the child care expenses of low-income families with children generally under age 13. CCDF subsidies can be used to obtain child care from various types of providers, including child care centers and family homes. In fiscal year 2002, CCDF was appropriated nearly $5 billion, and HHS reported that about 1.8 million children received subsidies in an average month. As a condition of receiving CCDF funds, states must conduct biennial surveys of child care providers, which states consider when establishing reimbursement rates for providers serving subsidized children. In addition to information regarding the fees charged by providers for child care services, such surveys may provide states with information about the type of child care providers offer, the qualifications of their staff, the age groups of the children they serve, and where they are located. The four states we visited varied in the design features and funding of their prekindergarten programs. The programs shared some features, such as voluntary enrollment of children at no direct cost to their parents, but differed in others. In addition, all five state programs (New Jersey operated two separate programs) permitted collaboration with community-based providers.
States varied in features such as the teacher requirements for their prekindergarten programs. States and school districts also differed in the degree and type of collaborations they established with community-based agencies. Finally, while states relied primarily on state resources, they reported some differences in funding mechanisms and per child funding levels. In the four states visited—Georgia, New Jersey, New York, and Oklahoma—we found some similarities in prekindergarten programs. Over the last few years, all four states had expanded their state-sponsored prekindergarten programs and, as reported by NIEER in February 2004, were among only nine states and Washington, D.C., that provided prekindergarten services to more than 20 percent of their four-year-olds. All four states' prekindergarten programs were provided at no direct cost to parents—regardless of family income—and were offered on a voluntary basis; children's enrollment was not mandatory. In addition, each program emphasized preparation for school and incorporated the delivery of prekindergarten services by community-based organizations as well as schools. None of the states required that all providers offer transportation services, although some providers did, and one state offered reimbursement for some children when this occurred. Figure 2 shows the estimated number of age-eligible children in each state and the number of age-eligible children participating in prekindergarten and Head Start programs in the four states we visited. The states we visited differed in the extent of geographic coverage and participation in prekindergarten programs. Three of the four states—Georgia, New York, and Oklahoma—aimed to provide prekindergarten programs to all four-year-olds in the state whose parents wanted them to attend. While none of these states provided prekindergarten to all four-year-olds, Oklahoma and Georgia had the most widespread programs. During the 2003-04 school year, Oklahoma provided prekindergarten in 509 of its 541 school districts to about 63 percent of its four-year-olds; Georgia provided prekindergarten in all of its 181 school districts and to about 55 percent of its four-year-olds. New York initially implemented much of its universal prekindergarten program in school districts located in the five largest cities in the state—Buffalo, New York City, Rochester, Syracuse, and Yonkers. During the 2003-04 school year, about 80 percent of the participating children attended prekindergarten in one of these five cities. Overall, New York's prekindergarten program was offered in 190 of its 680 school districts. In New Jersey's Abbott program, the state was under court order to provide prekindergarten to all three- and four-year-olds who resided in the state's 30 highest-poverty school districts. In addition, 102 non-Abbott early childhood program aid (ECPA) school districts in high-poverty areas received funds for prekindergarten programs. Combined, these two programs provided prekindergarten in 132 of 539 (24 percent) school districts in New Jersey. Table 2 provides information on the extent of geographic coverage and the percentage of age-eligible children participating in prekindergarten programs in the four states we visited. The state-sponsored prekindergarten programs also differed in some of their key design features. For example, the length of the program day ranged from 2.5 to 6.5 hours among the four states.
Full-day prekindergarten was provided in Georgia and in New Jersey's Abbott prekindergarten program. The other three prekindergarten programs—New Jersey's non-Abbott ECPA, New York, and Oklahoma—allowed school districts to determine whether to offer full-day or half-day programs. The states also varied in their requirements for lead teachers, and two of the five state programs (New Jersey's Abbott and Oklahoma) required teachers to be certified in early childhood education. In New Jersey's non-Abbott ECPA program, prekindergarten teachers could also hold certification in elementary education. Beginning with the 2004-05 school year, New York's state program required that all prekindergarten teachers be certified, but certification could be in an area other than early childhood education. In Georgia, lead teachers were required to hold at least a technical diploma or degree, an associate's degree, or a Montessori diploma. However, most lead teachers had at least a four-year college degree. As of May 2004, the Georgia Department of Early Care and Learning reported that about 58 percent of its prekindergarten teachers were certified in early childhood or elementary education and 21 percent held four-year education-related or other degrees with some additional training in early childhood education or development. Combined, about 79 percent of the lead teachers in Georgia had at least a four-year education-related college degree. States and school districts established collaborations with community-based organizations differently and often relied on them extensively to provide prekindergarten services to children. For example, Georgia had a centralized program, and the state's Department of Early Care and Learning was directly responsible for establishing collaborations with community-based organizations such as child care centers and U.S. military bases. In contrast, in the other three states we visited, local school districts had responsibility for establishing collaborations. In New York, the state required that school districts use at least 10 percent of their universal prekindergarten grant funds to serve children in community-based organizations, but statewide over 60 percent of four-year-olds were participating in community-based prekindergarten programs during the 2002-03 school year. The extent of collaborations varied between the two prekindergarten programs in New Jersey during the 2003-04 school year. In the Abbott school districts, the state was ordered by the New Jersey Supreme Court to provide full-day prekindergarten for all three- and four-year-olds; over 70 percent of these children were served by community-based providers. In contrast, in New Jersey's non-Abbott ECPA school districts, only about 11 percent of the children received prekindergarten from community-based providers. While state officials in Oklahoma were supportive of collaborations, local school district officials determined the role of community-based providers in their prekindergarten programs. In Oklahoma, most children were enrolled in prekindergarten programs in public school buildings; the state did not know how many local school districts collaborated with community-based organizations or the number of children participating in them. These and other key differences in the design and implementation of state prekindergarten programs are identified in table 3.
All four states relied primarily on state resources but differed in other aspects of funding, such as amounts per child, funding methods, and the extent to which these methods and amounts provided for financially stable programs. During the 2002-03 school year, enrollments and state spending for prekindergarten services varied widely among the five state programs. Based on data we collected from the states, spending ranged from approximately $347 million for prekindergarten services in the 30 Abbott school districts in New Jersey to about $30 million for the 102 non-Abbott ECPA school districts in New Jersey. Table 4 provides information on the primary methods of program funding, the estimated number of participating children, and estimated state spending among the five programs during the 2002-03 school year. Per child expenditures for full-day and half-day prekindergarten varied across the four states we visited and were consistently less than the states' per pupil expenditures for kindergarten through grade 12. New Jersey's Abbott districts had the highest funding per child for full-day prekindergarten relative to kindergarten through grade 12 funding. In the remaining states, funding for full-day prekindergarten was much less than the level of funding per child in kindergarten through grade 12. See figure 3 for comparisons of per child expenditures for prekindergarten and kindergarten through grade 12 in the four states we visited. Apart from New Jersey's Abbott and Georgia's prekindergarten programs, the other state programs we examined were largely half-day. New Jersey's non-Abbott ECPA program, New York, and Oklahoma permitted local school districts to operate half-day prekindergarten. However, these states differed in how they funded their half- and full-day programs. School districts in New Jersey's non-Abbott ECPA program and New York received the same amount per child whether they operated half-day or full-day programs, and about 80 percent of the children attended half-day prekindergarten in each of the two programs. In Oklahoma, local school districts received about $1,743 per child for half-day programs (54 percent of the full-day rate), and over half of the four-year-olds participating in the state's prekindergarten program were enrolled in half-day programs during the 2002-03 school year. The four states varied in how they funded their state-sponsored prekindergarten programs, and officials in two states told us that the financial outlook of their programs was stable. According to two New Jersey state officials, because of the state supreme court decision and subsequent court order, New Jersey was committed to providing a quality prekindergarten program to all three- and four-year-olds who lived in the Abbott school districts. In addition, funding for both New Jersey's Abbott and non-Abbott ECPA prekindergarten programs was part of the school funding formula. Oklahoma supported prekindergarten through the funding formula, as it did other school grades, and state officials told us they believed that funding for the program was stable. However, funding for prekindergarten in the other two states may be more uncertain. For example, funding levels for New York's state-sponsored prekindergarten had increased somewhat over the past 3 years but were insufficient to allow the state to implement a universal prekindergarten program available to all four-year-olds by the 2001-02 school year as planned.
During the same period, New York financed its program from general revenue funds as a line item in the budget, and in 2003 the program was targeted for elimination because of state fiscal shortfalls. Although the program avoided elimination, limited increases in funding have restricted the state's ability to expand it over the past several years. Most eligible districts participated in the program; however, about two-thirds of the state's school districts were not eligible for the state-sponsored prekindergarten program during the 2002-03 school year. Georgia has historically relied on the state lottery to fund its prekindergarten program. When the lottery was initially created, its proceeds were set aside for three programs, including state-sponsored prekindergarten. Currently, lottery funds are used to support prekindergarten and a program to provide academic scholarships for eligible high school graduates. However, over time, a greater percentage of the lottery funds has been designated for the college scholarship program than for prekindergarten. Additionally, lotteries recently began in two neighboring states, and officials we interviewed were concerned that Georgia's lottery proceeds may level off. State officials told us that lottery funds may be insufficient to entirely support the prekindergarten program by 2007, and the state has begun to look at stopgap measures to protect lottery funding if needed in the future. The four states reported using some federal funds to support their prekindergarten programs, although these amounts were generally small relative to state funding levels. For example, two states—Georgia and Oklahoma—used some prekindergarten monies to meet their CCDF matching or maintenance-of-effort requirements. In fiscal year 2002, Georgia used about $2.4 million in lottery funds for CCDF state matching and maintenance-of-effort, which represented about 1 percent of the state funding for prekindergarten. These funds were used for extended-day services (before- and after-school care) for children eligible for Temporary Assistance for Needy Families (TANF) who participated in prekindergarten. Oklahoma used about $2.1 million of its prekindergarten funds to meet CCDF maintenance-of-effort requirements, which represented about 3 percent of the state funding for prekindergarten. In fiscal year 2002, New York transferred $61.3 million from the TANF program to the state prekindergarten program, but this was done for only one year. None of the other states used TANF funds to support the expansion of their prekindergarten programs. While state officials told us that Title I, Individuals with Disabilities Education Act, and Head Start program funds were also used at the local level to support prekindergarten, they did not know the exact amounts from these other federal sources. Some prekindergarten design features had implications for children's participation and for other early childhood programs in the four states we visited. For example, both local officials and providers told us that transportation and program hours may have affected access to prekindergarten programs for children of low-income and working families. State and local officials, along with community-based providers and Head Start grantees, told us that collaborations were beneficial to their programs and had allowed rapid expansion of state prekindergarten. However, some challenges remained, such as the effort needed to establish and maintain effective collaborations.
Finally, few empirical data were available to quantify the effect of expanding state prekindergarten programs on the availability and prices of child care, and the anecdotal evidence we collected was mixed. Program features, which varied across states and school districts, may have affected participation, particularly for children of low-income and working families. None of the four states required prekindergarten providers to transport all participating children. Officials in some school districts told us that the lack of transportation may have decreased the participation of children from low-income and working families, and 10 of the 12 school districts we visited did not provide transportation to and from prekindergarten for all participating children. Some school district officials cited insufficient funding as the primary reason for not providing transportation services. For example, in one Oklahoma school district, children did not necessarily attend prekindergarten classes at their neighborhood school; consequently, the district would have incurred additional costs to transport children to their designated school. One official in a rural school district we visited in New York told us that more children from low-income and working families would have participated in prekindergarten if transportation had been provided, but costs prohibited the district from offering such services. In contrast, officials in the urban school district we visited in New York did not view the lack of transportation as a barrier to participation, as the prekindergarten programs were generally available in proximity to children's homes or parents' jobs. In Georgia, the Department of Early Care and Learning offered additional funding to providers who opted to transport eligible children to and from the prekindergarten program. In May 2004, the Georgia Department of Early Care and Learning paid for the transportation of 13,152 children. In New Jersey, the Abbott school districts were required to provide transportation when needed. In Oklahoma, where the majority of the participating children attended half-day prekindergarten programs, school district officials told us that the length of the school day affected participation. In all three school districts we visited, local officials told us that shortened program hours may have hindered the participation of children of low-income and working families. Officials from one Oklahoma school district told us that the combination of a half-day program and the lack of transportation to and from the prekindergarten program reduced the participation of children from low-income and working families. In that district, approximately 45 percent of the district's elementary school population was eligible for free and reduced-price lunch during the 2003-04 school year, but only 29 percent of the children enrolled in prekindergarten were eligible, indicating lower participation of children from low-income families. Similarly, in another Oklahoma school district, while about 84 percent of the district's student population was eligible for free and reduced-price lunch, only 60 percent of the children who participated in prekindergarten were eligible.
However, officials in two Oklahoma school districts told us that certain factors discouraged them from offering full-day programs; for example, with the same resources (classrooms and teachers), they were able to serve twice as many children in half-day programs as in full-day programs. Additionally, one of these officials told us that it would be difficult to implement full-day prekindergarten while the school district offered only a half-day kindergarten program. Finally, some school district officials told us that the location of the program could also affect the participation of children of working families. In particular, half-day programs without transportation could be more appealing to low-income and working families if they were offered in a child care center where the child could receive care for the duration of the workday. One urban school district we visited primarily offered half-day prekindergarten classes. However, the school district officials told us that the prekindergarten classes were sometimes offered in conjunction with other programs at the same location. A district official and child care providers in this school district told us that this arrangement met the needs of low-income and working families because children would receive a full day of care. Officials and child care providers in this school district told us that, in order to offer a full day of care, some child care providers supplemented the state-sponsored prekindergarten program with other monies, including Head Start funding, CCDF subsidies, and parent payments. Although state and local officials, as well as staff of community-based organizations, told us that collaborations were beneficial, some challenges remained. Officials in all four states we visited told us that such collaborations allowed them to serve more children, and three of the five programs served most prekindergarten children in community-based organizations such as Head Start and child care centers. In Georgia, New Jersey, and New York, officials reported that they made extensive use of collaborations because they wanted to implement the prekindergarten programs quickly and schools were often at capacity. In New Jersey, the state supreme court ordered the state to provide full-day kindergarten as well as full-day preschool for three- and four-year-olds in the 30 Abbott school districts. To implement the court order, the districts turned to community-based organizations to accommodate the influx of children. In New York, school districts were required only to use at least 10 percent of their universal prekindergarten grant funds to serve children in community-based organizations. Two school districts we visited in New York served the majority of participating children in community-based organizations (66 and 100 percent, respectively). However, in Oklahoma, where school districts had been experiencing declining enrollment, collaborations with community-based organizations were less prevalent, as districts were able to accommodate prekindergarten children in public schools. Officials from most of the school districts we visited that did use collaborations told us that the collaborations allowed them to take advantage of the existing early child care and education infrastructure, such as buildings, equipment, and assistant teachers, to increase program capacity and reduce program costs. Child care providers who partnered with state prekindergarten programs generally had favorable experiences with collaborations as well.
Specifically, providers mentioned increased enrollment, improved program quality, and increased access to school district resources as benefits of their partnerships. Some providers who collaborated with the state prekindergarten programs in Georgia, New Jersey, and New York told us that they had expanded their centers to serve more children of all ages, as some parents enrolled the younger siblings of their prekindergarten children in the same child care center. In addition, child care providers in all four states told us that the overall quality of care had improved as a result of collaborating with the state prekindergarten program. Some providers attributed the improved quality to various factors, including a greater focus on learning, the presence of credentialed teachers, and higher standards for the children. Finally, some child care providers who collaborated with state-sponsored prekindergarten programs told us that the partnership gave them increased access to school district resources, including professional development and materials such as new computers for the classrooms. Head Start grantees also told us that collaborations were beneficial to their programs. For example, in Georgia and New York, some Head Start grantees who provided prekindergarten services stated that they were better able to serve children by leveraging state prekindergarten and Head Start funds. These grantees told us they were able to expand program hours and enrich the learning environment while still providing Head Start's services, including establishing family partnerships. In Georgia, Head Start grantees served 3,654 children, or just over 5 percent of the children enrolled in the state prekindergarten program, during the 2004 fiscal year. In New York, Head Start provided about 345 classrooms of prekindergarten—representing about 9 percent of the total number of prekindergarten classrooms. Two Head Start grantees told us that because the state prekindergarten program served four-year-olds, they had begun serving more three-year-olds. As a result, children from low-income families could participate in 2 full years of preschool. While collaborations generally benefited early childhood programs, some challenges existed in establishing and maintaining partnerships between state-sponsored prekindergarten programs and community-based organizations. State and school district officials told us that establishing and maintaining collaborations took effort, required expertise, and involved increased monitoring and technical assistance, including financial guidance. For example, while one school district we visited had a staff person responsible for establishing and maintaining collaborations, another school district did not have such an expert and was unsure about how to develop partnerships or arrange the formal contracts needed to collaborate. Child care providers also mentioned certain challenges, such as insufficient or uncertain funding. For example, in New York and Georgia, where per child funding to community-based organizations had remained fairly level for at least the past 3 years, some child care providers told us that the per child funding was insufficient and that they had to use other funding sources to support the collaboration. In addition, we found that challenges existed in establishing collaborations with Head Start in all four states.
In both Oklahoma and New York, there was no formal mechanism, such as a statewide contract, to facilitate collaboration between school districts and Head Start grantees, and the two programs sometimes coexisted in the same community without the benefit of shared resources. Two of the three school districts we visited in Oklahoma did not collaborate with Head Start; the third served about 9 percent of its prekindergarten children through collaborations with Head Start grantees. Challenges in establishing collaborations with Head Start also remained in New Jersey. In 2003, New Jersey's Department of Education and Department of Human Services developed plans for including Head Start grantees as partners in providing prekindergarten in the Abbott districts over the following 3 years. However, many challenges remained to achieving this goal, including reaching agreement on appropriate per child funding levels, aligning curricula, and resolving other coordination issues. While some community-based providers were initially apprehensive about the potential impact of the widespread availability of states' prekindergarten programs on the market for child care, we found few data to support this concern. In the states we visited, neither state officials nor the child care provider community had data regarding the effects of expanded prekindergarten programs on the availability and prices of child care. The available data were limited to child care market rate surveys, which states conducted to obtain the information needed to set reimbursement rates for child care, and data collected every 5 years by the Census of Service Industries on the number of tax filings by child day care providers. Market rate surveys provided relatively recent data on prices but generally did not include sufficient data to isolate any effects of prekindergarten and were not always collected in a comparable or reliable form before and after prekindergarten expansion. In contrast, the state-level data currently available from the Census of Service Industries were collected in a consistent fashion over time and across states but were available only for the period through 1997, just 2 years after significant growth in the prekindergarten program in Georgia, the oldest expanded program of the five we studied. The data indicate that the number of small child care providers per 1,000 preschoolers in Georgia and in the nation as a whole followed similar growth trends from 1987 to 1997, the years for which data were available. Further, over the same period, the number of child care centers per 1,000 preschoolers and the number of employees paid by these centers increased both in Georgia and in the nation (where prekindergarten services were generally less available than in Georgia). However, this does not prove that the expansion of prekindergarten programs had no effect on the number of child care providers; for example, the number of providers might have increased even more had prekindergarten programs not been expanded. The anecdotal evidence regarding the effects of prekindergarten programs on the market for child care was mixed. For example, representatives of the child care community mentioned some positive effects on the market for child care, including the increased availability and accessibility of high-quality child care and early education for children from low-income families.
However, some child care providers in Georgia, New Jersey, and Oklahoma told us that state programs had adverse effects on the business of child care, although they were unable to provide us with supporting documentation. According to child care providers, the care of three- and four-year-olds was less costly than the care of infants and toddlers, and the revenue generated from caring for the older children subsidized the care of the younger children and made up a significant portion of providers' revenues. Child care providers also said that the enrollment of four-year-olds in state prekindergarten programs could result in child care centers raising prices to compensate for the loss of such revenues or even going out of business. In addition, some child care providers in Georgia told us that because state program funding did not cover the costs of operating prekindergarten, some centers had raised the rates for other services, such as extended day care. However, any potential effects of prekindergarten on the price and availability of child care may have been mitigated by certain design aspects of the programs in the states we visited. For example, while the majority of prekindergarten children in Oklahoma were served in public school settings, the potential effects on the child care market may have been mitigated because most children were in half-day prekindergarten programs and some needed child care before and after the program. In Oklahoma, the state Department of Human Services also provided a full-day reimbursement for CCDF-eligible children who used child care for more than 4 hours a day. As a result, the state's half-day program appeared to have minimal impact on child care providers. In New Jersey's Abbott districts and Georgia, which had full-day prekindergarten, the classes were often situated in community-based organizations. Consequently, many four-year-olds who attended these programs remained in community-based settings, and child care providers maintained their enrollment of four-year-olds. Furthermore, in the Abbott districts, the New Jersey Department of Human Services provided additional funds to cover 4 hours of child care beyond the 6-hour educational program. As a result, some child care centers were reimbursed for providing services for up to 10 hours per day. Some data have been collected on outcomes for participating children, but little is known about outcomes for their families. In all the school districts we visited, prekindergarten teachers routinely assessed children and provided parents with information about their child's progress during the school year. However, the states did not collect and analyze these assessment data. We found two studies that provided information about the educational effects of state prekindergarten programs on children in Oklahoma and Georgia. One study focused on the Tulsa School District and found that children who participated in the Tulsa prekindergarten program had significantly higher scores on several school readiness measures than children who did not participate in the program. A second study analyzed statewide data regarding Georgia's prekindergarten program and reported that children who participated in one of three programs studied (Georgia's prekindergarten program, Head Start, and private preschools) generally made significant gains in developmental skills during the prekindergarten year. None of the four states we visited reported collecting information regarding the impact of their programs on families.
State officials told us that prekindergarten programs increased choices for families, but none reported knowing whether the prekindergarten program had any effect on parents' work efforts. In general, teachers assessed children's developmental progress in the course of teaching prekindergarten, using developmentally appropriate assessments. We found that the types of assessments varied across prekindergarten providers, and some providers used multiple types of assessments. For example, assessments included checklists that rated the child's progress on various developmental objectives, observational records, and portfolio assessments, which consisted of a collection of the child's work and projects showing the child's progress throughout the school year. In general, such assessments were used to inform the teacher and provide information to parents during the school year. None of the states required a particular assessment of children's outcomes. State officials acknowledged the importance of collecting and analyzing student outcome data. However, such analysis had not been systematically conducted on a statewide basis in any of the four states we visited. The outcome data that the teachers had were not necessarily in a form conducive to collection and analysis by the states. Officials offered various reasons for not collecting outcome data. In New York, officials told us that there was no funding for large-scale data collection efforts and that they were awaiting the results of this year's fourth-grade state test to analyze the potential long-term effects of their prekindergarten program, since some of these fourth graders had participated in the prekindergarten program as four-year-olds. In New Jersey, state officials told us that they planned to perform a program evaluation, including children's outcomes, after the program had matured. In Oklahoma, state officials told us that they did not collect outcome data for all children in the state, but limited information regarding program outcomes was available in two school districts. For example, we found that one school district we visited in Oklahoma had collected and analyzed data on the outcomes of 22 children over a 1-year period, 11 of whom had participated in the district's prekindergarten program. Two recent studies provided some information on outcomes for children in two state prekindergarten programs. A study conducted by researchers at Georgetown University analyzed the short-term effects of prekindergarten on children in the Tulsa public schools and found positive effects of the Tulsa program. In particular, the study found that children who participated in the Tulsa prekindergarten program had higher scores on both cognitive knowledge and language measures, and on measures of motor skills, than did similar children who did not participate. Additionally, the Tulsa study found that impacts tended to be larger for African American and Hispanic children and that there was little impact for white children, although the authors discussed certain ceiling effects that may have made it difficult to detect impacts for white children as a whole. The study also found that children who qualified for the full free lunch program showed greater benefits than the population as a whole and that benefits were larger for children from low-income families who participated in full-day programs than for those participating in half-day programs.
The second study, sponsored in part by the Georgia Department of Early Care and Learning and conducted by Georgia State University, also found progress among children who participated in the Georgia prekindergarten program during the 2001-02 school year. The study compared the progress of children in three early education settings: private preschool, Georgia's state prekindergarten program, and Head Start. Children participating in the three programs performed at different levels upon entering the programs. The study reported that at the beginning of prekindergarten, children enrolled in Head Start demonstrated less mastery of certain skills than did children in the Georgia prekindergarten program, who, in turn, scored lower than children in private preschools. The study found that children in all three programs made significant gains over the course of 1 year, though in general the gains made by the prekindergarten children were not significantly different from the gains made by the other two groups of children. By the end of prekindergarten, or at the beginning of the kindergarten year, the relative rankings of the children from the different programs had not changed. The researchers also matched children with similar backgrounds to compare the effectiveness of Head Start and Georgia's state prekindergarten program. When comparing the language, communication, problem-solving, and basic skill mastery scores for the matched samples of children, researchers noted one case—basic skill mastery—in which the gap between the scores of the state prekindergarten children and the Head Start children widened to a statistically significant level at the end of the prekindergarten year. Citing benefits of prekindergarten, many states have made an investment in the early education of young children, especially four-year-olds. Georgia, New Jersey, New York, and Oklahoma have taken steps to expand early educational opportunities for preschoolers. These four states offer different approaches for other states to weigh as they decide whether to expand the scope of their prekindergarten programs. Given that states have limited resources, an opportunity exists to engage community-based providers such as Head Start grantees and other early education and care providers in the coordinated delivery of additional prekindergarten services. Collaborations between school districts and community-based organizations facilitate the coordination of child care and early learning for preschoolers and can provide additional classroom capacity. At the same time, these partnerships can help allay fears among child care providers that prekindergarten programs would supplant the need for community-based services. It is also important to acknowledge the trade-offs of certain program features. For example, while programs with limited hours may accommodate a higher number of children within the same facilities and may be less likely to affect existing child care providers, they may create barriers to participation for children of working families and may show smaller effects on children's school readiness. Also, prekindergarten programs may benefit by collaborating with existing programs to maximize the efficient use of limited state and federal education resources. However, such arrangements may require states and school districts to invest resources to facilitate coordination.
Perhaps the biggest trade-off that states face is whether the benefits of an expanded prekindergarten program outweigh those of one that is more targeted. Targeted programs have the advantage of giving more intensive services to eligible children who may benefit most from prekindergarten, but such programs exclude some children who might also benefit. Additional information on outcomes for children in the most intensive programs, particularly relative to children who do not receive comparable services, may be helpful to other states considering varied types of prekindergarten services, as would data on the benefits of half-day programs relative to full-day programs. The Departments of Health and Human Services and Education were provided a draft of this report for review and comment. The Department of Health and Human Services commented that, given the discussions of "state option" proposals during Head Start reauthorization, our report is informative. Education's Executive Secretariat stated that the department appreciated the opportunity to review the draft but would not provide agency comments. Both agencies provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the Secretary of Health and Human Services, the Secretary of Education, relevant congressional committees, and other interested parties. Please contact me at (202) 512-7215 if you or your staff have any questions about this report. In addition, the report will be available at no charge on GAO's Web site at http://www.gao.gov. Other GAO contacts and staff acknowledgments are listed in appendix III. In conducting our review, we obtained and analyzed information from the federal Departments of Education and Health and Human Services, state agencies, and local education agencies (LEAs). We visited four states; in each state we interviewed staff from state agencies and LEAs and generally included one urban, one suburban, and one rural school district—for a total of 12 school districts. We also interviewed early childhood education and child care policy experts and reviewed selected current literature on state-sponsored prekindergarten programs. For our fieldwork, we considered states that (1) had expanded programs and aimed to serve all children whose families wanted them to attend, (2) served large numbers of children in their prekindergarten programs, and (3) had well-established programs. We attempted to include varied program models and gave some priority to states that had studied their efforts. To determine how states designed their prekindergarten programs, we interviewed state and local education officials and officials of 13 community-based organizations that were direct providers of state prekindergarten services. We also reviewed documents related to states' prekindergarten programs, including state laws, general program information, state data on program participation and costs, curriculum guides, content standards, and contracts governing collaborations with community-based organizations. We obtained information to describe the state-sponsored prekindergarten programs and reviewed the data for reasonableness. We assessed the reliability of specific information, including estimates of age-eligible children and program expenditures, by interviewing state officials in all four states about their data reliability assessment processes.
On the basis of this information, we concluded that the data were sufficiently reliable for the purposes of this report. In addition, we performed a detailed review of the methodology of a National Institute for Early Education Research report and found it to be sufficient for descriptive purposes. The data that were used for background purposes were not independently verified. To determine the potential implications of prekindergarten on other programs that serve four-year-olds, we interviewed state child care administrators, state and local Head Start association directors, coordinators, and program staff, as well as staff of local child care centers. We also met with national and state representatives of the National Child Care Association (an association of managers and owners of child care centers) and related organizations, as well as state and local staff of child care resource and referral offices. We also reviewed selected national and state data on child care availability and prices. Finally, we interviewed state officials regarding federal funds, including Title I of the Elementary and Secondary Education Act, Temporary Assistance for Needy Families, the Individuals with Disabilities Education Act, and the Child Care and Development Block Grant. We did not review the expenses of providers of state prekindergarten services to ascertain the extent, if any, to which federal funds or other revenues may have subsidized provision of prekindergarten. To determine what is known about the impacts of prekindergarten programs on children and families in the states visited, we interviewed state and local education officials and local policy experts. We also identified two studies on children’s outcomes that met our selection criteria: studies that (1) analyzed student achievement and (2) compared prekindergarten children with a control or comparison group of children who did not attend the state-sponsored prekindergarten programs. To collect information systematically, we developed a data collection instrument and examined each study to assess the adequacy of the samples and measures employed, the reasonableness and rigor of the statistical techniques used to analyze them, and the validity of the results and conclusions that were drawn from the analyses. A social scientist read and coded the documentation for each study. A second social scientist reviewed each completed data collection instrument and the relevant documentation to verify the accuracy of every coded item. We found these two studies to be sufficiently reliable and rigorous to include in our report. We did not identify any studies that assessed the impacts of these prekindergarten programs on working families, such as effects on workforce participation. We conducted our work between October 2003 and August 2004 in accordance with generally accepted government auditing standards. The following people also made important contributions to this report: Nagla’a El-Hodiri, Shana Wallace, Alison Martin, Jean McSween, Barbara Hills, Susan Bernstein, Amy Buck, and Daniel Schwimer. Head Start: Better Data and Processes Needed to Monitor Underenrollment. GAO-04-17. Washington, D.C.: December 4, 2003. Child Care: States Exercise Flexibility in Setting Reimbursement Rates and Providing Access for Low-Income Children. GAO-02-894. Washington, D.C.: September 18, 2002. Early Childhood Programs: The Use of Impact Evaluations to Assess Program Effects. GAO-01-542. Washington, D.C.: April 16, 2001.
Title I Preschool Education: More Children Served, but Gauging Effect on School Readiness Difficult. GAO-00-171. Washington, D.C.: September 20, 2000. Early Education and Care: Overlap Indicates Need to Assess Crosscutting Programs. GAO/HEHS-00-78. Washington, D.C.: April 28, 2000. Preschool Education: Federal Investment for Low-Income Children Significant but Effectiveness Unclear. GAO/T-HEHS-00-83. Washington, D.C.: April 11, 2000. Early Childhood Programs: Characteristics Affect the Availability of School Readiness Information. GAO/HEHS-00-38. Washington, D.C.: February 28, 2000. Education and Care: Early Childhood Programs and Services for Low-Income Families. GAO/HEHS-00-11. Washington, D.C.: November 15, 1999.
For nearly 40 years, the federal government has played a role in providing early childhood development programs for children of low-income families through Head Start and other programs. Since 1980, the number of states with preschool programs has also significantly increased. While most of these programs have targeted children at risk of school failure, interest has grown more recently in expanding these limited programs because of growing concern about children's readiness for school and subsequent achievement. This interest has also been fueled by new research on early brain development that suggests the importance of early education, and by the high rate of workforce participation among mothers and their need for early childhood services. In this context, questions have arisen about how the various programs are coordinated and what lessons have been learned from broad-based state preschool efforts. This work focused on four states that have expanded their preschool programs to serve more children. In these states, GAO addressed (1) how prekindergarten programs were designed and funded, (2) the potential implications of these program features for children's participation and other programs that serve four-year-olds, and (3) the outcome data that have been collected on participating children and families. To gather this information, GAO conducted site visits in four states--Georgia, New Jersey, New York, and Oklahoma. The expanded prekindergarten programs in Georgia, Oklahoma, New York, and New Jersey had some similarities in their design features. For instance, programs were offered at no direct cost to parents, regardless of family income, and each state incorporated some level of collaboration with community-based providers such as Head Start and large child care facilities. Some key differences in their design features also existed. For example, Georgia and Oklahoma had statewide programs providing prekindergarten services to over half of their four-year-olds, while New York's and New Jersey's programs were more geographically targeted. States and school districts also varied in offering full- or half-day prekindergarten programs. States also varied in teacher qualifications, the percentage of prekindergarten children served by community-based providers, funding methods, and the amount of funding per child. Some program features had potential implications for the participation of children and for early childhood programs. For example, none of the four states required providers to transport all children to and from prekindergarten, and many children were enrolled in half-day programs, which officials believed might have limited the participation of children from low-income and working families. Collaborations between programs and community-based organizations generally permitted rapid program expansion and were viewed as beneficial to early childhood programs. Finally, we found few data to determine the impact of state prekindergarten expansion on the availability or prices of child care. While some data were available on outcomes for children who participated in prekindergarten programs, less was known about their impacts on families. For example, a study in Oklahoma showed that children who participated made significant gains on several school readiness measures relative to a comparison group of unenrolled children. However, none of the four states had measured effects on families, such as parents' work effort.
In 2011, an estimated 6.2 million children were referred to child welfare agencies by sources including educators, law enforcement officials, and relatives because they were allegedly maltreated. After an initial screening process, agencies conducted abuse or neglect investigations and assessments on behalf of more than half of these children. Over 675,000 children were found to be the victims of abuse or neglect. Many of the children (both victims and non-victims) who were referred to child welfare agencies, as well as their caregivers and families, received some child welfare services, such as in-home services and counseling or other mental health services. Child welfare agencies also conduct activities referred to in this report as non-service-related. These non-service-related activities include investigating allegations of abuse or neglect (known as child protective investigations), providing case management for children at home or in foster care, training staff, and administering programs. Further, child welfare agencies make payments to caregivers of children in foster care (maintenance payments) and to adoptive parents of former foster children and other eligible children with special needs (adoption subsidies). Children referred to child welfare agencies, as well as their families, may need a variety of services. Families may need services to prevent child abuse or neglect, or to help stabilize the family if abuse or neglect has occurred so that the child can safely remain at home. If it is not in a child’s best interest to remain at home, the child may be placed in foster care. In these cases, services may be offered to help the family reunite. If reunification is not possible, services may be needed to encourage adoption and support adoptive families. Some common types of child welfare services are listed in table 1, below. Child welfare agencies secure services in a variety of ways. Child welfare agency staff may provide some services directly in addition to carrying out typical case management duties. Child welfare agencies may also rely on contractors, also called purchased service providers. Another way child welfare agencies secure services is by relying on partner agencies, such as behavioral health agencies and public housing authorities. These agencies serve families in the child welfare system in addition to clients who are not in the child welfare system. Child welfare agencies also refer individuals for medical services. Medical services may be supported in a variety of ways, including through Medicaid or private health insurance. Figure 1, below, is an example of how a child welfare agency might meet—using various providers and funding sources—a hypothetical family’s diverse service needs. States are chiefly responsible for funding and administering child welfare programs. Most states administer their child welfare programs centrally. However, in some states, local agencies administer their own child welfare programs, with supervision from the state. To varying degrees, these agencies use a combination of state, local, and federal funds to support their programs. According to a survey of states funded by the Annie E. Casey Foundation and Casey Family Programs, in state fiscal year 2010, 46 percent of all child welfare expenditures were from federal sources, while 43 percent and 11 percent were from state and local funds, respectively.
Among federal funds used for child welfare purposes, states use a combination of funding designated solely for child welfare purposes and other sources of funding with broader aims. Title IV-B is the primary source of federal child welfare funding available for child welfare services, representing about 9 percent of dedicated federal child welfare appropriations ($730 million of $8 billion) in fiscal year 2012. In addition to child welfare services, Title IV-B funding may also be used for a variety of other activities, such as child protective investigations and case management. Child welfare agencies may spend Title IV-B funds on behalf of any child or family. They receive these funds primarily through two formula grant programs: the Stephanie Tubbs Jones Child Welfare Services program (CWS) under Subpart I of Title IV-B, and the Promoting Safe and Stable Families child and family services program (PSSF) under Subpart II. About $281 million in CWS funds and $328 million in PSSF funds were provided to states, territories, and tribes in fiscal year 2012. The purposes of Title IV-B’s two main funding streams are similar, as seen in table 2, below, although CWS funds may be used for a broader array of activities. States may spend CWS funds on any service or activity that meets the program’s broad goals, which include protecting and promoting the welfare of all children. Ninety percent of PSSF funds must be spent within four required categories: family support, family preservation, time-limited family reunification, and adoption promotion and support. Funds authorized under Title IV-E of the Social Security Act make up the large majority of federal funding dedicated to child welfare, with funds chiefly available for specific foster care and adoption expenses, but not for services. Congress appropriated $7.1 billion under Title IV-E in fiscal year 2012 (89 percent of federal child welfare appropriations), in general to partially reimburse states for expenditures on behalf of eligible children and youth who are in foster care, have left care for adoption or guardianship, or are aging out of care without adoptive homes. Title IV-E funds may be used to reimburse states for a portion of room and board (maintenance) expenses for eligible children in foster care, and for the costs of subsidies to parents who adopt eligible children with special needs (adoption assistance). States participating in the Guardianship Assistance Program may also receive Title IV-E reimbursement for a portion of assistance payments provided to relatives who become guardians (known as kinship guardians) of eligible children in foster care. States may also use Title IV-E funds to support case planning for eligible children in foster care, and for administration and training costs associated with eligible foster children and children adopted out of foster care. Additionally, states may use Title IV-E funds available through the Chafee Foster Care Independence Program and Education and Training Vouchers to support youth who are transitioning out of foster care without a permanent home, youth who have been adopted out of foster care after age 16, and youth who have entered into kinship guardianships after age 16. The funds provided under Title IV-E serve as an open-ended entitlement to support the costs of caring for eligible children in foster care. However, there is no similar entitlement to preventive services for children at risk of entering into foster care.
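As a quick arithmetic check, the percentage shares cited above follow from the rounded dollar amounts reported in this section. The short Python sketch below is illustrative only: it uses the rounded figures from the text, so the computed shares are approximate, and the CWS and PSSF amounts cited above do not sum to the full Title IV-B total, which includes other Title IV-B amounts not itemized in the text.

# Rough consistency check of the fiscal year 2012 federal child welfare
# appropriation shares cited in this section. Amounts are in millions of
# dollars, rounded as reported, so the computed shares are approximate.
appropriations = {
    "Title IV-B": 730,               # includes about $281M CWS and $328M PSSF
    "Title IV-E": 7_100,             # foster care, adoption, and guardianship assistance
    "CAPTA and other programs": 189,
}

total = sum(appropriations.values())  # roughly $8.0 billion, as reported
for program, amount in appropriations.items():
    print(f"{program}: ${amount:,}M ({amount / total:.0%} of total)")
# Prints shares of about 9 percent, 89 percent, and 2 percent, matching the text.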
Experts and policymakers have expressed concerns that the federal funding structure for child welfare encourages reliance on foster care and does not grant states flexibility to support services designed to reduce the need for foster care. However, Congress authorized HHS to waive certain Title IV-E funding restrictions so that states with approved demonstration projects may spend those funds more flexibly. In order to be granted a waiver, states must demonstrate that their projects are cost-neutral to the federal government, among other requirements. States must also conduct an evaluation (carried out by an independent contractor) of project success in improving child and family outcomes. HHS’s authority to issue these waivers lapsed in 2006 but was renewed by Congress in 2011. Congress also appropriated $189 million in fiscal year 2012 (2 percent of federal child welfare appropriations) under the Child Abuse Prevention and Treatment Act (CAPTA) and a variety of other programs and initiatives, much of which was not directed explicitly to child welfare agencies and could be available to partner agencies and community-based organizations as well. These programs and initiatives included competitive grants for purposes including eliminating barriers to adoption and providing services to abandoned children. Officials from the four states we studied reported spending Title IV-B CWS funds in state fiscal year 2011 to support a variety of services and other activities, and they told us they largely spent PSSF funds for services in the program’s four required expenditure categories. With respect to CWS, Virginia used these funds for case management costs for children in foster care who were not eligible for Title IV-E funding. Florida allocated over two-thirds of CWS funds for case management costs for children living at home, out of the home, or with adoptive families. Florida spent almost one-third of CWS funds on children’s legal services, and limited funds on administration and training. Minnesota officials reported spending CWS funds on licensing staff, other state-level expenses, quality assurance, and program administration. New Mexico largely spent CWS funds on foster care maintenance payments, which is permitted in limited circumstances. States report annually to ACF on how they plan to spend Title IV-B funds within specific categories. For fiscal year 2012, states nationwide planned to spend 32 percent of CWS funds on child protective investigations and related activities. Other common planned expenditure categories were family preservation services (18 percent), family support services (13 percent), time-limited family reunification services (11 percent), and foster care maintenance payments (10 percent). States reported spending 93 percent of PSSF funds in the program’s four required categories for fiscal year 2009. States are not required to report actual expenditures for the CWS program. Virginia localities spent PSSF funds on a range of family support services, including material supports, such as emergency rent assistance. For instance, one Virginia locality reported spending PSSF family support funds for a home visiting program designed to reduce the risk of abuse and neglect by first-time mothers, and a parenting academy for individuals ordered by the court to attend parenting classes and others found to have neglected or abused their children. Minnesota distributed some PSSF funds to localities through competitive matching grants targeted at two service areas, and additional funds to all localities for differential response initiatives.
The first of these areas focused on family group decision-making practices designed to increase family involvement in decisions about their children’s care needs. The second of these areas focused on services to “screened out” families, or families who would not otherwise qualify for ongoing case management or services due to relatively low abuse or neglect risk levels. Some funds were also distributed to localities to support their differential response practices. Minnesota officials said the state began encouraging localities to implement differential response in the early 2000s, and PSSF funds played an integral role in these efforts. State officials said that because Minnesota localities administer and largely fund their own child welfare programs, they had to find creative ways to develop incentives for localities to adopt a differential response model. They decided to leverage PSSF funds along with funding from a private donor to initiate a 4-year pilot project that established differential response in 20 counties. New Mexico officials told us their next round of family support contracts would be targeted at services to birth families, but services would also be available to foster families. Officials said the state developed this strategy to improve its performance. The state’s family preservation contracts covered up to 4 months of intensive in-home services designed to prevent the need to remove children to foster care in families with high levels of safety and risk concerns in eight counties. In addition, time-limited family reunification contracts covered intensive services designed to enable families in 11 counties to reunite with children in foster care within 4 months of referral. New Mexico used PSSF adoption promotion and support funds statewide for activities including home studies, parent training, and a social networking site for adoptive parents. Nationally, most states supplement Title IV-B funds with other federal funding that is not dedicated to child welfare, according to expenditure data states reported to ACF. States use widely varying approaches and make different choices about how to spend the federal dollars they receive due to a variety of competing demands. As seen in figure 2 below, our selected states each used different combinations of federal funds not dedicated to child welfare to support services and other activities covered under Title IV-B in state fiscal year 2011. These funding sources were chiefly Temporary Assistance for Needy Families (TANF), the Social Services Block Grant (SSBG), and Medicaid. Officials in these states told us that they first used the most restrictive federal sources for activities that meet funding criteria and, after those costs were covered, they used more flexible sources to support services and other activities as needed. Most states across the country, including two of our selected states, chose to use TANF funding for child welfare services and other activities covered by Title IV-B. TANF is a federal block grant that supports four overarching goals, one of which is to provide assistance to needy families so that children can live in their homes or the homes of relatives. Because TANF funds can be spent on essentially any service for eligible families that aims to achieve one of the program’s four goals, it offers states flexible funding that can be used to support child welfare activities. According to national data reported by states to ACF, in the spring of 2011, 31 states spent TANF funds, including state maintenance of effort funds, for purposes covered by Title IV-B.
For fiscal year 2011, we estimate these expenditures to have been at least $1.5 billion. Moreover, nationally states reported spending these funds for a variety of purposes. For example, 16 states reported using TANF funds for in-home services, family preservation services, or both. Another 9 states reported using TANF for child protective investigations and related activities. Among the four states we studied, Virginia spent TANF funds on family support and family preservation programs. Florida used TANF for a number of different purposes including case management, child protective investigations, and a state-sponsored home visiting program. New Mexico and Minnesota, in contrast, did not use TANF for child welfare. New Mexico officials said that their state had a relatively high poverty rate and spent most of its TANF funds on cash assistance. As a result, New Mexico officials said they had few TANF funds available for other purposes—including child welfare. Most states, including all four of our selected states, also used SSBG funds for child welfare services and other activities. SSBG is a federal block grant under which states are provided funding to support a diverse set of policy goals, including preventing or remedying abuse and neglect, preventing or reducing inappropriate institutional care, and achieving or maintaining self-sufficiency. In addition to their annual SSBG allotments, states are permitted to transfer up to 10 percent of their TANF block grant to SSBG. According to ACF data, 44 states, including the District of Columbia, spent fiscal year 2010 SSBG funding (including TANF transfer funds) in three reporting categories covered by Title IV-B. (Fiscal year 2010 was the most recent year for which national SSBG expenditure data were available.) Specifically, 35 states reported spending $377 million for services to children in foster care and other related activities, which accounted for 13 percent of total SSBG expenditures. Covered activities included, but were not limited to, counseling, referral to services, case management, and recruiting foster parents. Thirty-nine states also reported using $290 million in SSBG funds (10 percent) for child protective investigations and related activities, such as emergency shelter, initiating legal action (if needed), case management, and referral to service providers. Twenty-two states reported spending $31 million on adoption services and other activities, such as counseling, training, and recruiting adoptive parents. For fiscal year 2012, Congress appropriated $1.7 billion in SSBG funds. All four of the states we selected to study used SSBG for services and other activities covered by Title IV-B. In state fiscal year 2011, New Mexico used SSBG for purposes including administrative costs associated with child protective investigations, foster care, and adoptions. In that same year, Florida spent SSBG funds on purposes including child protective investigations, child legal services, and the state’s hotline for reporting abuse and neglect. Nationwide, some child welfare agencies also claimed federal Medicaid reimbursement for services they provide to Medicaid beneficiaries. The amount of federal Medicaid reimbursement claimed by child welfare agencies is unknown. Under the Medicaid targeted case management benefit, child welfare agencies can be reimbursed for case management activities designed to assist targeted beneficiaries in gaining access to needed medical, social, educational, and other services.
One of our selected states, Minnesota, claimed $24 million in federal reimbursement for Medicaid targeted case management for children at risk of placement in foster care and their families in calendar year 2011. Another selected state, Virginia, reported claiming $1.9 million in federal Medicaid reimbursement for targeted case management activities related to children in foster care in state fiscal year 2011. Child welfare agencies may also obtain federal reimbursement for services they provide to Medicaid beneficiaries covered under home and community-based service waivers. Under these waivers, states may cover a wide range of services and other activities to allow targeted individuals, such as children with developmental disabilities or serious emotional disturbances who would otherwise require institutional care, to remain at home or live in a community setting. Among our selected states, Minnesota claimed $1.8 million in federal reimbursement for services to children with disabilities under a home and community-based services waiver in calendar year 2011. Child welfare agencies can also claim federal Medicaid reimbursement for administrative case management activities, including making Medicaid eligibility determinations. Two of our selected states—Florida and New Mexico—claimed federal Medicaid reimbursement for administrative costs associated with case management activities. For example, Florida claimed $1.3 million in federal reimbursement for activities that included applying for Medicaid benefits and arranging appointments. Child welfare agencies nationwide also accessed other federal funding sources dedicated to child welfare to support services and other activities. These other dedicated federal funding sources included CAPTA and ACF discretionary grants. CAPTA funds can be used for a wide variety of purposes. For example, among our four selected states, New Mexico used a $136,000 CAPTA state grant for purposes including training, investigations, and case management in state fiscal year 2011. Florida spent about $1.4 million in Community-Based Child Abuse Prevention (CBCAP) funds, which are authorized under Title II of CAPTA, to support its chapter of a child abuse prevention organization, parent leadership and support groups, a child abuse prevention month campaign, and fatherhood initiatives. Other government entities whose missions intersect with those of child welfare agencies may also use federal funds for purposes covered under Title IV-B for children and families they serve. These entities, such as behavioral health agencies, housing authorities, and the courts, typically serve a broader population than children and families affected by abuse or neglect. However, some serve children and families who are also in the child welfare system. These entities may access a variety of federal funds to benefit these children and families. For example: Behavioral health agencies that oversee home visiting programs may use Maternal, Infant, and Early Childhood Home Visiting Program funds to provide home visiting services to families at risk of abuse or neglect. They may also access Substance Abuse Prevention and Treatment Block Grant funds for substance abuse treatment for individuals in the child welfare system, including pregnant women and women with dependent children. Housing authorities that participate in the U.S. Department of Housing and Urban Development’s (HUD’s) Family Unification Program may provide housing vouchers to families at risk of losing their children to foster care or who face difficulty achieving family reunification due to inadequate housing.
Courts receive Court Improvement Program formula grants to improve the handling of child abuse and neglect cases. Although states are generally prohibited from funding services, such as parenting classes and substance abuse treatment, with Title IV-E funds, ACF has granted waivers permitting some states to do so. As of October 2012, 14 states had implemented or were approved to initiate Title IV-E waiver demonstration projects that allow them to use those funds for services covered by Title IV-B. These projects were designed to test new financing and service delivery approaches that may result in lower foster care costs and increased available funding for new or expanded services. States with waivers are required to ensure that their Title IV-E expenditures under the waiver do not exceed what they would have spent without a waiver. These states would be solely responsible for covering additional costs incurred if the number of children in foster care, or costs of caring for such children, exceeded state estimates. States with active and recently approved waivers have used various methods to determine that their projects were cost neutral. Under the first approach, states with flexible funding waivers agree to receive a capped (or fixed) amount of Title IV-E funding in exchange for flexibility to use those funds for an expanded array of services, similar to a block grant. Under the second approach, the amount of funding received for children participating in the waiver project is determined by the average amount of funding received for children in a control group who are not receiving waiver services, ensuring that funding for the waiver group is comparable to what it would have been without the waiver. (A simplified illustration of these two approaches follows this discussion.) The goals of each Title IV-E waiver project vary and include: (1) reducing the time children and youth spend in foster care and promoting successful transition to adulthood for older youth, (2) improving child and family outcomes, and (3) preventing child abuse and neglect, and the re-entry of children and youth into foster care. ACF encouraged states to develop projects that included evidence-based and evidence-informed practices to promote children’s social and emotional well-being and to collaborate with state Medicaid agencies when possible. Approved waiver projects reflect these priorities in a variety of ways (see figure 3). For instance, Illinois plans to provide specialized training to parents and other caregivers of very young children in Cook County who exhibit effects of trauma, using a control- and treatment-group design. Wisconsin plans to implement post-reunification support services, including evidence-based therapies designed to address trauma (trauma-informed care), for families reunified after foster care. Among the four states we studied, only Florida had an active Title IV-E demonstration waiver project. Florida’s waiver demonstration project was implemented in 2006 as part of a statewide reform effort that included transferring management of child welfare cases to community-based lead agencies that work with a network of purchased service providers after the state has concluded its initial child protective investigation. Florida’s demonstration waiver goals are to: (1) improve child and family outcomes, (2) expand the array of community-based services and increase the number of children eligible for services, and (3) reduce administrative costs related to service provision.
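To make the two cost-neutrality approaches described above concrete, the sketch below illustrates their basic logic in Python. It is a hypothetical illustration only: the function names, dollar figures, and group sizes are invented for this example, and ACF's actual cost-neutrality methodology is more detailed than what is shown here.

from statistics import mean

def capped_allocation(historical_annual_claims):
    # Flexible funding approach: the state accepts a fixed (capped) amount
    # of Title IV-E funding, here set to its average historical claims, in
    # exchange for flexibility in how the funds are spent.
    return mean(historical_annual_claims)

def control_group_allocation(control_per_child_claims, n_waiver_children):
    # Control-group approach: funding for the waiver group is set by the
    # average per-child amount claimed for a comparable control group of
    # children not receiving waiver services.
    return mean(control_per_child_claims) * n_waiver_children

# Capped approach: three prior years of claims, in millions of dollars.
print(capped_allocation([140.0, 150.0, 160.0]))                  # 150.0

# Control-group approach: 1,000 waiver children funded at the control
# group's average per-child claim of $9,500.
print(control_group_allocation([9_000, 10_000, 9_500], 1_000))   # 9500000

Under either approach, the state bears the risk: if caseloads or per-child costs exceed the capped or control-group-based amount, the state covers the difference, as noted above.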
Florida’s waiver project has been evaluated by an independent contractor (Vargo et al., IV-E Waiver Demonstration Evaluation Final Evaluation Report SFY 11-12, a Title IV-E waiver evaluation submitted to the Florida Department of Children and Families, March 15, 2012). As discussed earlier, Title IV-E funds are generally restricted to foster care placement expenditures. State officials we interviewed said that funding restrictions such as these seem misaligned with federal policy principles and fail to create incentives for states to invest in services designed to prevent foster care placement. In New Mexico, a state official said that increased Title IV-E flexibility would allow the state to expand investments in services that prevent foster care placement. At the same time, another state official said that Title IV-E funds were an important source of guaranteed support for children in foster care, and cautioned that New Mexico may have difficulty ensuring that adequate resources are devoted to those children if Title IV-E funds are used for different purposes. Some experts and policymakers have also suggested reforms to how child welfare services are funded and have put forth proposals that would change the way states can use Title IV-E funding. These proposals include instituting various mechanisms for allowing states to increase their focus on services that aim to keep families together while also preserving adequate funding for those children who must be placed in foster care. Data from a national survey conducted by ACF indicate that not all children and families in the child welfare system receive the services they need. The survey included interviews with a sample of over 5,000 children and caregivers with child protective investigations closed between February 2008 and April 2009. Many of these children and caregivers reported that they had not received services for which they had a demonstrated need in the 12 months prior to being interviewed. For instance, an estimated 91 percent of caregivers who needed substance abuse services had not received them (see table 3). Additionally, an estimated 58 percent of younger children and 48 percent of adolescents at risk for behavioral, emotional, or substance abuse problems had not received any behavioral health services during this same time period. ACF reviews of state child welfare systems also suggest that children and families may not receive the services they need. ACF’s most recent Child and Family Services Reviews, conducted from fiscal years 2007 to 2010, showed that 20 of 52 states did not have an appropriate range of services to adequately identify and address the needs of children and families. ACF defined an appropriate range of services as those that help create a safe home environment and enable children to remain at home when reasonable, and help find other permanent homes for foster and adopted children. ACF officials told us that, while the reviews do not include formal data on the availability of specific services, their reports on individual states indicate that the most commonly unavailable services included: behavioral health services, including child psychologists and psychiatrists; substance abuse treatment for adults and youth; housing; and domestic violence services. In a survey funded by Casey Family Programs, 25 of the 41 state child welfare agencies responding reported waiting lists for at least one service provided by child welfare agencies or their purchased service providers. (This survey did not ask states about the length of time a child or family remained on the waiting list before receiving services.)
These services included in-home services, home visiting services, and substance abuse assessment and treatment. The absence of a waiting list, however, does not necessarily indicate that services are available. A service provider may not maintain a waiting list even if there are families waiting to be served. Officials from our 13 selected localities echoed these concerns. In response to a GAO data collection instrument, most of these localities reported key service gaps in the areas of substance abuse assessment and treatment services; assistance with material needs, such as housing and transportation; and in-home services (see figure 4). Service gaps can negatively affect outcomes for children and their families. Specifically, according to officials in our 13 selected localities, previous GAO work, and some research, service gaps can complicate efforts to prevent placement in foster care, hinder chances of reunification after foster care, and harm child well-being. Officials in our selected localities reported that difficulty securing high-quality, timely treatment for families with parental substance abuse problems can decrease the likelihood of recovery and reunification. In 6 of 13 selected localities, officials reported waiting lists for substance abuse treatment services. Officials in one of these localities noted that clients often wait 2 to 3 months for these services. Further, officials in five localities said that available inpatient services were of poor quality or too short in duration to meet client needs. New Mexico officials told us their state’s behavioral health entity covered a maximum of 30 days of inpatient substance abuse treatment, which they said is insufficient for long-term addicts. Some research corroborates the views of local officials that lack of access to timely, intensive treatment may negatively affect a family’s chances of reunification. A 2007 study of nearly 2,000 women in Oregon who were substance abusers and had children in foster care found that mothers were more likely to be reunited with their children if they entered treatment quickly and spent more time in treatment. Similarly, in California, a study of more than a thousand mothers who participated in a drug treatment program in 2000 found that mothers who completed or spent at least 90 days in treatment were about twice as likely to reunify with their children as those who spent less time in treatment. GAO previously reported on family-centered residential drug treatment programs, which can last up to 24 months and may allow women to bring their children with them. These programs help women address issues underlying their substance abuse, build coping strategies, and enhance parenting skills, which can reduce chances that children will need to be removed to foster care. The Substance Abuse and Mental Health Services Administration (SAMHSA) evaluated performance data from residential treatment programs for mothers and found that 6 months after treatment ended, fewer children of participating women were living in foster care and most children who accompanied their mothers to treatment were still living with them. Officials in several of our selected localities also said it could be difficult for families experiencing substance abuse to achieve reunification within mandated deadlines. Delays in receiving treatment can make it difficult for treatment to be completed within these deadlines.
In addition, a previous GAO report found that mandated reunification deadlines can conflict with the amount of time required to successfully address the needs of these families. One ACF official told us that timely access to high-quality, evidence-based treatment is essential to achieving reunification within mandated timelines. Officials in one selected locality said they frequently terminate parental rights due to parents’ inability to establish sobriety within limited time frames. However, officials in another locality reported that judges are sympathetic to substance-abusing parents’ efforts to engage in services, and frequently extend their permanency deadlines. According to past GAO work and officials in selected localities, lack of affordable housing may also contribute to children’s removal into foster care or may prevent families from reunifying. In 2007, GAO surveyed 48 state child welfare directors about African American children in foster care (GAO, African American Children in Foster Care: Additional HHS Assistance Needed to Help States Reduce the Proportion in Care, GAO-07-816 (Washington, D.C.: July 11, 2007)). Officials from 25 states cited a lack of affordable housing options as one factor that contributed to disproportionately high rates of foster care placement among African American children in the child welfare system. This report found that affordable public housing is a critical support that can help low-income families stay together. Similarly, officials in 3 of our 13 selected localities told us that a parent’s inability to obtain housing could prevent family reunification even if all other reunification criteria had been met. However, officials in one locality said they work with families to find them appropriate housing and would not keep a child from his or her parents based solely on the family’s housing situation. GAO previously reported that failure to provide services to address the trauma of abuse or neglect may negatively affect children’s well-being in both the short and long term. GAO reported that children may experience traumatic stress as a result of maltreatment, which significantly increases their risk of mental health problems, difficulties with social relationships and behavior, physical illness, and poor school performance. Early detection and treatment of childhood mental health conditions can improve children’s symptoms and reduce the likelihood of negative future outcomes, such as dropping out of school or becoming involved in the juvenile justice system. ACF has also made the social and emotional well-being of children receiving child welfare services an agency priority, and is encouraging child welfare agencies to focus on improving behavioral and social-emotional outcomes for the children they serve. Officials from 8 of our 13 selected localities reported a shortage of substance abuse treatment providers. Some officials cited a shortage of treatment in general, while others discussed shortages of specific kinds of treatment. For instance, officials from six localities reported an inadequate number of inpatient treatment providers. Officials from two selected localities also reported particular difficulty finding providers that offered appropriate substance abuse treatment services for adolescents. In one Virginia locality with extremely high rates of substance abuse, officials said that in order to address this shortage, their local behavioral health agency had hired a counselor dedicated to treating youth with substance abuse problems.
However, this counselor served five counties and had difficulty keeping up with demand. Officials from nine selected localities as well as three state officials from our discussion group reported a shortage of mental health service providers. Officials from six localities and three state officials from our discussion group also noted shortages of certain types of specialists. For example, officials from multiple localities in three out of four selected states reported acute shortages of child psychiatrists. One state official who participated in our discussion group also reported particular difficulty finding mental health providers who offered evidence-based therapies specifically designed to address trauma in children. Officials in one Florida locality said that service providers in their community were interested in becoming trained in certain evidence-based practices, but found it too costly to do so. To address mental health provider shortages, Minnesota’s Department of Human Services contracted with the Mayo Clinic to provide phone-based psychiatric consultation services to primary care doctors across the state. Officials said the initiative would improve the quality of psychiatric care for children, including children in the child welfare system. Officials in several localities across three states and from one state in our discussion group reported a shortage of mental health, substance abuse, and/or other service providers who accept Medicaid. GAO has previously reported on this issue. In a recent survey of states, GAO found that 17 states reported challenges ensuring enough mental health and substance abuse providers for Medicaid beneficiaries. Additionally, GAO found in 2011 that more than three times as many primary care physicians reported difficulty referring children enrolled in Medicaid or the Children’s Health Insurance Program (CHIP) to specialists as compared with privately insured children. Finally, provider shortages were cited as particularly challenging in rural areas. Officials in localities across all four selected states and two state officials from our discussion group reported provider shortages in rural areas. Officials from some rural localities described difficulty attracting and retaining service providers. A local Florida official said that one of his agency’s behavioral health purchased service providers had been advertising a child psychiatrist position for 5 years without success. In several localities, officials said provider shortages often result in families traveling long distances to receive services in more urban areas. Inadequate health coverage among some children and families in the child welfare system also contributes to service gaps. In 6 of 13 localities, officials cited lack of health insurance as a factor contributing to difficulty securing medical services for families. Officials from selected localities reported that, due to lack of health insurance, services were in some cases more difficult to obtain for parents than for their children. In addition, undocumented immigrants are not eligible for Medicaid and may lack private health insurance as well. Officials in several localities described particular difficulty obtaining services for these families. There are, however, a variety of approaches local officials reported using to obtain services for families. In some cases, child welfare agencies were able to turn to behavioral health agencies.
In one locality, officials said a local non-profit sometimes funded mental health assessments for clients without insurance. In other cases, officials said their agencies paid for these services with their own funds. Additionally, fewer parents in the child welfare system may lack health insurance after January 1, 2014, when states may expand eligibility for Medicaid coverage to non-elderly, non-pregnant adults with incomes at or below 133 percent of the federal poverty level, as provided for under the Patient Protection and Affordable Care Act. Lack of transportation is also a widespread impediment to obtaining services, especially in rural areas. Officials in all selected agencies that served rural areas reported difficulty with transportation for rural clients, and discussion group participants from two additional states reported similar difficulties. A number of local officials said that providing services in the home could help mitigate transportation challenges, as well as allow providers to better assess and address challenges in the home environment. Some officials noted that in-home services are typically more expensive than office-based services. However, officials in one Florida locality reported that they had made in-home services a budgetary priority due to transportation challenges in their area. While state child welfare agencies receive reimbursement under Title IV-E for many costs related to children in foster care, funding for services designed to prevent the need to remove children from their homes and place them in foster care is more limited. State and local child welfare agencies may face difficult decisions when determining which of these prevention activities to prioritize and fund, particularly in light of the ongoing fiscal challenges states face. For instance, local officials in New Mexico described challenges in securing resources to provide services to children and families at risk of foster care placement. New Mexico state officials told us they funded contracts for services designed to avoid foster care placement or to reunite families after foster care entirely with Title IV-B PSSF family preservation and reunification funds, and did not allocate state or other federal funds to support these contracts. Because Title IV-B funds were limited, the state targeted services only to selected counties with the highest need. Officials in one New Mexico county said that most of their family preservation and reunification services were cut for fiscal year 2013, in part because they had been successful in reducing the number of children in foster care and were no longer considered a high-need county. Fiscal challenges have also affected child welfare partner agencies. For example, one ACF official we interviewed noted that most states have experienced budget cuts in social services, which affect both child welfare and substance abuse services. In addition, officials from SAMHSA told us that since 2008, states have had more difficulty maintaining state funding of behavioral health services. Many localities experienced gaps in services provided by partner agencies, in some cases due to the fiscal constraints of those agencies. For example: Officials in 7 of the 13 localities, as well as one state official from our discussion group, said that their local housing authorities had long waiting lists (in some cases up to 3 or 4 years) for Section 8 housing vouchers. As a result, families referred to the housing authority often did not receive assistance.
In a few localities, officials said that families with children in foster care could not obtain approval for public housing units until they had regained custody, which hindered efforts to reunite children with their families. One ACF official said that, in response to GAO’s inquiry, the agency initiated discussions with HUD about improving outreach to local housing authorities about this issue. Officials in two localities in different states, as well as one state official from our discussion group, noted that their state Medicaid programs required diagnoses of mental health disorders to cover services, even for very young children (ages 0 to 3 years). Officials stated that these requirements could make obtaining needed services difficult in some cases, and could result in inappropriate diagnoses in other cases. States may place appropriate limits on access to services based on medical necessity or utilization control procedures (42 C.F.R. § 440.230). However, in some cases, selected child welfare agencies coordinated with other service agencies to improve families’ access to services. For example: Three selected localities had coordinated with their local housing authority to apply for a grant through the federal Family Unification Program, which sets aside housing vouchers for families in the child welfare system. Two selected localities had Family Dependency Treatment Courts, which coordinate court, treatment, and child welfare services for child welfare cases in which parental substance abuse is a primary factor. One Virginia locality collaborated with partner agencies to use funding provided under the American Recovery and Reinvestment Act for homelessness prevention and rapid re-housing to help families at risk of eviction. As these funds were about to expire, the locality worked with community partners to identify other sources of funding to allow this homelessness prevention program to continue. There are also other opportunities on the federal, state, and local level for child welfare and partner agencies to coordinate to improve service delivery for children and families in the child welfare system. For instance: ACF awards regional partnership grants for projects designed to increase the well-being of, and improve the permanency outcomes for, children affected by substance abuse through interagency collaboration and program and service integration. In 2012, ACF awarded 17 new regional partnership grants and approved 2-year extensions for 8 of 53 grants awarded in 2007. In September 2012, ACF awarded five grants totaling $25 million for collaborative partnerships between child welfare agencies and housing/shelter organizations. Grants were awarded to projects focused on improving safety, family functioning, and child well-being in families at risk of homelessness and child maltreatment. Also in 2012, ACF awarded nine grants totaling almost $29 million over 5 years for projects to improve the social and emotional well-being of children and youth in the child welfare system. The purposes of these grants, which are in the form of cooperative agreements, include improving adoption outcomes through interagency collaboration and supporting child welfare agencies in assessing children’s mental and behavioral health needs.
The Commissioner of ACF’s Administration on Children, Youth and Families told us the agency is encouraging states to collaborate with state Medicaid agencies to solve issues affecting families in the child welfare system, including barriers to accessing Medicaid-funded mental health services for infants. As an example, he said ACF’s most recent Title IV-E waiver announcement encouraged state child welfare agencies to submit proposals in conjunction with state Medicaid agencies. According to the Commissioner, six out of nine approved waiver proposals explicitly indicate a partnership with the state Medicaid agency. In fiscal year 2012, SAMHSA awarded 16 grants totaling almost $16 million to implement systems of care (which involve collaboration across government and private agencies, providers, and families) for children and youth with serious emotional disturbances. According to agency officials, 14 grantees were coordinating with child welfare agencies to address service development, funding, and access to care for children and youth in the child welfare system and those at risk of abuse or neglect. Child welfare agencies, like other state agencies, operate in an environment of ongoing fiscal constraint. They must make difficult choices about how to allocate their limited resources to support services critical to ensuring children’s safety and well-being. Despite their use of Title IV-B funding in combination with other federal dollars to supplement their state and local funds, these agencies continue to struggle to meet the complex needs of children not in foster care and their families. Given current state and federal fiscal constraints, they will likely continue to struggle. The waivers HHS has granted to some states to use their Title IV-E funding more flexibly may provide useful information about the effects of shifting available resources from foster care costs to support services intended to reduce the need for foster care without increasing funding overall. We provided a draft of this report to the Secretary of Health and Human Services for review and comment. HHS indicated in its general comments, reproduced in appendix I, that it agreed with GAO’s finding that gaps exist in services to address the effects of child maltreatment, and provided additional information about the agency’s emphasis on trauma-informed care and its efforts to encourage child welfare agencies to respond more effectively to trauma. The agency also agreed with GAO’s concluding observations that ongoing fiscal constraints contribute to challenges in meeting the needs of children and families, and offered two steps child welfare agencies could take to more effectively use available resources: (1) identify currently funded services that do not yield desired results and shift resources toward evidence-based programs and practices; and (2) use outcomes (specifically those related to child well-being), rather than services delivered, to measure program success. Our report did not address the effectiveness of specific services; however, we agree that information about effective practices is an important tool that child welfare agencies can use to determine how best to allocate available funds. Additionally, our work has long shown that using outcomes is an important component of measuring program success.
HHS also discussed the use of TANF funds for child welfare purposes in its comments, and noted that in addition to the services described in our report, TANF funds are spent on foster care maintenance payments and adoption subsidies, as well as relative foster care maintenance payments and guardianship subsidies. Because this report focuses on expenditures for services typically covered under Title IV-B, we did not include maintenance payments and adoption subsidies in the scope of our review. We have clarified that, due to the similarity among these payment types, we excluded relative maintenance payments and guardianship subsidies as well. HHS also noted that states may spend federal TANF funds on purposes authorized solely under prior law that do not meet a TANF purpose, and that many of these expenditures are for child welfare purposes. We have also clarified that our analysis includes these expenditures, as appropriate. Finally, HHS described planned revisions to its TANF expenditure reporting form to capture more detailed information about how states spend TANF funds on child welfare payments and services. We have not reviewed these plans; however, we recently recommended that HHS develop a detailed plan and specific timelines to help monitor its progress in revising these TANF reporting categories. In addition to these general comments, HHS also provided us with technical comments that we incorporated, as appropriate. We are sending copies of this report to relevant congressional committees, the Secretary of Health and Human Services, and other interested parties. In addition, this report will be available at no charge on GAO’s website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7215 or brownke@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II. In addition to the contact named above, the following staff members made key contributions to this report: Elizabeth Morrison, Assistant Director; Lauren Gilbertson; James Lloyd; Erin McLaughlin; Ellen Phelps Ranen; and Deborah Signer. Also contributing to this report were: Susan Anthony, Jeff Arkin, Carl Barden, James Bennett, Jessica Botsford, David Chrisinger, Kim Frankena, Ashley McCall, Phillip McIntyre, Jean McSween, Almeta Spencer, Hemi Tewarson, James Rebbe, and Carolyn L. Yocom.
In fiscal year 2011, over 675,000 children were found to be victims of abuse or neglect. To help ensure that such children have safe and permanent homes, state and local child welfare agencies secure child welfare services, such as parenting classes and substance abuse treatment. Title IV-B of the Social Security Act is the primary source of federal funding designated for child welfare services that is available to states. In fiscal year 2012, Congress appropriated $730 million under Title IV-B. Although states augment these funds with state, local, and other federal funds, some children and families may not receive the services they need. Congress mandated that GAO provide information about the funding and provision of child welfare services. This report addresses: (1) how selected states use funds provided under Title IV-B, (2) what alternative sources of federal funding states use to fund child welfare services and other activities covered under Title IV-B, and (3) what services, if any, child welfare agencies have difficulty securing for children and their families. To answer these questions, GAO reviewed relevant laws, regulations, guidance, and reports; analyzed HHS expenditure data and program evaluations; and interviewed HHS officials, child welfare experts, and state and local child welfare officials in 4 states and 13 localities selected to illustrate a variety of approaches to financing and delivering services. GAO also reviewed state fiscal year 2011 expenditure data from selected states and administered a data collection instrument to selected localities.

The four states GAO selected used funds provided under Title IV-B of the Social Security Act for a variety of child welfare services and other activities, and had different strategies for spending these funds. For instance, in fiscal year 2011 Virginia provided funding to all local child welfare agencies to spend on their own priorities, such as parenting classes. New Mexico targeted certain counties for services, such as intensive in-home services for families at risk of foster care. States nationwide also use other federal funds, such as Temporary Assistance for Needy Families (TANF) and Social Services Block Grant (SSBG) funds, as well as Medicaid, for purposes covered under Title IV-B. In the spring of 2011, 31 states reported spending TANF funds, and in fiscal year 2010, 44 states reported spending SSBG funds on these purposes. Some states also claim federal Medicaid reimbursement for activities covered under Title IV-B. One selected state, Minnesota, claimed reimbursement for case management for children at risk of foster care placement in 2011. Funds authorized under Title IV-E of the Social Security Act make up the large majority of federal child welfare funds, but are designated for purposes such as providing room and board payments for children in foster care and subsidies to adoptive parents, and generally cannot be used for child welfare services. However, 14 states have waivers allowing them to use these funds more flexibly to improve child and family outcomes. Among GAO's selected states, Florida had a waiver allowing it to use some Title IV-E funds for in-home services designed to prevent foster care placement.

Many services, including substance abuse treatment and assistance with material needs, such as housing, are difficult for child welfare agencies to secure due to a variety of challenges. A 2008-2009 U.S.
Department of Health and Human Services (HHS) survey that sampled children and families in the child welfare system found that many did not receive needed services. For example, an estimated 58 percent of children age 10 and under at risk of emotional, behavioral, or substance abuse problems had not received related services in the past year. Local child welfare officials in four selected states reported service gaps in multiple areas. Service gaps may harm child well-being and make it more difficult to preserve or reunite families. For example, officials from one locality noted 2- to 3-month wait times for substance abuse services. Due to the chronic nature of the disease, delays in receiving services may make it more difficult to reunify families within mandated deadlines. Officials cited factors contributing to service gaps, including provider shortages and lack of transportation. Additionally, officials noted difficulty securing services from partner agencies, such as housing authorities. State fiscal constraints, which affect both child welfare and partner agencies, contribute to such difficulties.
The Coast Guard is a multimission, maritime military service within DHS. The Coast Guard has a variety of responsibilities including port security and vessel escort, search and rescue, and polar ice operations. To carry out these responsibilities, the Coast Guard operates a number of vessels, aircraft, and information technology systems. Since 2001, we have reviewed Coast Guard acquisition programs and reported to Congress, DHS, and the Coast Guard on the risks and uncertainties inherent in its acquisitions. Several of our reports have focused on the Coast Guard’s former Deepwater acquisition program that was created to build and modernize ships, aircraft, and other capabilities. In our July 2011 report on the Deepwater program, we found that the program continues to exceed the cost and schedule baselines approved by DHS in 2007, but that several factors precluded a solid understanding of the program’s true cost and schedule. These factors included approved acquisition program baselines that did not reflect the current status of some programs, unreliable cost estimates and schedules for selected assets, and a mismatch between funding needed to support all approved Deepwater baselines and expected funding levels. We concluded that while the Coast Guard has strengthened its acquisition management capabilities, it needed to take additional actions to address the cost growth, schedule delays, and expected changes to planned capabilities.

The Coast Guard’s current acquisition portfolio includes 16 major acquisition programs—12 of which were part of the former Deepwater program. Major acquisitions—level I and level II—have life-cycle cost estimates equal to or greater than $1 billion (level I) or from $300 million to less than $1 billion (level II) as outlined in the Coast Guard’s Major Systems Acquisition Manual. Table 1 provides further information about the Coast Guard’s major acquisition programs.

Three key Coast Guard directorates—capabilities, resources, and acquisition—are involved in the major acquisition process. Program managers in the acquisition directorate are required to integrate input from these three directorates into a coherent strategy to achieve specific cost, schedule, and performance parameters for their programs. Figure 1 identifies some key documents that program managers use in this process and, according to the Major Systems Acquisition Manual, what should happen if a program manager’s cost estimate for achieving requirements established by the capabilities directorate does not match the Coast Guard’s approved or proposed budget. Additionally, major acquisition programs are to receive oversight from DHS’s Investment Review Board, which is responsible for reviewing acquisitions for executable business strategies, resources, management, accountability, and alignment to strategic initiatives. The Board also supports the Acquisition Decision Authority in determining the appropriate direction for an acquisition at key Acquisition Decision Events (ADE). At each ADE, the Acquisition Decision Authority approves acquisitions to proceed through the acquisition life-cycle phases upon satisfaction of applicable criteria. Further, Component Acquisition Executives at the Coast Guard and other DHS components are responsible in part for managing and overseeing their respective acquisition portfolios.

DHS has a four-phase acquisition process:

Need phase—define a problem and identify the need for a new acquisition.
This phase ends with ADE-1, which validates the need for a major acquisition program.

Analyze/Select phase—identify alternatives and select the best option. This phase ends with ADE-2A, which approves the acquisition to proceed to the obtain phase and includes the approval of the acquisition program baseline.

Obtain phase—develop, test, and evaluate the selected option and determine whether to approve production. During the obtain phase, ADE-2B approves a discrete segment if an acquisition is being developed in segments and ADE-2C approves low-rate initial production. This phase ends with ADE-3, which approves full-rate production.

Produce/Deploy/Support phase—produce and deploy the selected option and support it throughout the operational life cycle.

Figure 2 depicts where level I and II Coast Guard assets currently fall within these acquisition phases and decision events.

In conjunction with the management of these programs through the acquisition process, the Coast Guard and DHS have also undertaken a series of studies in the past several years focused on requirements and the mix of assets in the Coast Guard’s acquisition portfolio. Many of these studies have primarily focused on the assets that were part of the Deepwater program, commonly referred to by the Coast Guard as the program of record:

In September 2003, the Coast Guard completed a performance gap analysis that determined the Deepwater fleet would have significant capability gaps in meeting emerging mission requirements following the September 11, 2001, terrorist attacks. Due to fiscal constraints, the Coast Guard decided not to make any significant changes to the planned Deepwater fleet, but did approve several asset capability changes that were reflected in the 2005 Mission Need Statement, which outlines capabilities the Coast Guard needs to meet its mission demands.

In December 2009, the capabilities directorate completed a fleet mix analysis which was intended to be a fundamental reassessment of the capabilities and mix of assets the Coast Guard needs to fulfill its Deepwater mission.

In May 2011, the capabilities directorate completed a second fleet mix analysis which primarily assessed the rate at which the Coast Guard could acquire the program of record within a range of cost constraints.

In August 2011, DHS completed a cutter study which developed alternative cutter fleets that equaled the acquisition cost, at the time of the analysis, of the cutter fleet program of record, and assessed the expected performance of these alternative fleets compared to the program of record.

In July 2011, we reported that it was unclear how DHS and the Coast Guard would reconcile and use these multiple studies to make trade-off decisions. We recommended that the Secretary of the Department of Homeland Security develop a working group that includes participation from DHS and the Coast Guard’s capabilities, resources, and acquisition directorates to review the results of the studies to identify cost, capability, and quantity trade-offs that would produce a program of record that fits within expected budget parameters. DHS concurred, but has not yet implemented this recommendation; the Senate Report accompanying the 2013 DHS Appropriations Bill directs DHS and the Coast Guard to develop this working group.

Outdated acquisition program baselines and uncertainty surrounding the affordability of the Coast Guard’s acquisition portfolio continue to limit visibility into the current cost and schedule of the Coast Guard’s major acquisitions.
Even though the Coast Guard has revised 15 out of 16 baselines in its major acquisition portfolio at least once, 10 of those 15 baselines do not reflect the current cost or schedule of the programs. According to the acquisition program baselines that are approved as of July 2012 and total program cost for programs with no planned funding beyond fiscal year 2014, the Coast Guard is managing a portfolio of major acquisitions that could cost as much as $35.3 billion—or 41 percent more than the original estimate of $25.1 billion—but the majority of these baselines do not reflect the current status of these programs. DHS and the Coast Guard have acknowledged that affordability of the Coast Guard’s portfolio is a challenge, but the mismatch between resources needed to support all approved baselines and anticipated funding levels continues to affect Coast Guard acquisitions. Some of this mismatch could be alleviated by the Coast Guard’s current five-year budget plan, which does not include the final two National Security Cutters; however, Coast Guard officials have stated that, regardless of this plan, the Coast Guard continues to support completing the program of record. A decision to pursue the final two National Security Cutters in the near-term budget years could have significant portfolio-wide implications.

The Coast Guard has revised baselines for 15 of the 16 programs in its major acquisition portfolio at least once; however, 10 of the 15 revised baselines do not reflect the current cost or schedule of the programs. We found that the revised baselines do not reflect current cost and schedule for one or more of the following reasons:

Program reported a cost or schedule breach to DHS, but does not have a DHS-approved baseline to reflect corrective actions for the breach as required. Seven out of 16 programs in the Coast Guard’s major acquisition portfolio fall into this category. The dates of these breach notifications range from April 2009 through December 2011.

Program has changed in scope, which could have cost and/or schedule implications, but its DHS-approved baseline does not reflect these changes. Two out of 16 programs in the Coast Guard’s major acquisition portfolio fall into this category.

Program does not expect to receive funding beyond fiscal year 2014, but its DHS-approved baseline still reflects such funding. Four out of 16 programs in the Coast Guard’s major acquisition portfolio fall into this category. Based on the fiscal years 2013-2017 capital investment plan, Coast Guard officials do not anticipate funding for these programs through fiscal year 2017, which means the programs cannot execute their current baselines as planned.

These outdated baselines do not provide DHS, the Coast Guard, and Congress with accurate information about the current cost and schedule of the Coast Guard’s major acquisition portfolio. According to the Major Systems Acquisition Manual, the acquisition program baseline provides a critical reference point for measuring and reporting the status of program implementation, and revised baselines should be submitted to DHS within 90 days after reporting a breach. Coast Guard officials acknowledged that the approved baselines do not reflect the status of many programs, but stated the update process is lengthy and sometimes interrupted by decisions made in the budget process each year.
For example, the National Security Cutter program office formally notified DHS of a cost and schedule breach in November 2011, and program officials told us that Coast Guard leadership is reviewing a draft baseline. However, officials stated that the draft baseline may no longer be valid because it was based on a funding profile that was changed in the fiscal year 2013-2017 capital investment plan submitted to Congress, triggering the need to update the baseline once again. Likewise, in response to our request for current cost estimates and schedules for each program, senior resource directorate officials told us that current estimates were not available for release because they did not know how they would be affected by future funding allocations. Without a stable funding profile, program managers will likely always be at a disadvantage as they must frequently update baselines based on the budget rather than having a stable budget reflecting program baselines. Furthermore, our prior Department of Defense (DOD) work has found that balancing investments late in the budget process often leads to additional churn in programs, such as increased costs and schedule delays, and encumbers efforts to meet strategic objectives.

We made a recommendation in July 2011 that the Coast Guard adopt action items found in the acquisition directorate’s October 2010 Blueprint for Continuous Improvement (Blueprint), such as promoting stability in the capital investment plan by measuring the percentage of projects stably funded year to year in the plan, ensuring acquisition program baseline alignment with the capital investment plan by measuring the percentage of projects where the acquisition program baselines fit into the capital investment plan, and establishing project priorities as a Coast Guard-wide goal. By promoting stability in the capital investment plan, the Coast Guard may be able to address the churn in the acquisition program budgeting process and help ensure that programs receive and can plan to a more predictable funding stream. DHS concurred, but has not yet fully implemented this recommendation. Coast Guard officials told us that the acquisition directorate did develop a metric to measure the percentage of programs stably funded from year to year, which confirmed wide fluctuations in funding for most programs from year to year. However, it is unclear whether the Coast Guard will pursue the remaining action items.

While Coast Guard officials acknowledged that baselines for many of its major acquisitions do not reflect the current status of the programs, even using the approved program baselines as of July 2012 and total program cost for programs with no planned funding beyond fiscal year 2014, the estimated total acquisition cost of Coast Guard major acquisitions could be as much as $35.3 billion. This is about $10 billion more than the original baselines, which totaled $25.1 billion, and represents an increase of approximately 41 percent. Figure 3 compares each major acquisition asset’s cost from the original program baseline with the latest revised baselines that have been approved by the Coast Guard, if available. For those programs with no planned funding beyond fiscal year 2014, figure 4 compares the original baseline with estimated total program cost based on budget data.
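The portfolio growth cited above is simple arithmetic on the baseline totals. As a minimal sketch using only the dollar figures reported here, the following Python snippet reproduces the roughly $10 billion and 41 percent figures:

```python
def percent_growth(original_billions: float, revised_billions: float) -> float:
    """Percentage growth of revised baselines over original baselines."""
    return (revised_billions - original_billions) / original_billions * 100

# Portfolio totals cited in this report, in billions of dollars.
original_total = 25.1
revised_total = 35.3

print(f"Dollar growth: ${revised_total - original_total:.1f} billion")   # ~$10.2 billion
print(f"Percentage growth: {percent_growth(original_total, revised_total):.0f} percent")  # ~41 percent
```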
As we have previously reported, the cost increases associated with many of these revised baselines reflect the Coast Guard’s and DHS’s efforts to better understand the acquisition costs of individual assets that formerly made up the Deepwater program, as well as provide insight into the drivers of cost growth. For example, the Coast Guard has attributed the more than $1 billion rise in the Fast Response Cutter’s cost to a reflection of actual contract cost from the September 2008 contract award and costs for shore facilities and initial spare parts not included in the original baseline. Another example of the Coast Guard gaining more insight into the cost of individual assets is the Offshore Patrol Cutter program. The initial Deepwater baseline included an $8 billion estimate for the Offshore Patrol Cutter program. However, program officials stated they did not have good data for how the lead systems integrator for the Deepwater program generated the original estimate, and that the current estimate approved by DHS in April 2012—with a threshold of approximately $12 billion—is likely higher because the original estimate was developed before the program requirements were established. Program officials also cited delays in the program, and the corresponding inflation associated with those delays, as additional reasons for the cost increase. Even though the Coast Guard used the original 2007 Deepwater Baseline estimate of $8 billion to characterize the expected cost of the program multiple times to Congress, it now characterizes the revised acquisition program baseline as the initial cost estimate for the program.

Without baselines that reflect current cost and schedule, DHS and the Coast Guard will not have adequate information to determine if the Coast Guard can afford other major acquisition programs that are expected to begin within the next few years. The Coast Guard is in the early stages of planning for several new acquisitions including icebreakers, river buoy tenders, and a biometrics-enabled identity program. In addition, officials at the Coast Guard’s Aviation Logistics Center told us they recently identified that the end of service life for the HH-60s and HH-65s could be reached as early as the 2022 time frame—not the 2027 time frame as originally planned. Officials added that this will require the Coast Guard to either buy new HH-60s and HH-65s or conduct a service life extension—previous service life extensions have been funded with acquisition dollars. Coast Guard officials told us that additional research is being conducted regarding the life expectancy of these helicopters, including using forecasting models to update service life limits. Regardless, officials also stated that the Coast Guard plans to maintain continuous operational capability. Furthermore, we recently reported that the medium endurance cutters may also need a service life extension program to limit operational gaps until the Offshore Patrol Cutters are in service. Given that the Coast Guard does not have adequate information concerning the cost of its current portfolio, it is not well positioned to accurately assess the affordability of these programs as requirements are developed for these new assets.

The mismatch we reported in July 2011 between resources needed to support all approved program baselines and expected funding levels continues to affect the Coast Guard, requiring it to make decisions about which programs to fund and which programs not to fund as part of the annual budget formulation process.
For example, in the fiscal year 2013 budget request, the following major acquisition programs were funded at a level lower than identified in the programs’ life cycle cost estimates for that year: Maritime Patrol Aircraft, Fast Response Cutter, HC-130J/H, and C4ISR. Combined, the Coast Guard requested approximately $500 million less than what was identified in the life cycle cost estimates for these programs. The funding needs for these programs have not gone away, and the Coast Guard will have to fund those activities in future fiscal years.

Both DHS and the Coast Guard have acknowledged this resource challenge, but efforts to address these challenges have not resulted in a clear strategy for moving forward. For example, in an April 2011 acquisition decision memorandum concerning Coast Guard acquisition program breaches, DHS stated that future breaches in Coast Guard programs would be almost inevitable as funding resources diminish. DHS also directed the Coast Guard to develop a plan for showing program trade-offs that illuminates the balance between operational commitments, recapitalization, and the realities of the capital investment plan. Following the Coast Guard’s presentation of the plan to DHS, DHS issued a second acquisition decision memorandum in August 2011 which stated the Coast Guard presented a global, systematic, and overarching solution to future funding shortfalls that addressed programmatic, resource, and operational impacts. However, a senior DHS official involved with this review told us that the presentation only brought to light the challenges, and did not present a solution. The briefing slides provided to us were redacted due to the Coast Guard’s belief that they contained budget negotiation information, so we were unable to reconcile whether a solution was presented. Coast Guard officials stated they had no other examples of a similar portfolio-wide review to address future funding shortfalls. Without a portfolio analysis to establish long-term priorities to guide the budget process, it will be difficult for the Coast Guard to address this mismatch of funding and understand how decisions concerning one program affect another program.

Some of the resource challenges in near-term years could be alleviated if the Coast Guard executed its fiscal year 2013-2017 capital investment plan. For example, this plan does not include funding for National Security Cutter 7 in fiscal year 2014 or National Security Cutter 8 in fiscal year 2015, as was the plan in previous years. However, resource and acquisition directorate officials told us that the Coast Guard continues to support a program of record of eight National Security Cutters. A senior Coast Guard acquisition official added that the Coast Guard has an urgent need for the last two cutters, and not buying these two ships would require major adjustments to other acquisition plans. However, as seen in figure 5, if the Coast Guard chooses to pursue National Security Cutter 7 in fiscal year 2014 and National Security Cutter 8 in fiscal year 2015, there will be a significant mismatch in funding required based on life cycle cost estimates versus expected funding levels in the fiscal year 2013-2017 capital investment plan—especially given that some of the activities not funded in fiscal year 2013 are expected to be funded in subsequent years.
If National Security Cutters 7 and 8 are included in future budgets, decision makers will likely be faced with a difficult choice: pull funds from other high-priority federal programs to support Coast Guard acquisitions or accept that some capabilities the Coast Guard promised will have to be deferred to later years. However, deferring costs could lead to what is commonly characterized as a bow wave—or an impending spike in the requirement for additional funds—unless the Coast Guard proactively chooses to make some trade-off decisions by re-examining requirements. Coast Guard acquisition officials told us that one way the Coast Guard is trying to address portfolio affordability is through an update to its Major Systems Acquisition Manual. According to draft language, the acquisition directorate’s Office of Resource Management will be required to maintain a chart to visually depict all competing acquisition program priorities within the capital investment plan at various points in time, similar to the chart included in GAO’s Cost Estimating and Assessment Guide. Officials told us that each acquisition program will be required to include this chart in its required materials for future acquisition decision events. This update to the Coast Guard’s acquisition manual follows best practices outlined in GAO’s Cost Estimating and Assessment Guide, with the exception that the guide notes the affordability assessment should, preferably, be conducted several years beyond the programming period.

Opportunities exist for the Coast Guard to address the affordability of the fleet and major cutters through the requirements process, which takes broad mission and capability needs and converts them to system-specific capabilities. The Coast Guard completed two efforts to reassess the mix of assets, but both efforts only used its program of record, based upon the 2005 Mission Need Statement, as the basis of the analysis and did not consider realistic fiscal constraints. While the Coast Guard remains committed to this 2005 Mission Need Statement, it may not be on a path to achieve several of the capabilities necessary to respond to mission demands identified after September 11, 2001, or realize its vision for a presence-based operating concept. Combined with cost growth, the Coast Guard is at risk of pursuing a fleet that is not affordable and will not be able to operate in the manner envisioned. Balancing capability and affordability is also a concern for the Coast Guard’s and DHS’s largest acquisition, the Offshore Patrol Cutter—which Coast Guard officials stated is the first acquisition in the Deepwater surface fleet in which the Coast Guard had complete control over the requirements development process. However, even though the Coast Guard has made some changes to reduce the estimated acquisition cost of the Offshore Patrol Cutter, DHS Office of Policy and the Office of the Chief Financial Officer have expressed concern regarding future cost growth and the program crowding out other Coast Guard programs in future budget years. Further, the requirements and missions for the Offshore Patrol Cutter have similarities to those of the National Security Cutter, though their costs vary at this time.

In the first effort, the fleet mix analysis completed in December 2009, the Coast Guard found that it requires a fleet that could cost $65 billion to meet its long-term strategic goals, which is about $40 billion more than the $24.2 billion program of record. (See GAO, Observations on the Coast Guard’s and the Department of Homeland Security’s Fleet Studies, GAO-12-751R (Washington, D.C.: May 31, 2012).)
Coast Guard officials told us that they do not consider the $65 billion fleet to be affordable and are not using it to inform decision making. In the second effort, Fleet Mix Phase Two, the Coast Guard analyzed how long it would take to buy the program of record under two different funding constraints: (1) an upper bound of $1.64 billion per year and (2) a lower bound of $1.2 billion per year for surface and aviation assets. Both of these bounds kept the aviation funding level constant at $350 million per year. As we reported in May 2012, and as shown in figure 7, both the upper and lower bound funding scenarios are greater than the Coast Guard’s past 5 years of appropriations and its fiscal year 2013 request, indicating the upper bound funding level is unrealistic and the lower bound is optimistic.

The program of record that the Coast Guard remains committed to is based upon its 2005 Mission Need Statement, which Coast Guard officials told us serves as the guiding document for its recapitalization effort. This Mission Need Statement outlines capabilities the Coast Guard needs to meet its mission demands, including 11 capabilities established after September 11, 2001. In addition, it identifies those capabilities that would allow the Coast Guard to become more proactive through increased surveillance and presence, as opposed to responding to events after they occur. According to the Mission Need Statement, this presence-based operating concept will lead to operations that detect and interdict threats as far from the United States as possible.

While the Coast Guard remains committed to this 2005 Mission Need Statement, it may not be on a path to achieve several of the capabilities necessary to address gaps that emerged following the September 11, 2001, terrorist attacks. We traced 11 system performance capabilities identified in the 2005 Mission Need Statement through various program documents, including the 2007 Deepwater acquisition program baseline, operational requirements documents, and testing documentation to identify which capabilities the Coast Guard is currently planning to acquire. As seen in table 2, the Coast Guard’s progress in acquiring the capabilities identified in this document is mixed as it has acquired some capabilities while other capabilities have been refined or clarified over time, are no longer planned for certain assets, or have been cancelled altogether.

In addition to these 11 capabilities, the Coast Guard also identified the need for persistent wide-area surveillance in the 2005 Mission Need Statement to achieve the presence-based vision. Two of the solutions required to enable this capability, in addition to the C4ISR system discussed in table 2, are data transmission capacity—or bandwidth—and Unmanned Aerial Systems. However, the Coast Guard has struggled to supply its assets with the bandwidth necessary to support information-based operations. Further, as we previously reported, the Unmanned Aerial Systems were envisioned as a key component of the Deepwater system that would enhance surveillance capability on board the National Security Cutter and Offshore Patrol Cutter and also from land. Congress has appropriated over $100 million since 2003 to develop an Unmanned Aerial System, but the Coast Guard terminated the program due to cost increases and technical risks in June 2007.
According to Coast Guard officials, the Coast Guard established a partnership with the Navy’s Fire Scout program in October 2008 and has developed plans to install a system that will facilitate a future demonstration of the Fire Scout on the National Security Cutter. As an interim solution, the Coast Guard has proposed a non-major acquisition to purchase a smaller, less capable, and less costly unmanned aerial vehicle. In August 2012, the Coast Guard held a technical demonstration on board the National Security Cutter that experimented with a possible Navy solution, called the Scan Eagle, which may satisfy the Coast Guard’s need for a smaller, less capable unmanned aerial vehicle. The Coast Guard currently has plans for a more in-depth demonstration in fiscal year 2013. Due to these capability shortfalls, the Coast Guard is at risk of purchasing a fleet that will not be able to close all of the gaps identified following the September 11, 2001, terrorist attacks or fully conduct operations in a presence-based manner. While the 2005 Mission Need Statement presented a business case for the Coast Guard’s future investments, the Coast Guard has not re-examined the value of these assets in light of the difficult affordability decisions likely to come. By continuing to pursue some capabilities and not others without reevaluating the portfolio as a whole, the Coast Guard is increasing the risk that it may not accomplish the goals envisioned in 2005 and cannot ensure it is maximizing the value of the assets it is buying.

The Coast Guard took some steps to improve the requirements development process for the Offshore Patrol Cutter—the largest acquisition in DHS’s acquisition portfolio and, according to officials, the first acquisition in the Deepwater surface fleet in which the Coast Guard had complete control over the requirements development process. The Coast Guard undertook studies and analysis that, in part, considered the measurability and testability, as required by guidance, of the following four key performance parameters: operating range, operational sustainment and crew, speed, and patrol endurance. For example, the range requirement, which is the distance the cutter can travel between refuelings, is clearly stated as a minimum acceptable requirement of 8,500 nautical miles at a constant speed of 14 knots to a maximum level of 9,500 nautical miles. Although cutters typically transit at various speeds over the course of a patrol, the Coast Guard conducted analysis to determine that the 14-knot speed at the minimum and maximum ranges would provide enough days between refueling given the percentage of time that the Coast Guard normally operates at certain speeds; a simplified version of this endurance calculation is sketched following the discussion of these parameters below. By developing a measurable range requirement, the Coast Guard helped to promote a clear understanding of Offshore Patrol Cutter performance by potential shipbuilders and sought to balance the cost of additional range with the value that it provides. Furthermore, officials at the independent test authority—the Navy’s Commander Operational Test and Evaluation Force—told us that they have been actively involved through the requirements development process and many of their questions regarding testability have been resolved. Two other key performance parameters—seakeeping and interoperability—are not as consistent with the Coast Guard’s guidelines of measurability and testability as identified in the Major Systems Acquisition Manual.
For example, the seakeeping key performance parameter described in the requirements document states that the Offshore Patrol Cutter shall be able to launch small boats and helicopters in 8.2- to 13.1-foot waves. However, in the specifications document, which is used to translate the requirements document into a level of detail from which contractors can develop a reasonably priced proposal, the Coast Guard states that the Offshore Patrol Cutter shall be able to launch small boats and helicopters in no more than 10.7-foot waves while transiting in a direction that minimizes the pitch and roll of the vessel—an important detail not specified in the requirements document. Further, the interoperability key performance parameter states that the Coast Guard must be able to exchange voice, video, and data with the Department of Defense and Homeland Security agencies. However, it does not list specific external partners or substantial details regarding the systems required to exchange data and the types and sizes of these data—details that would make the parameter measurable and testable. This key performance parameter does not distinguish between the parts of the military that the Coast Guard operates with most often, such as the U.S. Navy and the intelligence community, and the rest of DOD; it simply requires interoperability with all of DOD. Similarly, the interoperability key performance parameter does not specify the DHS agencies with which the Coast Guard must exchange data, which makes this parameter difficult to test. The Coast Guard’s independent testing officials agreed that this key performance parameter, as currently written, is not testable in a meaningful way and stated that there are ongoing efforts to improve the clarity of this requirement.

During the requirements development process for the Offshore Patrol Cutter, the Coast Guard also made some decisions with respect to affordability. The following are examples where the Coast Guard made capability trades that are expected to help lower the program’s acquisition cost:

Speed—after a series of analyses, the Coast Guard decided to reduce the minimum acceptable speed from 25 to 22 knots, thereby, according to officials, potentially eliminating the need for two diesel engines. According to a study completed by the Coast Guard, this trade could reduce the acquisition cost of each cutter by $10 million.

Stern Launch—the Coast Guard removed the stern launch ramp capability from the Offshore Patrol Cutter design. While this trade-off may inhibit the launch and recovery of small boats in certain conditions, such as substantial roll or side-to-side movement of the vessel, Coast Guard officials stated that it will reduce the cost of the cutter because a stern launch ramp requires the cutter to be heavier, thus adding cost.

C4ISR—the Coast Guard eliminated a minimum requirement for an integrated C4ISR system and instead is requiring a system built with interfaces to communicate between different software programs. According to Coast Guard officials, the Coast Guard now plans to use a Coast Guard-developed software system—Seawatch—rather than the more costly lead systems integrator-developed software system currently installed on the National Security Cutter, even though this system does not provide the Coast Guard with the capability to exchange near real-time battle data with DOD assets.
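To make the range requirement discussed above concrete, the sketch below converts range and speed into days between refueling. This is a simplified constant-speed illustration, assuming steady transit at 14 knots; the Coast Guard's own analysis accounted for the mix of speeds at which cutters normally operate.

```python
def days_between_refueling(range_nmi: float, speed_knots: float) -> float:
    """Days of continuous transit implied by a range requirement.

    A knot is one nautical mile per hour, so range divided by speed
    gives transit hours; dividing by 24 converts hours to days.
    """
    return range_nmi / speed_knots / 24

# Range values from the Offshore Patrol Cutter requirement, at 14 knots.
print(f"Minimum (8,500 nmi): {days_between_refueling(8_500, 14):.1f} days")  # ~25.3 days
print(f"Maximum (9,500 nmi): {days_between_refueling(9_500, 14):.1f} days")  # ~28.3 days
```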
The improvements and affordability decisions that the Coast Guard has made in its requirements development process for the Offshore Patrol Cutter are even more evident when compared with the process for generating requirements for its other major cutter—the National Security Cutter. Due to the nature of the lead systems integrator strategy that the Coast Guard initially used to buy the National Security Cutter, Integrated Coast Guard Systems developed the requirements, designed, and began producing the National Security Cutter before the requirements document was completed. The Coast Guard did not have an operational requirements document at the time it awarded the construction contract for the first cutter in 2004, but it documented the requirements in 2006. Further, even as the third National Security Cutter was in production, the Coast Guard was refining the requirements and, in January 2010, made the decision to clarify some key performance parameters such as anti-terrorism/force protection and underwater mine detection because the existing requirements were not testable. To further remedy the lack of clear requirements, Coast Guard officials stated that they are currently developing a second version of the requirements document that improves the specificity and definition of many of the National Security Cutter’s requirements and will be used as criteria during operational testing. To date, the Coast Guard has not reduced the National Security Cutter’s capability for the purpose of affordability as it has done for the Offshore Patrol Cutter. However, according to Coast Guard officials, there is a revised acquisition program baseline under review which will reflect an ongoing effort to lower the acquisition cost of the vessel.

The requirements and missions for the National Security Cutter and the Offshore Patrol Cutter programs have similarities, but the actual cost for one National Security Cutter compared to the estimated cost of one Offshore Patrol Cutter varies greatly. Even though the Coast Guard took steps to consider affordability while developing the requirements for the Offshore Patrol Cutter, those affordability decisions do not explain the magnitude of the difference between these two costs. Table 3 compares the expected performance of a National Security Cutter with the objective/threshold requirements of an Offshore Patrol Cutter, the missions each cutter is expected to perform, and the actual/estimated costs for each cutter. This comparison raises questions about whether the Offshore Patrol Cutter could be a less expensive, viable substitute for the National Security Cutter or whether there are assumptions built into the Offshore Patrol Cutter cost estimate, not related to requirements, which are driving the estimated costs down.

With respect to the first question, DHS, motivated by concerns about the affordability of the National Security Cutter program, completed a Cutter Study in August 2011 which included an analysis to examine the feasibility of varying the combination of objective—or optimal performing—Offshore Patrol Cutters and National Security Cutters in the program of record. Through this analysis, DHS found that defense operations are a key factor in determining the quantity of National Security Cutters needed and that the Coast Guard only needs 3.5 National Security Cutters per year to fully satisfy the planned requirement for defense-related missions.
DHS concluded that with six National Security Cutters the Coast Guard can meet its goals for defense operations and mitigate some of the near-term capacity loss of the five National Security Cutter fleet modeled in the Cutter Study. DHS Program Analysis and Evaluation officials stated that this, in conjunction with other information, helped to inform the decision to not include the last two National Security Cutter hulls—hulls 7 and 8—in the fiscal years 2013-2017 capital investment plan. However, the DHS Cutter Study also notes that the time line for the two acquisitions makes a trade-off between the National Security Cutter and the Offshore Patrol Cutter difficult since the National Security Cutter program is in production whereas the Offshore Patrol Cutter program is only in the design phase. Similarly, we have reported that the Coast Guard may face an operational gap in its ability to perform missions using major cutters due to the condition of the legacy fleet.

With respect to the second possibility—that there are assumptions built into the Offshore Patrol Cutter cost estimate that are driving the estimated costs down—the Coast Guard included three key assumptions in the Offshore Patrol Cutter’s life cycle cost estimate, generally not related to the cutter’s key requirements, which lower the estimated cost in comparison to the actual cost of the National Security Cutter. These three assumptions are:

Learning Curve. The Coast Guard assumes that the shipyard(s) will generally continue to reduce the labor hours required to build the Offshore Patrol Cutter through the production of all 25 vessels. This may prove optimistic, particularly for later ships in the class, because the amount of additional learning per vessel—or efficiencies gained during production due to improving the manufacturing process to build the ship in a way that requires fewer labor hours—typically decreases over time in a shipbuilding program.

Military versus Commercial Standards. The life cycle cost estimate assumes that certain areas of the Offshore Patrol Cutter’s construction and material would reflect an average of 55 percent commercial standards—or construction standards that are typically used for military sealift ships that provide ocean transportation—and 45 percent military standards—or construction standards typically used for Navy combat vessels. Any changes in this assumption could have a significant effect on the cost estimate because military standards require more sophisticated construction applications, particularly in the areas of shock hardening and signature reduction, to prepare a ship to survive battle. Such sensitivity could help to explain the difference in costs between the Offshore Patrol Cutter program and the National Security Cutter program; officials stated that the latter program is being built to about 90 percent military standards.

Production Schedule. The cost estimate reflects the Coast Guard’s plan to switch from building one Offshore Patrol Cutter per year to building two Offshore Patrol Cutters per year beginning with the fourth and fifth vessels in the class. If the Coast Guard cannot achieve or maintain this build rate due to budget constraints, it may choose to stretch the schedule for the program, which in turn could increase costs.
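The learning curve assumption can be illustrated with a standard Wright learning-curve model, in which unit labor hours fall by a fixed percentage each time cumulative production doubles. The sketch below is illustrative only: the first-hull labor hours and the 90 percent learning rate are hypothetical values chosen for demonstration, not figures from the Coast Guard's life cycle cost estimate.

```python
import math

def unit_labor_hours(first_unit_hours: float, unit_number: int,
                     learning_rate: float) -> float:
    """Labor hours for a given hull under a Wright learning curve.

    Each doubling of cumulative output multiplies unit labor hours by
    learning_rate (0.90 means a 10 percent reduction per doubling).
    """
    return first_unit_hours * unit_number ** math.log2(learning_rate)

FIRST_HULL_HOURS = 1_000_000  # hypothetical
LEARNING_RATE = 0.90          # hypothetical 90 percent curve

for hull in (1, 2, 5, 10, 25):
    hours = unit_labor_hours(FIRST_HULL_HOURS, hull, LEARNING_RATE)
    print(f"Hull {hull:>2}: {hours:,.0f} labor hours")
```

Because the hours saved per additional hull shrink as production continues (in this example, hull 2 saves roughly 100,000 hours relative to hull 1, while hull 25 saves only a few thousand relative to hull 24), an estimate that assumes steady learning through all 25 vessels may understate later-hull costs, consistent with the report's caution that the assumption may prove optimistic.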
Coast Guard program officials generally agreed that these three variables are important to the cost of the Offshore Patrol Cutter and are key reasons why the Coast Guard expects one Offshore Patrol Cutter to cost less than half of one National Security Cutter. However, these officials recognized that the cost estimate for the Offshore Patrol Cutter is still uncertain since the cutter has yet to be designed—thus, the National Security Cutter’s actual costs are more reliable. Coast Guard program officials also added that the cost estimate for the Offshore Patrol Cutter is optimistic in that it assumes that the cutter will be built in accordance with the current acquisition strategy and planned schedule. They noted that any delays, design issues, or contract oversight problems—all of which were experienced during the purchase of the National Security Cutter—could increase the eventual price of the Offshore Patrol Cutter.

According to the April 2012 acquisition decision memorandum, which documents DHS’s approval for the Coast Guard to move forward and award design contracts for the Offshore Patrol Cutter, DHS Office of Policy and the Office of the Chief Financial Officer raised concerns about the potential for cost growth and this program crowding out other Coast Guard programs in future austere budget years. In response to concerns about affordability, DHS is requiring the Coast Guard to return for a special program review—one that is not required by acquisition guidance—before it awards a production contract, which is currently planned for fiscal year 2016. DHS Program Accountability and Risk Management officials told us that a new life cycle cost estimate is not required if the Coast Guard can demonstrate during this meeting that the acquisition cost and schedule in the approved acquisition program baseline are still valid. However, if there is a significant difference from the currently approved life cycle cost estimate, DHS would direct the Coast Guard at that time to update the life cycle cost estimate.

The Coast Guard has established an acquisition governance framework that includes the following teams: the Executive Oversight Council, the Systems Integration Team, and the Resource Councils. The Coast Guard is currently working on an update to its Major Systems Acquisition Manual that will articulate expectations for how these groups will interact. We found that the highest level team, the Executive Oversight Council—a group of admirals and senior executives—has actively conducted oversight meetings to govern the acquisition process for major acquisitions in the Coast Guard’s portfolio. However, these meetings were focused on individual programs, and the Council has not acted upon some information presented to it that could help to manage the portfolio as a whole. Coast Guard officials told us that portfolio affordability decisions are handled through the budget process. However, this approach results in year-to-year adjustments to individual programs that do not optimize the long-term value of the portfolio.

The Coast Guard has established a governance framework to provide leadership for the Coast Guard’s acquisition enterprise that includes the following teams: the Executive Oversight Council, Systems Integration Team, and Resource Councils. All of these teams have cross-directorate representation including members from the acquisitions, resources, and requirements directorates. These members are generally senior leaders including admirals, captains, and civilian executives.
Each group has a charter to identify its purpose and scope of responsibilities. Table 4 provides an overview of each team according to its charter. The Coast Guard is currently updating its Major Systems Acquisition Manual to document how these teams will interact within this established framework. The previous version of the manual highlights the Executive Oversight Council as a review board that supports a knowledge-based acquisition management approach, but does not include any references to the Systems Integration Team or the Resource Councils. Based on draft language of the update to the manual, the Systems Integration Team and Resource Councils will serve as senior level advisors to the Executive Oversight Council. Each of the Resource Councils will report directly to the Executive Oversight Council for issues within its own domain—cutter, aviation, or C4ISR—and report to the Systems Integration Team for issues that cross domains. The Systems Integration Team will be responsible for coordinating the resolution of these issues raised by the Resource Councils as well as providing coordinated recommendations to the Executive Oversight Council. In addition, the Systems Integration Team will meet quarterly to review Resource Council meeting minutes to help ensure issues that affect more than one council are being appropriately recognized.

Although Coast Guard officials stated the way in which teams are expected to interact with one another is still being formalized, we found that the following examples illustrate that the Executive Oversight Council oversees the acquisition governance framework and is well-positioned to delegate tasks to the other teams or pull information from them as needed to assist in the management of acquisitions or solve problems related to acquisitions:

At a June 2011 Executive Oversight Council meeting to discuss the Patrol Boat and Medium Endurance Cutter Sustainment programs, the Council tasked the Cutter Resource Council to provide recommendations for unobligated Patrol Boat project funds.

At an August 2011 Executive Oversight Council meeting to discuss the Coast Guard’s acceptance of the third National Security Cutter, the issue of the operational usefulness of the ship’s side door was raised. Officials suggested that the Cutter Resource Council may have a role in this discussion from an engineering perspective.

According to officials, in fall 2011, the Executive Oversight Council tasked the Systems Integration Team to assist in producing a strategy for sharing unclassified aviation imagery collected on classified systems so that it can be available for use throughout the Coast Guard. This is a cross-domain issue that was initially raised by the Aviation program office and involved the C4ISR and aviation stakeholders, among others. Coast Guard officials told us that a recommendation is currently in draft form.

A February 2012 memo documents Executive Oversight Council approval of the C4ISR Resource Council’s recommendations to clarify requirements in the Offshore Patrol Cutter’s requirements document.

The Executive Oversight Council has been active in meeting with individual programs to discuss the current status of the acquisition or particular issues, review key program documents, and help prepare program managers in advance of briefing more senior Coast Guard and DHS officials. According to Coast Guard documentation we reviewed, in 2010 and 2011, the Executive Oversight Council met 38 times with individual program managers to discuss major acquisitions.
The Council conducted its meetings on a program-by-program basis and did not meet to discuss issues across the portfolio. The results of these meetings generally led to the council members taking one of four actions: requesting follow-up information or another meeting; elevating issues and/or making a recommendation to the Deputy Commandant for Mission Support, Deputy Commandant for Operations, and/or Vice Commandant; making an acquisition management decision; or determining no further action is necessary as the meeting was primarily for informational purposes. Table 5 provides some examples of these meeting results.

While the Executive Oversight Council is positioned to have direct access to complete information on the progress of all acquisition programs as it conducts acquisitions oversight with support from the Systems Integration Team and Resource Councils, it has not acted on some information presented that could help the Coast Guard manage its portfolio as a whole. Our best practices work has found that successful commercial companies assess product investments collectively from an enterprise level, rather than as independent and unrelated initiatives, and prioritize investments by integrating the requirements, acquisition, and budget processes. This approach empowers leadership to make decisions about the best way to invest resources and holds managers accountable for outcomes. Organizations should use an integrated approach to prioritize needs and allocate resources in accordance with strategic goals, so they can avoid pursuing more products than they can afford and optimize return on investment. Appendix II provides additional details about four key portfolio management practices:

clearly define and empower leadership;

establish standard assessment criteria, and demonstrate comprehensive knowledge of the portfolio;

prioritize investments by integrating the requirements, acquisition, and budget processes; and

continually make go/no-go decisions to rebalance the portfolio.

These best practices suggest that one potential benefit of the Deepwater program as envisioned was the prospect of making trades within the portfolio as opposed to trying to manage and optimize each program individually. As we reported in April 2011, Coast Guard officials told us that as the Coast Guard began assuming the system integrator function from the Deepwater contractor in 2007, it believed it needed a forum to make trade-offs and other program decisions, especially in a constrained budget environment, and established the Executive Oversight Council. We did identify instances in which the Executive Oversight Council was presented with opportunities to manage its acquisitions as a portfolio, but tasks were not completed or no action was taken:

At the request of the Executive Oversight Council, in September 2010, the Systems Integration Team briefed the Council on strategic courses of action to revise acquisition program baselines under a budget constraint, but officials from the Systems Integration Team stated that the briefing led to no decisions or further taskings. Coast Guard officials stated that the briefing was also given to the Deputy Commandant for Mission Support and the Deputy Commandant for Operations.
The Acquisition Directorate’s October 2010 Blueprint for Continuous Improvement included action items for the Executive Oversight Council to establish, document, and approve project priority review time lines as well as publish project priority guidance to support a larger goal of developing and implementing effective and efficient decision making to maximize results and manage risk within resource constraints. The planned completion dates for these activities were the end of fiscal year 2011, but these action items have not yet been completed. Officials responsible for developing the Blueprint explained that the action items and associated completion dates may have been optimistic given the amount of cross-directorate collaboration required.

In May 2011, the Executive Oversight Council received a briefing on Fleet Mix Analysis Phase 2, but no decisions or recommendations based on this analysis were made. Coast Guard officials stated that the briefing was also given to the Deputy Commandant for Mission Support and the Deputy Commandant for Operations. A senior Coast Guard official who is the point of contact for the Council stated that the Council’s responsibility was to be informed of the matter and that it does not have decision authority.

We also found no discussion of DHS’s Cutter Study—which includes scenarios that could affect the Coast Guard’s surface fleet—through our review of meeting minutes from 2010 and 2011.

While the Executive Oversight Council has had opportunities to discuss affordability of the entire portfolio and make informed trade-off decisions, Coast Guard officials told us that all of these decisions are handled through the annual budget process, which also takes into account budgeting for operating expenses. However, the Coast Guard’s current approach of relying on the budget process to manage the affordability of its portfolio has proven ineffective. The preparation of the annual budget request involves immediate trade-offs, but does not provide the best environment to make decisions to develop a balanced, long-term portfolio. As we have previously reported, given that the Coast Guard is managing more programs than its budget can support, and it does not review its portfolio outside of the annual budget process, the Coast Guard has relied on budget decisions each year to drive the acquisitions process. As a result, program managers react to the budget request each year as opposed to having a reliable funding profile, consistent with their approved baselines, by which to execute their programs. One of the responsibilities in the Executive Oversight Council’s charter is to synchronize projects with planning, programming, budgeting, and execution milestones to align them for successful completion of key milestones, but Coast Guard officials acknowledged that this alignment has not yet occurred.

The Coast Guard has made progress in improving its acquisition management capabilities. Yet the Coast Guard continues to manage a portfolio of acquisitions that lacks up-to-date, DHS-approved baselines to reflect current costs and schedules and that will likely cost significantly more than originally planned.
While its portfolio requires more funding on an annual basis than its expected budget can support, the Coast Guard has not yet fully implemented our recommendation from July 2011 to adopt action items to promote stability in the capital investment plan, ensure program baselines are aligned with the capital investment plan, and establish project priorities as a Coast Guard-wide goal. In the absence of up-to-date program baselines, the Coast Guard makes decisions about which programs to fund and which programs not to fund as part of its annual budget process as opposed to having a stable and meaningful long-term capital investment plan based on identified needs. This puts Congress and the taxpayer in the position of having to commit resources to individual programs without knowing whether they are affordable, or achievable, within the context of the overall portfolio. Furthermore, unplanned demands for additional funds are likely as the Coast Guard begins new acquisition programs. If the Coast Guard continues to make expedient near-term budget decisions without an effective means of portfolio management, there is no way to help ensure that those decisions are optimized and in the best interest of the Coast Guard's acquisition portfolio over the long term. The Coast Guard has made improvements in its process to develop requirements for the Offshore Patrol Cutter in response to concerns about affordability, but has not reassessed the mix of assets in its portfolio for the same purpose. The Coast Guard may not be on track to acquire many of the capabilities identified as necessary after September 11, 2001, while stating that those mission needs are still guiding the ongoing acquisitions. It is unclear, given the Coast Guard's decisions not to pursue some of these capabilities, whether it will obtain a balanced mix of assets and the presence-based operating concept called for in its 2005 Mission Need Statement. Furthermore, the Coast Guard remains committed to purchasing its major cutter program of record even though the two cutters have similar requirements yet very different expected costs. It is too early to know what the Offshore Patrol Cutter will eventually cost, but the current estimate includes some assumptions that may help explain why its estimated cost differs from that of the National Security Cutter. The Coast Guard's initiative to establish an acquisition governance board—the Executive Oversight Council—provides an opportunity for it to strengthen portfolio management practices that we found contribute to the success of commercial companies. For example, given its cross-directorate representation and direct access to complete information on all acquisition programs—with support from the Systems Integration Team and Resource Councils—the Council has the potential to implement key portfolio management practices such as prioritizing investments by integrating the requirements, acquisition, and budget processes. But the Council has not engaged in these portfolio-wide reviews, and instead, the Coast Guard continues to manage its acquisitions through the budget process. Until the Executive Oversight Council begins to use the individual program information it receives to manage its portfolio of acquisitions—including informing strategic trade-off decisions—the Coast Guard will continue to operate in an environment where its needs are not balanced with available resources.
To help the Coast Guard create stability in the acquisition process and provide decision makers, including DHS, the Office of Management and Budget, and Congress, with current information to make decisions about budgets, we recommend that the Commandant of the Coast Guard conduct a comprehensive portfolio review to develop revised baselines that reflect acquisition priorities as well as realistic funding scenarios. To strengthen the Coast Guard's acquisition governance framework and better prepare the Coast Guard in a constrained fiscal environment, we recommend that the Commandant of the Coast Guard identify the Executive Oversight Council as the governing body to oversee the Coast Guard's acquisition enterprise with a portfolio management approach. The Executive Oversight Council should supplement individual program reviews with portfolio-wide reviews to make performance and affordability trade-off decisions that will help ensure the Coast Guard is acquiring a balanced portfolio to meet mission needs, given the Coast Guard is not currently on a path to achieve several capabilities identified in the 2005 Mission Need Statement. We provided a draft of this report to DHS and the Coast Guard for comment. In its written comments, DHS concurred with both recommendations. The written comments are reprinted in appendix III. With respect to the first recommendation, that the Coast Guard conduct a comprehensive portfolio review to develop revised baselines that reflect acquisition priorities as well as realistic funding scenarios, DHS agreed and stated the Coast Guard will conduct a portfolio-wide review following submittal of the next President's budget request. Furthermore, DHS stated that the Coast Guard is committed to ensuring acquisition plans are executable in the current fiscal climate and noted that the Coast Guard is currently revising its acquisition program baselines and several new baselines are in the approval process. However, DHS added that funding has varied considerably over the last several years, making it extraordinarily difficult to predict future budget authority with precision, and, as a result, it is inevitable that trade-off decisions will need to be made on an annual basis. We understand that the budget process is a dynamic environment in which some trade-off decisions may have to be made on an annual basis, but we believe that the Coast Guard should develop revised baselines that reflect acquisition priorities as well as realistic funding scenarios to minimize the magnitude of trade-offs needed each year resulting from the current mismatch between resources needed to support all approved program baselines and expected funding levels. Without such long-term priorities, program managers will likely remain at a disadvantage, having to continuously update baselines to react to the Coast Guard's budget planning rather than working from a stable budget profile that reflects their approved baselines. In concurring with our second recommendation, DHS stated that the Coast Guard will identify the Executive Oversight Council as the governing body to oversee the Coast Guard's acquisition enterprise with a portfolio management approach. The Coast Guard also provided technical comments that were incorporated, as appropriate. We are sending copies of this report to interested congressional committees, the Secretary of Homeland Security, and the Commandant of the Coast Guard. In addition, the report is available at no charge on the GAO website at http://www.gao.gov.
If you or your staff have any questions about this report, please contact me at (202) 512-4841 or huttonj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. In conducting this review, we relied in part on the information and analysis in our past work, including reports completed in 2010, 2011, and 2012. Additional scope and methodology information on each objective of this report follows. To assess the planned cost and schedule of the Coast Guard's major acquisitions portfolio, we reviewed each asset's original acquisition program baseline and revised baseline, if an approved, revised baseline was available. To determine whether these baselines reflected the current status of the program, we reviewed breach notifications and the fiscal year 2013 President's Budget request, and we interviewed officials from program offices. We also reviewed the Coast Guard's Major Systems Acquisition Manual to identify when programs are required to update baselines. In comparing original costs to revised baseline costs, if a revised baseline presents both threshold costs and objective costs, threshold costs were used. For those programs that comprised the former Deepwater program, this methodology allows traceability to the original $24.2 billion Deepwater baseline while also showing how much programs could now cost based upon revised baselines. Furthermore, some programs have reported a cost breach to the revised baseline and costs are expected to increase beyond the threshold values. In making this comparison, for those programs with no planned funding beyond fiscal year 2014, we calculated the estimated total program cost as dollars appropriated to date plus planned funding in the fiscal years 2013-2017 capital investment plan. Further, we analyzed the Coast Guard's fiscal years 2013-2017 expanded capital investment plan to identify the planned annual funding levels for each major acquisition program. We then compared those planned funding levels to the annual funding needs identified in the program's life cycle cost estimate to determine whether there was a match. If an approved life cycle cost estimate was not available, we used the annual funding needs identified by the Coast Guard in the expanded capital investment plan. We also interviewed Coast Guard officials from the acquisitions directorate and resources directorate to discuss future funding plans as well as to discuss the Coast Guard's plans for the National Security Cutter program to determine how those plans could affect other programs. We also interviewed officials from the Department of Homeland Security's (DHS) Program Accountability and Risk Management office and the DHS Office of Policy to discuss their oversight responsibilities for Coast Guard programs. To assess the steps the Coast Guard has recently taken to develop an affordable portfolio through its requirements process, we obtained and analyzed Fleet Mix Analysis Phase One, Fleet Mix Analysis Phase Two, and the DHS Cutter Study. We also relied on our past work that reviewed Coast Guard appropriations from fiscal years 2008 through 2012 and the President's budget request for fiscal year 2013 to analyze how fiscal assumptions in the studies compared with past appropriations. Further, we examined the 2005 Mission Need Statement to determine the extent to which the capabilities being acquired matched the needs set forth in this plan.
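To illustrate the funding-comparison step described in this methodology, the following minimal sketch compares planned annual funding levels against annual needs from a life cycle cost estimate. All figures and names are hypothetical; this is our illustration, not the Coast Guard's analysis.

```python
# A hypothetical sketch of the funding-match comparison described above:
# planned annual funding from a capital investment plan versus the annual
# funding needs in a program's life cycle cost estimate. Figures are invented.

planned_funding = {2013: 100, 2014: 120, 2015: 90, 2016: 80, 2017: 80}    # $ millions
life_cycle_needs = {2013: 110, 2014: 120, 2015: 120, 2016: 95, 2017: 80}  # $ millions

for year in sorted(planned_funding):
    gap = planned_funding[year] - life_cycle_needs[year]
    status = "match" if gap >= 0 else f"shortfall of ${-gap} million"
    print(year, status)
# 2013 shortfall of $10 million
# 2014 match
# 2015 shortfall of $30 million
# 2016 shortfall of $15 million
# 2017 match
```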
In doing so, we traced 11 system performance capabilities identified in the 2005 Mission Need Statement through various program documents, including the 2007 Deepwater acquisition program baseline, operational requirements documents, and testing documents to identify which capabilities the Coast Guard is currently planning to acquire. In addition to reviewing fleetwide requirements, we also reviewed the requirements development process for the National Security Cutter and the Offshore Patrol Cutter. We focused on these two assets as they are the two largest cost drivers in the Coast Guard's major acquisition portfolio. To examine the Offshore Patrol Cutter's requirements development process, we reviewed the Coast Guard's Major Systems Acquisition Manual and Requirements Guidance and interviewed officials in the capabilities directorate to discuss the process and to identify key documents and studies that guided this process. We also compared the National Security Cutter's and Offshore Patrol Cutter's missions, requirements, and costs to determine similarities and differences. We used Coast Guard budget documentation to determine the cost of the fifth National Security Cutter and then used the Offshore Patrol Cutter's life cycle cost estimate, which identified the average cost of the fourth and fifth Offshore Patrol Cutters. We discussed the comparison between the National Security Cutter and Offshore Patrol Cutter with DHS and Coast Guard officials. To assess the extent to which the Coast Guard is using cross-directorate teams to provide oversight and inform acquisition decisions, we interviewed officials from the acquisition and resource directorates to identify what teams the Coast Guard has established as part of an acquisition governance framework. We also reviewed the charters for each of those teams. We then collected and analyzed meeting minutes and briefing presentations for the Executive Oversight Council and Resource Councils from calendar years 2010-2011, but we did not do the same for the Systems Integration Team because it was just forming during this time period. We also reviewed the acquisition directorate's Blueprint to identify what action items had been tasked to these teams. We interviewed senior representatives from the Executive Oversight Council, the Systems Integration Team, and the chairs of the Aviation, Cutter, and C4ISR Resource Councils to understand their specific roles and responsibilities for managing acquisition programs and informing recapitalization decisions. We also interviewed stakeholders from the acquisitions and resources directorates to gather their understanding of the roles of the Executive Oversight Council, Systems Integration Team, and Resource Councils, and the nature and extent of their interaction with these groups. Furthermore, we referred to previous GAO work on best practices for portfolio management to identify the extent to which the Coast Guard's framework implements this management approach. To support our review, we requested information and documents pertaining to the current cost estimates and schedules for each asset in the Coast Guard's major acquisitions portfolio, a copy of the DHS-directed briefing in which the Coast Guard was to develop a plan for showing program tradeoffs, and several sets of Executive Oversight Council meeting minutes.
The Coast Guard did not provide us with current cost estimates and schedules, the complete DHS-directed briefing, or all sets of meeting minutes because officials stated these documents included budget negotiation information. We conducted this performance audit from November 2011 to September 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

The following list identifies several key practices that can improve outcomes when managing a portfolio of multiple programs.

Those responsible for product investment decisions and oversight should be clearly identified and held accountable for outcomes.
Portfolio managers should be empowered to make decisions about the best way to invest resources.
Portfolio managers should be supported with cross-functional teams composed of representatives from key functional areas.
Specific criteria should be used to ensure transparency and comparability across alternatives.
Investments should be ranked and selected using a disciplined process to assess the costs, benefits, and risks of alternative products.
Knowledge should encompass the entire portfolio, including needs, gaps, and how to best meet the gaps.
Requirements, acquisition, and budget processes should be connected to promote stability and accountability.
Organizations should use an integrated approach to prioritize needs and allocate resources, so they can avoid pursuing more products than they can afford, and optimize return on investment.
Resource allocation across the portfolio should align with strategic goals/objectives, and investment review policy should use long-range planning.
Program requirements should be reviewed annually to make recommendations on proposed changes/descoping options.
As potential new products are identified, portfolios should be rebalanced based on those that add the most value.
If project estimates breach established thresholds, the product should be immediately reassessed within the context of the portfolio to determine whether it is still relevant and affordable.
Agencies should use information gathered from post-implementation reviews of investments, as well as information learned from other organizations, to fine-tune the investment process and the portfolios to shape strategic outcomes.

Defense Acquisitions: Assessments of Selected Weapon Programs, GAO-10-388SP (Washington, D.C.: March 30, 2010).
Department of Homeland Security: Billions Invested in Major Programs Lack Appropriate Oversight, GAO-09-29 (Washington, D.C.: November 18, 2008).
Best Practices: An Integrated Portfolio Management Approach to Weapon System Investments Could Improve DOD's Acquisition Outcomes, GAO-07-388 (Washington, D.C.: March 30, 2007).
Information Technology Investment Management: A Framework for Assessing and Improving Process Maturity, GAO-04-394G (Washington, D.C.: March 2004).
Executive Guide: Leading Practices in Capital Decision-Making, GAO/AIMD-99-32 (Washington, D.C.: December 1998).

John P. Hutton, (202) 512-4841 or huttonj@gao.gov.
In addition to the contact named above, individuals making key contributions to this report include Katherine Trimble, Assistant Director; Molly Traci; Jose Cardenas; Mya Dinh; Laurier Fish; Laura Greifner; Kristine Hassinger; and Andrea Yohe.

Coast Guard: Legacy Vessels' Declining Conditions Reinforce Need for More Realistic Operational Targets. GAO-12-741. Washington, D.C.: July 31, 2012.
Observations on the Coast Guard's and the Department of Homeland Security's Fleet Studies. GAO-12-751R. Washington, D.C.: May 31, 2012.
Coast Guard: Observations on Arctic Requirements, Icebreakers, and Coordination with Stakeholders. GAO-12-254T. Washington, D.C.: December 1, 2011.
Coast Guard: Action Needed as Approved Deepwater Program Remains Unachievable. GAO-12-101T. Washington, D.C.: October 4, 2011.
Coast Guard: Action Needed as Approved Deepwater Program Remains Unachievable. GAO-11-743. Washington, D.C.: July 28, 2011.
Coast Guard: Observations on Acquisition Management and Efforts to Reassess the Deepwater Program. GAO-11-535T. Washington, D.C.: April 13, 2011.
Coast Guard: Opportunities Exist to Further Improve Acquisition Management Capabilities. GAO-11-480. Washington, D.C.: April 13, 2011.
Coast Guard: Deepwater Requirements, Quantities, and Cost Require Revalidation to Reflect Knowledge Gained. GAO-10-790. Washington, D.C.: July 27, 2010.
Department of Homeland Security: Assessments of Selected Complex Acquisitions. GAO-10-588SP. Washington, D.C.: June 30, 2010.
Coast Guard: Observations on the Requested Fiscal Year 2011 Budget, Past Performance, and Current Challenges. GAO-10-411T. Washington, D.C.: February 25, 2010.
Coast Guard: Better Logistics Planning Needed to Aid Operational Decisions Related to the Deployment of the National Security Cutter and Its Support Assets. GAO-09-497. Washington, D.C.: July 17, 2009.
Coast Guard: As Deepwater Systems Integrator, Coast Guard Is Reassessing Costs and Capabilities but Lags in Applying Its Disciplined Acquisition Approach. GAO-09-682. Washington, D.C.: July 14, 2009.
Coast Guard: Observations on Changes to Management and Oversight of the Deepwater Program. GAO-09-462T. Washington, D.C.: March 24, 2009.
Coast Guard: Change in Course Improves Deepwater Management and Oversight, but Outcome Still Uncertain. GAO-08-745. Washington, D.C.: June 24, 2008.
Coast Guard: Strategies for Mitigating the Loss of Patrol Boats Are Achieving Results in the Near Term, but They Come at a Cost and Longer Term Sustainability Is Unknown. GAO-08-660. Washington, D.C.: June 23, 2008.
Status of Selected Assets of the Coast Guard's Deepwater Program. GAO-08-270R. Washington, D.C.: March 11, 2008.
The Coast Guard is in the process of acquiring a multi-billion dollar portfolio of systems intended to conduct missions that range from marine safety to defense readiness. GAO has reported extensively on the Coast Guard's significant acquisition challenges, including those of its former Deepwater program, as well as areas in which it has strengthened its acquisition management capabilities. For this report, GAO assessed (1) the planned cost and schedule of the Coast Guard's portfolio of major acquisitions; (2) the steps the Coast Guard has recently taken to develop an affordable portfolio through its requirements process; and (3) the extent to which the Coast Guard is using cross-directorate teams to provide oversight and inform acquisition decisions. To conduct this work, GAO reviewed the Coast Guard's Major Systems Acquisition Manual, acquisition program baselines, capital investment plans, fleet mix analyses, and cross-directorate teams' charters and meeting documentation, and interviewed relevant Coast Guard and DHS officials. The planned cost and schedule of the Coast Guard's portfolio of major acquisitions is unknown because of outdated acquisition program baselines and uncertainty surrounding affordability. The Coast Guard's approved baselines, which reflect cost and schedule estimates, indicate the estimated total acquisition cost of Coast Guard major acquisitions could be as much as $35.3 billion--an increase of approximately 41 percent over the original baselines. However, the approved baselines for 10 of 16 programs do not reflect current cost and schedule plans because programs have breached the cost or schedule estimates in those baselines, changed in scope, or do not expect to receive funding to execute baselines as planned. Furthermore, a continued mismatch between resources needed to support all approved baselines and expected funding levels has required the Coast Guard to make decisions about which programs to fund and which programs not to fund as part of its annual budget process. Both DHS and the Coast Guard have acknowledged this resource challenge, but efforts to address this challenge have not yet resulted in a clear strategy for moving forward. The Coast Guard has taken steps through its requirements process--a process that takes mission needs and converts them to specific capabilities--to address affordability, but additional efforts are required. For example, in an effort to consider affordability, the Coast Guard made some capability trade-offs when developing requirements for its largest acquisition, the Offshore Patrol Cutter. But whether the cutter ultimately will be affordable depends on some key assumptions in the cost estimate that are subject to change. At the fleet level, the Coast Guard completed two efforts to reassess what mix of assets it requires to meet mission needs, but neither effort used realistic fiscal constraints or considered reducing the number of assets being pursued. The mix of assets the Coast Guard is acquiring is based upon needs identified in 2005, but the Coast Guard may not be on a path to meet these needs and it has not re-examined the portfolio in light of affordability. The Coast Guard has established an acquisition governance framework that includes the following cross-directorate teams: the Executive Oversight Council, the Systems Integration Team, and Resource Councils. 
The Executive Oversight Council--composed of admirals and senior executives--is well-positioned to delegate tasks to the other teams or obtain information as needed to assist in managing acquisitions. This Council has been active in meeting to discuss individual acquisitions; however, it has not met to discuss the portfolio as a whole. Coast Guard officials told us it manages portfolio affordability through the budget process. GAO's best practices work has found that successful commercial companies assess product investments collectively from an enterprise level, rather than as independent and unrelated initiatives. The Coast Guard's current approach of relying on the annual budget process to manage portfolio affordability involves immediate trade-offs but does not provide the best environment to make decisions to develop a balanced long-term portfolio. GAO recommends that the Commandant of the Coast Guard conduct a comprehensive portfolio review to develop revised acquisition program baselines and identify the Executive Oversight Council as the governing body to oversee acquisitions with a portfolio management approach to help ensure the Coast Guard acquires a balanced mix of assets. DHS concurred with both recommendations and noted planned actions to address the recommendations.
As shown in figure 1, INS has benefited from significant increases in its regular appropriations and appropriations from its fee accounts. Funding increases have continued in fiscal year 1999, with Congress providing over $3.9 billion. When funding from the Working Capital Fund, carryover balances, and certain reimbursements are added to this figure, INS' operating budget totals approximately $4.0 billion for fiscal year 1999. INS divides its operating budget into four categories of spending: (1) mandatory expenses, e.g., rent; (2) personal salaries and benefits; (3) set-asides, such as employee relocations, vehicle acquisitions, and background investigations; and (4) discretionary funding. For purposes of this review, the first three categories can be grouped together as expenses that either have first claim on a budget because they must be paid or are considered integral to an agency's operations. Although many of these expenses directly benefit field operations, most are centrally funded at headquarters. The last category—discretionary funding—funds personnel costs for other-than-permanent employees; discretionary overtime; travel; cash awards; some types of procurements; and day-to-day operating expenses, such as equipment maintenance and lease of copiers. Table 1 shows data provided by INS on its end-of-year allocation for fiscal year 1998 compared with its current allocation for fiscal year 1999, by spending categories. To determine (1) INS' overall fiscal condition, and (2) how factors such as overhiring and a decline in Examinations Fee applications have affected INS' fiscal situation, we interviewed officials in INS' Offices of Budget, Personnel, Facilities, and Field Operations. To get additional perspectives on INS' funding status, we interviewed officials in DOJ's Justice Management Division and OMB's Justice and General Services Administration Branch. We reviewed INS budget documents prepared for fiscal year 1999 that were submitted to the Justice Department, OMB, and Congress, as well as those prepared for internal use, to document and analyze changes in funding. In addition, INS provided memorandums and briefing documents relevant to our work and additional supporting material prepared specifically for our review. Our work was performed in Washington, D.C., during February and March 1999, in accordance with generally accepted government auditing standards. Since 1996, INS has been making a concentrated effort to fill both its existing vacancies and many new positions authorized by Congress each year. However, throughout this period, attrition of staff already on board and reported difficulties in hiring new staff have impeded INS from filling many positions. In an attempt to remedy this situation, INS allowed field offices to hire 4 percent more than their number of funded positions during fiscal year 1998. As discussed below, this policy, combined with other fiscal pressures, resulted in most INS programs having less discretionary funding in fiscal year 1999 than in fiscal year 1998. Between the end of fiscal years 1995 and 1998, INS' on-board staff increased from 18,823 to 27,941. INS anticipates adding another 3,000 staff by the end of fiscal year 1999. However, according to INS officials, throughout this period, the number of staff on board generally lagged behind authorized levels.
INS officials attribute the lag to (1) significant new authority to hire provided by Congress each year, (2) high rates of attrition of on-board staff throughout the year, and (3) difficulty in recruiting and retaining a group of qualified candidates from outside of INS to fill vacancies as they arise. Since 1996, INS has taken several steps to overcome these difficulties. First, to ensure that its workforce would expand rather than shift internally, INS directed field staff to hire for only entry-level positions. Second, INS allowed field managers to select a larger pool of candidates to consider for employment than they were authorized to hire because it was anticipated that a number of candidates would (1) not make it through the pre-appointment process, or (2) no longer be available by the time INS could make an offer of employment. Third, with approval, field managers were permitted to hire 2 percent more than their number of funded positions. The over-hiring was supposed to occur in field offices where attrition or new hiring authority was anticipated. The over-hired positions were supposed to be used to fill vacancies as soon as they occurred so that field office hiring would not exceed funded levels for the year. At the start of fiscal year 1998, regional directors requested, and the Commissioner approved, an increase in the over-hire authority to 4 percent. During fiscal year 1998, the number of INS staff on board increased from 86 percent to nearly 97 percent of INS' funded level. The large amount of fiscal year 1998 hiring created fiscal stress for the agency by increasing certain payroll costs beyond budgeted levels. According to INS officials, beginning in fiscal year 1998, there was a rapid acceleration in the on-board rate of Border Patrol agents, Investigators, and Detention and Deportation officers. These positions were over-hired for substantial periods during fiscal year 1998. This created a funding problem because INS allocated personal services and benefits (PS&B) for funded positions only--not over-hired ones. As of May 1998, INS projected that the PS&B portion of one of its accounts—Salaries and Expenses—would have a deficit of $16.1 million by the end of the fiscal year. The Border Patrol program accounted for most of the projected deficit. The nine other accounts that also provide funding for PS&B were projected to have surpluses or negligible deficits. INS officials attributed the deficit in part to previous and projected over-hiring by field offices. INS officials told us that some field offices would over-hire but then not use the over-hired positions to fill their vacancies. In some cases, they said this occurred because there was a mismatch between the positions that had been over-hired and the vacancies that occurred. They said another reason for the deficit was a miscoding of $2.5 million in obligations for newly hired personnel to the Salaries and Expenses account instead of the Violent Crime Reduction Trust Fund (VCRTF) account. In response to the anticipated deficit, in May 1998, the Office of Budget issued guidance to executive staff. The guidance said the over-hire policy was not intended to permit field offices to remain up to 4 percent over the authorized number of positions for extended periods of time.
The guidance listed four actions to be taken: (1) correct miscoding of new hires from the Salaries and Expenses account to the VCRTF account; (2) ensure all new hires are coded to the correct account; (3) manage subsequent hiring to resolve over-hiring of officer positions; and (4) have the Office of Budget redirect $6.5 million to cover the remainder of the anticipated year-end PS&B deficit. The guidance warned that if hiring continued to exceed authorized levels, discretionary funds would have to be used to cover the projected deficit in PS&B funds. However, as of August 1998, the projected deficit of PS&B funds in the Salaries and Expenses account had increased to $20 million. To respond to this situation, according to budget officials, field staff were directed to reduce staff on board to funded levels. At the end of fiscal year 1998, however, certain enforcement positions were still over-hired. According to an INS budget official, the over-hired positions accounted for about $12 million in PS&B deficits. Approximately 50 percent of that amount was covered by unobligated discretionary funds that were reallocated by INS regions to PS&B. In the past, according to INS and Justice Department officials, PS&B funding that was not used to pay personnel costs was reallocated to help fund other spending. To successfully implement the policy of hiring up to funded levels during fiscal year 1998, INS had to commit a larger share of its budget to pay for personnel costs. This meant that a smaller share of funds would be available to address other needs. For example, to pay an $80 million settlement with the Investigation Union, INS has been paying annual $10 million installments from lapsed Investigations PS&B funds. As a result of the increased hiring in fiscal year 1998, the Investigations program reportedly did not have sufficient lapsed dollars to fund the $10 million installment. Consequently, the Office of Budget set aside $10 million of Investigations funding at the beginning of fiscal year 1999 to pay the current year installment. This meant that the Investigations program received substantially fewer dollars for discretionary spending. To illustrate the impact of hiring up to funded levels on INS' budget: had INS remained at the 86 percent on-board level that existed at the beginning of fiscal year 1998, about $250 million in PS&B funds would have been available to spend on other needs. But INS finished fiscal year 1998 with nearly 97 percent of its funded positions filled. If INS remains at the 97 percent on-board level throughout fiscal year 1999, it would have $60 million in PS&B funds after meeting payroll costs, or $190 million less than would be available at an 86 percent on-board level. After meeting payroll expenses, mandatory costs, and other expenses set aside for centrally funded items that support service-wide needs, INS currently has about $71.8 million more in discretionary funds, overall, than it had in fiscal year 1998. Within INS, the Office of Field Operations, which distributes funding to field offices, had more discretionary funds than it had in fiscal year 1998, while all other headquarters offices received less discretionary funds. Although, overall, the Office of Field Operations received more discretionary funds than in fiscal year 1998, some programs within the Office of Field Operations received less. Table 2 provides a breakdown of how the 11 programs under the Office of Field Operations fared.
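The on-board-level arithmetic above can be checked in a few lines. This is a minimal sketch using the report's rounded figures; the variable names are ours, and this is not INS's budget model.

```python
# Checking the PS&B arithmetic reported above (rounded figures from the report).

psb_left_over_at_86_pct = 250_000_000  # PS&B available for other needs at 86% on board
psb_left_over_at_97_pct = 60_000_000   # PS&B available for other needs at 97% on board

reduction = psb_left_over_at_86_pct - psb_left_over_at_97_pct
print(f"Funds no longer available for other needs: ${reduction:,}")
# Funds no longer available for other needs: $190,000,000
```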
Initially, in December 1998, when the Office of Budget communicated to the Office of Field Operations how much it would have available in discretionary funds, the total amount appeared to be $199 million less than was allocated in fiscal year 1998. However, according to INS officials, this amount did not yet include $270.7 million that was available from Examinations Fee funds, Salaries and Expenses funds for Adjudications and Naturalization program initiatives, and Working Capital funds. The amounts available from these funds had not yet been allocated because detailed spending plans needed to be developed first. Including the $270.7 million, the Office of Field Operations would have had $71.7 million more in discretionary funds, overall. In January 1999, the $270.7 million was allocated and, following feedback from the field about the inadequacy of the funds initially communicated, headquarters executive staff redirected $47.7 million to field operations for discretionary funds. These actions resulted in a total allocation that was $120.6 million more than in fiscal year 1998. Five of the 11 programs under the Office of Field Operations had less discretionary funding than in fiscal year 1998. According to a DOJ official, these problems were not communicated to Congress until January 22, 1999. According to INS Office of Budget officials, the potentially difficult fiscal situation for fiscal year 1999 was conveyed internally at meetings with (1) resource management staff in July 1998, (2) executive staff and regional directors in August 1998 during the third quarterly financial review, and (3) INS managers in October 1998 at the annual Commissioner's conference. However, initial budget allocations were not made until December 11, 1998, nearly the end of the first quarter of fiscal year 1999. According to budget officials, the allocations were made in December because of the complicated nature of the appropriation. Office of Field Operations officials said they were surprised by the magnitude of the reductions in discretionary funds. INS continues to pursue the goal of hiring to its authorized level. However, as of January 6, 1999, the Executive Associate Commissioner for Field Operations cancelled the over-hire authority for all programs except, in certain circumstances, those funded by the Examinations Fee Account. In formulating its fiscal year 1999 budget, INS projected in November 1997 that it would receive 6.9 million Adjudications and Naturalization applications, and that these would produce $862 million in revenues for its Examinations Fee account. In July 1998, INS was projecting 5.6 million applications and $560 million in revenues for this account. INS overestimated the number of applications—in particular, the number of naturalization applications—that would be submitted to INS, and because of computer problems, it was not able to detect the downturn in applications in a timely fashion. In August 1998, DOJ submitted a reprogramming request for $171 million, of which $88 million was to help cover the decline in Examinations Fee revenues. None of the sources INS used to develop its application projections anticipated the decline in applications and revenues that occurred in fiscal year 1998.
A specific type of naturalization application, referred to as N-400 by INS, made up the single largest component, both in terms of the number of applications (estimated to be 21 percent in fiscal year 1999) and revenues generated (estimated to be 39 percent in fiscal year 1999), of the Examinations Fee account. INS projected in November 1997 that in fiscal year 1999, it would receive nearly 1.5 million N-400 applications, and that these would produce approximately $334 million in Examinations Fee revenue. In June 1998, INS lowered its fiscal year 1999 projections to 700,000 applications and $127 million in revenue. INS officials have developed some hypotheses, including the following, to explain the unanticipated drop in applications: Based on contacts with several community-based organizations (CBOs), INS believed that CBOs were stockpiling naturalization applications in an effort to help eligible aliens meet a January 1998 deadline for filing certain types of adjustment of status applications. INS officials expected that naturalization applications would surge after the deadline. However, it turned out that CBOs were not stockpiling naturalization applications, and the expected surge did not occur. Legislative changes restored some benefits for aliens, reportedly causing a reduction in the demand for naturalization. Naturalization applications from among the 2.7 million aliens who were granted amnesty by the Immigration Reform and Control Act of 1986 have peaked. However, evidence of this did not become clear until well into fiscal year 1998. INS had a large backlog of N-400 applications, perhaps creating a disincentive for applicants to apply for naturalization. INS did not have timely information to determine that the number of N-400 applications had begun to decline. The key reason for this was that computer programming errors were not detected and resolved for an 8-month period in fiscal year 1998. During this period, INS did not know how many N-400 applications were received. In December 1997, INS tried to change its naturalization case processing and tracking system, the Redesigned Naturalization Application Casework System (RNACS), to show the date that naturalization applications were received at INS, not the date that they began to be processed by INS adjudicators. However, when INS began to use RNACS with the application receipt date incorporated into it, the system only recognized those applications that were received and processed in the same month. If the application was received in one month and processed in another month, the end-of-month summary report produced by INS' Office of Information Resources Management did not capture the information on date of receipt. INS headquarters officials were reportedly skeptical of the low naturalization numbers derived from RNACS. However, it took several months for INS officials to determine that there was a problem with RNACS because (1) it generally takes 5 to 6 weeks for INS field offices to generate statistical information for headquarters, which, in turn, is compiled and reported by headquarters' Office of Statistics, and (2) INS headquarters officials were not certain whether the unexpectedly low numbers of naturalization applications represented real behavior or a reporting error. It then took several months to correct the computer problem and generate new reports.
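The RNACS reporting flaw described above lends itself to a short illustration. The sketch below is hypothetical (it assumes a monthly summary that matched records on both receipt month and processing month, which is how the report characterizes the error) and is not based on INS's actual code or data.

```python
# Hypothetical illustration of the RNACS flaw: applications received in one
# month but processed in a later month dropped out of the receipt counts.

applications = [
    {"id": 1, "received": "1998-01", "processed": "1998-01"},
    {"id": 2, "received": "1998-01", "processed": "1998-03"},  # lost by the flawed report
    {"id": 3, "received": "1998-02", "processed": "1998-02"},
]

def flawed_monthly_receipts(apps, month):
    # Flawed logic: counts only applications both received and processed in `month`.
    return sum(1 for a in apps if a["received"] == month and a["processed"] == month)

def corrected_monthly_receipts(apps, month):
    # Corrected logic: counts every application received in `month`,
    # regardless of when it was processed.
    return sum(1 for a in apps if a["received"] == month)

print(flawed_monthly_receipts(applications, "1998-01"))     # 1 (undercount)
print(corrected_monthly_receipts(applications, "1998-01"))  # 2
```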
As a result, between October 1997 and May 1998, INS' Examinations Fee Working Group did not have reliable data on which to base revised estimates of N-400 applications for fiscal years 1998 and 1999. We also examined whether and why INS' rental payment to the General Services Administration (GSA) for fiscal year 1999 may exceed INS' amount of appropriation identified for rent. We found that INS' rental payment is expected to exceed the amount appropriated by $13.2 million. For several reasons, Justice officials said, it is difficult to accurately project rent costs, and the shortfall in INS' funds for rent is not inconsistent with what it has incurred in prior years. As of March 1999, the anticipated GSA rental payment for INS for the current fiscal year is $160.1 million. This is $9.9 million above what was requested in the President's Budget for rent and $13.2 million higher than the $146.9 million appropriated by Congress. Arriving at an accurate projection of rental payments is difficult for INS and other Justice components, according to Justice officials. INS' GSA rental payment exceeded its appropriation by $15 million in fiscal year 1998, $9 million in fiscal year 1997, and $5 million in fiscal year 1996. According to INS and Justice officials, year-to-year fluctuations in the accuracy of rent estimates could be caused by such factors as (1) the actual GSA rental payment for fiscal year 1999 being higher than that anticipated by INS at the time that it formulates its budget; (2) changes in INS programs after the end of the budget cycle (e.g., information on new projects requiring space becomes available after the budget cycle has ended); and (3) the difficulty of projecting requirements in an environment of high growth, such as that experienced by INS in recent years.
Pursuant to a congressional request, GAO discussed the fiscal year (FY) 2000 budget request for the Immigration and Naturalization Service (INS), focusing on: (1) INS' overall fiscal condition in FY 1999; and (2) how factors such as overhiring and a decline in Examinations Fee applications have affected INS' fiscal situation. GAO noted that: (1) after discussions with officials in INS, the Department of Justice, and the Office of Management and Budget, and based on GAO's analysis of INS budget documents, GAO concluded that INS is not experiencing an overall budget shortfall at this time; (2) the hiring policy that INS followed in FY 1998 in an attempt to meet congressional and administrative expectations resulted in INS having to commit a greater share of its FY 1999 budget to salaries and benefits than in prior years; (3) overall, however, INS has more discretionary funds than it had in FY 1998; (4) with respect to the Examinations Fee account, INS overestimated the number of applications it would receive and did not detect the consequent revenue shortfall for months because of computer programming errors; (5) when it became apparent that the anticipated revenues would not be realized, INS decided to seek reprogramming of funds from other accounts to cover the costs; (6) the overhiring and reduced Examinations Fee revenues contributed to most INS programs having less discretionary funding in FY 1999 than in FY 1998; and (7) although INS has not experienced an overall budget shortfall, the combination of higher personnel costs, declining Examinations Fee revenues, and the resultant need to reduce discretionary funding allocations to most programs has created fiscal stress for the agency.
One of the five government-wide initiatives in the 2001 President's Management Agenda (PMA) was improved financial management, which targeted improper payments as an area with opportunities for improvement. This initiative called for the administration to establish a baseline on the extent of improper payments. In July 2001, as part of its efforts to advance the PMA initiative, OMB revised Circular No. A-11 by requiring 16 federal agencies to submit data on improper payments, including estimated improper payment rates. Section 831 of the National Defense Authorization Act for Fiscal Year 2002 included the provisions commonly referred to as the Recovery Auditing Act (RAA). The RAA required, among other things, that all executive branch agencies entering into contracts with a total value exceeding $500 million in a fiscal year have cost-effective programs for identifying errors in paying contractors and for recovering amounts erroneously paid. Fiscal year 2011 marked the eighth year of the implementation of IPIA as well as the first year of implementation of IPERA. IPIA required executive agencies to (1) identify programs and activities susceptible to improper payments (typically referred to as risk assessments), (2) estimate the amount of improper payments in susceptible programs and activities, and (3) report these improper payment estimates and actions taken to reduce them. Among other things, IPERA amended IPIA by changing the definition of programs susceptible to significant improper payments and adding minimum risk factors for agencies to consider in identifying such programs. In addition, IPERA generally repealed the RAA and included a new, broader requirement for agencies to conduct recovery audits, where cost effective, for each program and activity with at least $1 million in annual program outlays. This IPERA provision significantly lowered the threshold for required recovery audits and expanded the scope for recovery audits to all programs and activities, including grant and loan programs. IPERA also added new accountability provisions. For example, in its improper payments reporting, an agency is to describe how it ensures that agency managers, programs, and states and localities (where applicable) are held accountable for meeting applicable improper payment reduction targets as well as establishing and maintaining sufficient internal controls to prevent improper payments and promptly detect and recover those improper payments that are made. The following sections describe key provisions of IPIA, the RAA, and IPERA. Under IPIA, executive agencies were required to annually review all programs and activities that they administer and identify any that may be susceptible to significant improper payments. OMB, in its 2006 guidance, interpreted this IPIA requirement for annual risk assessments to apply to only those programs and activities where the risk level was unknown. For those programs deemed not risk susceptible, risk assessments were required every 3 years. In addition, the guidance defined "significant improper payments"—the threshold at which agencies must perform an estimate for a program—as annual improper payments in the program exceeding both 2.5 percent of program payments and $10 million. IPERA changed several of the requirements associated with risk assessments.
Specifically, IPERA did the following: Amended IPIA to require agency heads to review agency programs and activities during the year following IPERA’s enactment and at least once every 3 fiscal years thereafter to identify those that may be susceptible to significant improper payments. Defined “significant” in the law for the purpose of determining a program’s susceptibility to significant improper payments. IPERA defined “significant improper payments” as gross annual improper payments (i.e., the total amount of overpayments plus underpayments) in the program that may have exceeded either (1) both 2.5 percent of program outlays and $10 million, or (2) $100 million (regardless of the improper payment percentage of total program outlays). Included the following minimum risk factors likely to contribute to a susceptibility to significant improper payments that agencies are to consider in performing risk assessments: (1) whether a program or activity is new to the agency; (2) the complexity of the program; (3) the volume of payments made through the program or activity; (4) whether payment decisions are made outside of the agency, such as by a state or local government; (5) recent major changes in program funding, authorities, practices, or procedures; (6) the level, experience, and quality of training for personnel responsible for making eligibility determinations or certifying that payments are accurate; and (7) significant deficiencies in the audit report of the agency or other relevant management findings that might hinder accurate payment certification. Under IPIA, for each program or activity identified as susceptible to significant improper payments, the head of each agency was to (1) estimate the annual amount of improper payments and (2) submit those estimates to Congress before March 31 of the following applicable year, with all agencies using the same method of reporting, as determined by the Director of OMB. IPERA revised the IPIA requirements for estimating improper payments by directing agency heads to produce statistically valid estimates of their agencies’ improper payments, or an estimate that is otherwise appropriate using a methodology approved by the Director of OMB, and to include the annual improper payment estimates in their performance and accountability reports or in the agency financial reports. Under IPIA, annual reporting requirements for reducing improper payments included a description of the steps the agency has taken to ensure that agency managers (including the agency head) are held accountable for reducing improper payments. IPERA amended IPIA’s requirements for reporting on corrective actions. Specifically, it required that agencies’ annual reporting include the estimated completion dates of planned corrective actions. Also, it required agencies to report on steps taken to ensure that states and localities, where applicable, are held accountable for reducing improper payments in the federal programs they implement. OMB’s implementing guidance for IPERA also states that for those agency programs not implemented directly by federal or state agencies or governments, agencies may also consider establishing these accountability mechanisms. OMB encouraged agencies to leverage new technologies and techniques to assist them in preventing and reducing improper payments. 
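The IPERA definition of "significant improper payments" given earlier in this section amounts to a two-pronged test. The following minimal sketch encodes that test; the function name and example figures are ours, not OMB's.

```python
# A minimal sketch of the IPERA "significant improper payments" test described
# above: a program qualifies if gross annual improper payments (overpayments
# plus underpayments) may have exceeded either (1) both 2.5 percent of program
# outlays and $10 million, or (2) $100 million regardless of rate.

def is_significant(gross_improper: float, outlays: float) -> bool:
    exceeds_rate_and_floor = gross_improper > 0.025 * outlays and gross_improper > 10e6
    exceeds_absolute_cap = gross_improper > 100e6
    return exceeds_rate_and_floor or exceeds_absolute_cap

print(is_significant(12e6, 400e6))   # True: 3.0% of outlays and above $10 million
print(is_significant(12e6, 600e6))   # False: only 2.0% of outlays and below $100 million
print(is_significant(120e6, 50e9))   # True: above $100 million despite a 0.24% rate
```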
Agencies implementing long-term, ongoing corrective actions should annually review their existing corrective actions to determine if any existing action can be intensified or expanded, resulting in high impact and a high return on investment in terms of reduced or prevented improper payments. Recovery audits were not required under IPIA, but were required under the RAA. Specifically, agencies were required to carry out a cost-effective program of recovery audits to identify and recover improper payments to contractors, if they entered into contracts with a total value that exceeded $500 million in a fiscal year. IPERA generally repealed the RAA, expanded the scope for recovery audits beyond commercial payments to include all programs and activities, and lowered the threshold of annual outlays requiring agencies to conduct recovery audits—from $500 million in annual agency contracting to $1 million in annual program expenditures. Specifically, under the recovery auditing provisions of IPERA, agencies are required to identify and recover improper payments by conducting recovery audits, also known as payment recapture audits, for agency programs that expend $1 million or more annually, if such audits would be cost effective. In its November 2010 guidance, OMB required agencies to submit payment recapture audit plans by January 14, 2011, that described the agencies' current payment recapture efforts under authorities that predated IPERA and their planned payment recapture efforts based on the new authorities provided by IPERA. IPIA required, with respect to any program or activity of an agency with estimated improper payments that exceeded $10 million, the head of the agency to provide along with the estimate a report on what actions the agency was taking to reduce the improper payments, including a discussion of the causes of the improper payments identified, actions taken to correct those causes, and results of the actions taken to address those causes; a statement of whether the agency had the information systems and other infrastructure it needed in order to reduce improper payments to minimal cost-effective levels; a description of the resources the agency had requested in its budget submission to obtain the necessary information systems and infrastructure if the agency did not have such systems and infrastructure; and a description of the steps the agency had taken to ensure that agency managers (including the agency head) were held accountable for reducing improper payments. For RAA, OMB guidance required that agencies include in their annual reporting, among other things, a general description and evaluation of the steps taken to carry out a recovery auditing program, the total amount of contracts subject to review, the actual amount of contracts reviewed, the amounts identified for recovery, and the amounts actually recovered in a current year. Further, OMB Circular No. A-136 required agencies to report cumulative amounts identified for recovery and amounts actually recovered as a part of their current year reporting. IPERA requires the reporting of estimates without regard to thresholds. IPERA and OMB guidance require agencies to report, as part of their agency financial reports, certain information regarding the improper payment estimation process and efforts to recover improper payments.
These requirements include, among other things, gross estimates of the annual amount of improper payments (i.e., overpayments plus underpayments) made in the program and a description of the methodology used to derive those estimates; a discussion of the root causes of the improper payments identified, actions planned or taken to correct those causes, the planned or actual completion date of those actions, and the results of the actions taken; and a discussion of the amount of actual improper payments that the agency expects to recover and how these payments will be recovered. According to OMB's recovery auditing guidance under IPERA, agencies must continue to report information on improper contract payments reviewed, identified, and recaptured, according to instructions contained in OMB Circulars No. A-123 and A-136. In addition, agencies shall report information on other types of recaptured improper contract payments. For instance, where applicable, agencies shall also identify and report information on improper contract payments recovered, if not already included in the annual reporting, including improper contract payments voluntarily returned to agencies by contractors prior to agency or payment recapture auditor identification; improper contract payments identified by the vendors, contractors, or agency staff, and used to provide offsets to future payments rather than returned to agencies; improper contract payments identified and returned through agency Office of Inspector General efforts such as audits, reviews, or tips from the public; improper contract payments identified and recovered through management postpayment reviews other than payment recapture audits; improper contract payments identified and returned or paid through contract closeout; and payment recapture targets and performance in meeting those targets on an annual and quarterly basis. DOD's improper payment and recovery auditing policies are in two chapters of its Financial Management Regulation (FMR). DOD uses its FMR to govern financial management within the department by establishing the requirements, principles, standards, systems, procedures, and practices necessary to comply with financial management statutory and regulatory requirements applicable to the department. DOD's FMR chapter on improper payments that was in effect during fiscal year 2011 was issued in 2008, before IPERA was enacted. In October 2011, the department issued a revised chapter on improper payments to implement the requirements of IPERA and associated OMB guidance. According to the FMR chapter on improper payments in effect during fiscal year 2011, DOD components, which include the military services and defense agencies, are to perform risk assessments, statistically estimate improper payments, identify root causes and develop corrective actions, and report improper payment information annually to the OUSD(C). The OUSD(C) is responsible for consolidating component information and preparing department-wide improper payment reports. DOD's FMR chapter on recovery efforts that was in effect during fiscal year 2011 was issued in 2009, before IPERA was enacted. In October 2012, the department issued its revised FMR chapter on recovery audits to implement the requirements of IPERA and associated OMB guidance. The October 2012 recovery auditing chapter was published after fiscal year 2011 was completed, and we determined that the revisions did not affect the findings in this report.
In July 2009, prior to the enactment of IPERA, we reported on DOD’s efforts to address improper payments under IPIA and the recovery auditing requirements under the RAA. In that report, we made 13 recommendations aimed at improving DOD’s efforts to strengthen its improper payment and recovery auditing processes. At that time, DOD did not concur with 12 of our 13 recommendations. However, as discussed in that report, we continued to believe that all 13 recommendations were critical for DOD to enhance its efforts to minimize improper payments and recover those that were made. Figure 1 lists the recommendations from our 2009 report. The department reported improper payment information for the following eight programs in its fiscal year 2011 AFR: Military health benefits are payments made to health care providers for services provided to active duty personnel and their family members, retirees and their family members, and family members of deceased service members through the TRICARE program. Military pay includes active duty pay (Army, Navy, Air Force, and Marine Corps) as well as the reserve components’ pay (Army Reserve, Army National Guard, Navy Reserve, Air Force Reserve, Air National Guard, and Marine Corps Reserve). Civilian pay includes civilian pay accounts from each of the components (Army, Air Force, Navy/Marine Corps, and defense agencies). Military retirement pay includes both payments to military retirees and the family members of deceased retirees (annuitants). Travel pay includes travel payments made through the Defense Travel System (DTS) for the military services and defense agencies as well as additional travel payments made by the Army, Navy, and Air Force for vouchers paid outside of DTS. DFAS commercial pay includes payments made by DFAS on behalf of DOD components to vendors and contractors. USACE travel pay includes travel payments made by USACE to employees. USACE commercial pay includes contract payments made by USACE. Figure 2 shows the total outlays, improper payment totals (sum of the overpayments and the underpayments), and total improper payments as a percentage of total outlays, as reported by DOD in its fiscal year 2011 AFR. The improper payment total shown for DFAS commercial pay was not a statistical estimate, but was limited to known improper payments. As shown in figure 2, DOD identified its programs for improper payment estimation and reporting in such a way that each program represents a category of disbursements made by the department. OMB’s guidance does not specify how agencies are to identify programs for improper payment estimation and reporting, but advises that agencies determine the grouping of programs that most clearly identifies and reports improper payments for their agency. DOD did not adequately implement key provisions of IPIA, IPERA, and OMB guidance related to estimating improper payments, identifying programs susceptible to significant improper payments, reducing improper payments through corrective actions, recovering improper payments, and reporting improper payment estimates and recovery efforts. Most important, we found that DOD’s improper payment estimates were neither reliable nor statistically valid. Also, DOD did not conduct a risk assessment for fiscal year 2011 in accordance with IPERA requirements. Further, although DOD had a corrective action plan for fiscal year 2011, the plan did not identify the underlying reasons or conditions that caused the errors to occur. 
Additionally, DOD did not conduct recovery audits, nor did it determine that such audits would not be cost effective, as required by IPERA. Finally, the department did not have procedures to ensure that improper payment and recovery audit reporting in its fiscal year 2011 AFR was complete, accurate, and compliant. DOD’s improper payment estimates reported in its fiscal year 2011 AFR were neither reliable nor statistically valid because of several deficiencies in the department’s procedures as documented in its sampling methodologies. Because of DOD’s long-standing and pervasive financial management weaknesses, the department did not have complete and accurate populations of payments from which to select statistical samples. We also identified deficiencies related to its (1) sampling methodologies and (2) maintenance of key documentation supporting its improper payment estimates. The foundation of reliable statistical sampling estimates is a complete, accurate, and valid population from which to sample. However, the department’s long-standing and pervasive financial management weaknesses precluded it from validating the completeness of its payment transaction populations. For example, DOD’s fiscal year 2011 Statement of Budgetary Resources (SBR) reported nearly $1,017 billion in gross outlays in fiscal year 2011. As previously shown in figure 2, the outlays for the eight programs for which the department reported improper payments totaled $617 billion. DOD attributed most of the difference between the SBR gross outlays and outlays for the eight programs to intragovernmental transactions and trust fund transfers, which IPERA exempted from improper payment estimation and reporting requirements. However, the department was unable to reconcile these two outlay amounts. DOD acknowledged in its fiscal year 2011 AFR that reported outlays for the eight programs could not be reconciled to gross outlays reported in the SBR. As a result, DOD could not ensure that all required outlays for improper payment reporting purposes were included in the sample populations. Although DOD had documented methodologies for developing improper payment estimates, DOD did not establish and perform key quality assurance procedures, such as reconciliations, on its program populations to validate that the populations were complete and accurate before selecting the statistical samples that were used to estimate improper payments. Standards for Internal Control in the Federal Government states that control activities such as reconciliations are an integral part of an entity’s planning, implementing, reviewing, and accountability for stewardship of government resources and achieving effective results. An effective reconciliation process would involve comparing transactions to supporting documentation, systems of record, or both to ensure the completeness, validity, and accuracy of financial information. An effective reconciliation process also involves resolving any discrepancies that may have been discovered and determining if unauthorized changes have occurred to transactions during processing. In addition to the lack of complete, valid, and accurate populations, deficiencies in DOD’s procedures as documented in its sampling methodologies further impaired DOD’s ability to produce reliable improper payment estimates. First, DOD’s sampling methodologies are based on the use of simple random samples to select payments to review for improper payments and thereby derive error rates.
Using these methodologies, each transaction in the programs’ sample populations had an equal chance of selection without regard to the complexity of the transaction or its risk of being an improper payment. OMB guidance states that agencies will need to utilize complex sample designs to the extent their payment population contains wide-ranging dollar amounts, types of payments, or both. In addition, DOD did not use a sampling unit that was statistically appropriate for any of its programs. For example, the sampling unit for travel pay for fiscal year 2011 was the travel voucher. Each voucher had an equal chance of selection in the samples upon which improper payment estimates were based. However, DOD’s travel pay transactions range in complexity from an individual soldier’s relocation to payments made on travel vouchers involving multiple travel orders. As another example, DFAS commercial pay’s sampling unit was an individual invoice. As DOD reported in the fiscal year 2012 AFR, a $10 million invoice had the same chance of being sampled as a $100 invoice. Generally, higher dollar payments involve more complex transactions and thus are at greater risk of being an improper payment. If a population contains a few large invoices and many smaller invoices, equal probability sampling is unlikely to capture the large invoices. DOD’s sampling methodologies do not account for this risk. By not designing more complex sampling methodologies that utilize more statistically appropriate sampling units, such as dollars paid, DOD’s improper payment estimates could be significantly understated. Further, DOD provided evidence that it used its sampling methodologies to calculate statistically valid improper payment error rate estimates and related confidence intervals for military pay, civilian pay, and travel pay for fiscal year 2011, but did not provide such evidence for military health benefits, military retirement, USACE commercial pay, or USACE travel pay. Additionally, DOD did not generate statistically valid improper payment dollar value estimates for any of its programs. For instance, DOD did not use appropriate weights to calculate the reported dollar value estimates. Moreover, DOD did not derive confidence intervals for its improper payment dollar value estimates for any of its programs. Generally accepted statistical standards require the calculation and disclosure of confidence intervals around an estimate with a specified degree of confidence. Confidence intervals are a measure of the possible difference between the sample estimate and the actual population value, providing an idea of how close the sample estimate is to the actual population value. As previously mentioned, DOD did not statistically derive an estimate of improper payments for DFAS commercial pay for fiscal year 2011, but instead limited its reporting to known improper payments. Although DOD reported a statistically derived improper payment estimate— $100.1 million—for DFAS commercial pay for fiscal year 2012, the sampling methodology used to produce this estimate had deficiencies similar to the methodologies used for the department’s other programs. For example, to estimate DFAS commercial pay improper payments, DOD did not use a statistically appropriate sampling unit or a methodology that considered large dollar amounts or the level of complexity of the related payments. 
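The practical consequence of the sampling-unit choice can be seen in a small simulation. The sketch below is not DOD’s data or methodology; the invoice population, error amounts, and sample sizes are invented. It compares an equal-probability sample of invoices with a dollars-weighted (probability-proportional-to-size) design of the kind OMB’s guidance contemplates, under the condition the report describes: improper payments concentrated in a few large, complex invoices.

```python
import random
import statistics

random.seed(1)

# Hypothetical population: 9,990 routine $100 invoices plus 10 complex
# $10 million invoices, with improper amounts ($500,000 each) occurring
# only on the large invoices.
amounts  = [100.0] * 9990 + [10_000_000.0] * 10
improper = [0.0] * 9990 + [500_000.0] * 10
N = len(amounts)
true_total = sum(improper)  # $5,000,000

def srs_estimate(n=300):
    """Equal-probability sample of invoices, expanded by N/n."""
    idx = random.sample(range(N), n)
    return sum(improper[i] for i in idx) * N / n

def pps_estimate(n=300):
    """Dollars-weighted sample (PPS with replacement, Hansen-Hurwitz):
    the average improper-per-dollar ratio is scaled to total dollars."""
    total_dollars = sum(amounts)
    idx = random.choices(range(N), weights=amounts, k=n)
    return total_dollars * sum(improper[i] / amounts[i] for i in idx) / n

srs = [srs_estimate() for _ in range(1000)]
pps = [pps_estimate() for _ in range(1000)]
print(f"true improper total: ${true_total:,.0f}")
print(f"SRS median estimate: ${statistics.median(srs):,.0f}")
print(f"SRS runs finding $0: {100 * sum(e == 0 for e in srs) / len(srs):.0f}%")
print(f"PPS median estimate: ${statistics.median(pps):,.0f}")
```

Under these hypothetical conditions, roughly three-quarters of the equal-probability samples contain no large invoice at all and therefore find no improper dollars, while the dollar-weighted design centers on the true total. This is the understatement risk described above.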
In its AFR for fiscal year 2012, DOD noted that the department had identified $318.3 million in known improper payments for DFAS commercial pay for fiscal year 2012. Further, DOD cited the sampling methodology as the main reason for the difference between the reported estimate of $100.1 million and the known amount of $318.3 million for DFAS commercial pay improper payments, which provides further evidence of how the deficiencies we identified in the sampling methodologies adversely affect the reliability of the resulting estimates. In our July 2009 report, we recommended that DOD develop and implement a statistically valid methodology to estimate and report commercial improper payments (contract and vendor over- and underpayments). This recommendation remains valid given the issues we have found during the course of this review. DOD did not have procedures in place to collect and maintain key supporting documentation needed to substantiate the improper payment estimates reported in its fiscal year 2011 AFR. For example, DOD did not maintain complete supporting documentation for the populations of transactions, from which statistical samples were selected, for most of the programs for which improper payment estimates were reported. This deficiency contributed to our determination that DOD’s reported improper payment estimates were not reliable. DOD officials stated that they were unaware of the extent of documentation necessary for the department to maintain to support its improper payment estimates. Standards for Internal Control in the Federal Government requires all transactions and other significant events to be clearly documented and the documentation readily available for examination. Further, OMB guidance directs agencies to retain documentation to support the calculation of their estimates. To enable auditors and other parties to substantiate reported improper payment estimates, we determined that the following documentation would generally need to be maintained: Description of the sampling methodology, including identification of the sampling units accompanied by an explanation of how the sampling units were determined. Schedules showing the total number of items and dollar value totals for each sample population for each military service and defense agency by program and month. Descriptions and results of data reliability and quality assurance testing conducted to ensure that payment information in the sample populations was complete and accurate. Descriptions of how each sample was selected, including the random number and how it was generated, the software used to select the sample from the sample population, and copies of software program logs and related output files. The software program logs and related output files should have the details related to the sample population totals and samples selected, including the total number of items and dollar value totals in the sample population as well as a list of the items selected to be in the sample. In addition, a description of the method of selection of replacement items is needed. Descriptions of attributes and variables for each sampled transaction used to derive the total dollar value of improperly paid amounts, including source documents for each sampled transaction that support the conclusion of whether the sampled payment was improper.
Calculations, spreadsheets (including cell formulas), and software programs used to evaluate the individual test results for every sample item tested, including the calculations used to derive each improper payment error rate and dollar value estimate and the related confidence intervals. Calculations, spreadsheets (including cell formulas), software programs, inputs and outputs used to aggregate the individual error rate and dollar value estimate and related confidence intervals from each sample to derive the improper payment estimates reported in the AFR. These documents should provide a clear trail showing how the results of each sample were aggregated, from the lowest level of sampling test results through to the estimates published in the AFR. Schedules listing all missing sample items and how these sample items were treated and related explanations. Schedules listing all items replaced and related explanations. The lack of complete supporting documentation precludes DOD and others from being able to determine the reliability of its reported improper payment estimates. DOD did not perform a risk assessment for fiscal year 2011 as required by IPERA, because DOD officials told us that they did not see any added value in doing a risk assessment. According to DOD officials, OMB directed the department to consider all of its programs as risk-susceptible—following its review of the department’s fiscal year 2006 improper payment reporting—because of the complex nature of the department’s business processes and the large dollar value of its annual payments. However, DOD officials were unable to provide documentation of this directive. Moreover, as discussed previously, IPERA laid out a clear statutory requirement to perform a risk assessment in fiscal year 2011, which would supersede an earlier OMB directive. Our executive guide describes characteristics of an effective risk assessment done for the purpose of determining an entity’s susceptibility to improper payments. A risk assessment is an activity that entails a comprehensive review and analysis of program operations to determine where risks exist and what those risks are, and then measuring the potential or actual impact of those risks on program operations. Once risk areas are identified, their potential impact on programs and activities should be measured and additional controls should be considered. As risks are addressed and controls are changed, the assessment should occasionally be revisited to determine where the risks have decreased and where new areas of risk may exist. By not doing a risk assessment for fiscal year 2011, DOD missed the opportunity to gain critical information for determining corrective actions needed to reduce improper payments. Periodic risk assessments are critical to ensuring that the department is identifying the root causes of improper payments and developing appropriate corrective actions. The information developed during a risk assessment forms the foundation or basis upon which management can determine the nature and type of corrective action needed. In addition, this information gives management baseline data for ensuring progress in reducing improper payments. Additionally, performing risk assessments may be more cost beneficial than estimating improper payments for each program.
Given the time and resources needed to verify the completeness of populations, select and test samples, and evaluate and project the results of the samples, the department may be able to realize savings by first performing risk assessments. Moreover, if performed in a manner similar to that described in our executive guide, the information gained during the risk assessment may help DOD to determine the best sampling methodology to be used for each program, develop corrective actions, and guide recovery auditing efforts. In addition, DOD’s revised FMR chapter on improper payments (DOD, FMR, Volume 4, Chapter 14, Improper Payments (October 2011)) is not fully in accordance with IPERA. For example, IPERA requires agencies to perform a risk assessment in the year after enactment (fiscal year 2011) and at least once every 3 years thereafter. However, the FMR chapter states that components are required to conduct risk assessments only for those programs or activities for which the risk level is unknown or is not currently measured and reported. DOD’s FMR does not provide a systematic approach to ensure that all programs and activities are reviewed to determine susceptibility to improper payments. Moreover, although DOD’s FMR states that components’ risk assessment methodologies must be documented and maintained, the FMR does not provide detailed requirements on what should be documented and maintained. As discussed previously, DOD’s lack of a risk assessment makes it difficult for the department to fully identify root causes and develop a comprehensive, effective corrective action plan. While DOD has a policy for developing and reporting on corrective actions, it did not have detailed procedures for identifying root causes and related corrective actions. Also, the department’s corrective action plan, included in its fiscal year 2011 AFR, did not contain all elements of corrective action plans required by IPERA and OMB guidance, such as establishing accountability for reducing improper payments and including completion dates for implementing corrective actions. DOD’s corrective action plan reported reasons that improper payments occurred for all eight programs and included corrective actions to address them, but the reported reasons identified the type of errors that resulted in the improper payments, rather than the root causes—the underlying conditions that caused the errors to occur. DOD’s identified reasons do not consider possible underlying systemic causes of the errors, such as whether manual and automated controls were either not sufficient or not operating as intended. As a result, the related corrective action(s) addressed specific errors and not necessarily the underlying condition that gave rise to the error. Agencies, when developing corrective action plans, can use the results of risk assessments to ensure that the root causes leading to improper payments are identified. Also, the corrective actions reported were not sufficiently detailed to assess whether they would address the errors that were identified for DOD’s reported programs. For example, in fiscal year 2011, DOD reported that the corrective actions for military pay consisted of working with the military services to advise them of the results of payment reviews and the associated reasons for errors, including the provision of monthly reports on the reasons for individual improper payments and improper payment trends.
While these corrective actions provide information on the reasons for improper payments, they do not indicate what, if any, actions the military services would take to address the causes of improper payments. DOD’s corrective action plan also did not describe the required steps for ensuring that responsible officials are held accountable for reducing improper payments, as required by IPERA. In addition, according to the United States Chief Financial Officers Council’s (CFOC) Implementation Guide for OMB Circular A-123, agencies should have procedures for tracking the status of corrective action plans. The implementation guide provides that corrective action plans should include measurable indicators of compliance and resolution for assessing and validating progress throughout the resolution cycle. However, DOD’s corrective action plan did not include (1) a timetable for when the corrective actions were to be implemented and (2) measurable indicators of compliance and resolution, which include follow-up tests to verify whether procedures and controls are working, to assess and validate progress in reducing improper payments. In our July 2009 report (GAO-09-442), we recommended that DOD identify the root causes of improper payments, develop related corrective actions, and monitor those actions to ensure that future improper payments will be reduced or eliminated. However, the FMR does not provide detailed procedures for the components to follow to identify root causes and develop corrective actions, and for the department to follow in monitoring the implementation of those corrective actions. DOD has not yet implemented our recommendations and told us that it was not planning any significant changes to its corrective action processes. Until DOD develops and implements detailed procedures that include the information required by IPERA and OMB guidance and recommended by best practices, DOD will continue to be hindered in its ability to (1) develop corrective action plans that address root causes, (2) effectively monitor and measure the progress made in taking those corrective actions, (3) hold individuals responsible for implementing corrective actions, and (4) communicate to agency leaders and key stakeholders the progress made toward remediating improper payments. We identified multiple deficiencies and omissions in DOD’s efforts to implement IPERA’s recovery audit requirements due to a lack of appropriate procedures. DOD neither conducted recovery audits in fiscal year 2011 nor determined that such audits would not be cost effective, as required by IPERA. Further, most DOD programs did not identify and collect cost information for their recovery efforts that would permit cost-effectiveness evaluations, and the programs that did collect this information did not subsequently evaluate the programs to ensure that they were, in fact, cost effective. We also identified deficiencies and omissions in the payment recapture audit plan that DOD submitted to OMB. DOD did not conduct recovery audits for the eight programs for which it reported improper payments in fiscal year 2011, nor has it determined that such audits would not be cost effective (i.e., that the government would not suffer additional financial losses because of ineffective recovery programs), as required by IPERA, because of outdated policy and a decision to rely on other recovery mechanisms. DOD’s FMR chapter on recovery efforts that was in effect during fiscal year 2011 was issued in 2009, before IPERA was enacted. As a result, the FMR chapter did not account for the expansion of recovery audits beyond commercial payments, as called for by IPERA.
In addition, DOD cited its difficulties in tracing transactions back to source documentation as a major obstacle to conducting effective recovery audits. In lieu of conducting cost-effective recovery audits, DOD’s payment recapture audit plan stated that the department would rely on efforts such as random sampling of improper payments, DOD Inspector General (IG) and other auditor findings, self-reporting by recipients, and other activities, such as periodic independent reviews of commercial payments, to identify overpayments for potential recovery for seven programs. However, DOD did not describe any improper payment recovery effort in place for travel pay in the payment recapture audit plan. According to the fiscal year 2011 AFR, DOD estimated $238.2 million in overpayments related to travel pay during fiscal year 2011. Through procedures, including analysis of duplicate payments from fiscal years 2009 and 2010, DOD identified for recovery $1.6 million in travel pay overpayments, or less than 1 percent of the program’s estimated improper payments. DOD officials stated that they believe that a significant portion of the estimated improper overpayment amount was due to missing supporting documentation and did not represent funds owed to the federal government. However, DOD was not able to quantify how much, if any, of the overpayment estimate was due to missing documentation. Because DOD has not established recovery audits to recapture improper overpayments, and has not determined that such mechanisms would not be cost effective, DOD is at risk of forgoing the detection and recovery of potentially substantial funds owed to the government. In our July 2009 report (GAO-09-442), we recommended that DOD establish and implement policies and procedures for recovery auditing. In October 2012, DOD issued a revised FMR chapter on recovery audits to establish the DOD program to implement the requirements of IPERA and associated OMB guidance with respect to recovery audits. The revised FMR directs all components with programs and activities with annual payments that exceed $1 million to determine if instituting recovery audits is cost effective. However, our review of the 2012 FMR chapter indicates that the guidance is still lacking some key elements that would enable DOD components to fully implement IPERA and OMB guidance. For example, the 2012 FMR chapter does not require the components to submit information to the OUSD(C) that OMB directs agencies to report, such as the amount of commercial pay recoveries that were used to offset future payments and the amount of improper overpayments identified through contract closeouts. Consequently, DOD has not yet fully implemented our 2009 recommendation. We found that DOD, with the exception of USACE, did not have procedures to identify and collect information on costs related to its payment recovery efforts. As a result, DOD did not determine if its ongoing recovery efforts, such as periodic independent reviews of commercial payments, were cost effective or if it would be cost effective for the department to establish and implement recovery audits for its programs. Even when DOD did determine the cost of certain improper payment recovery efforts, the department did not ensure that the efforts were cost effective. For example, USACE officials told us that the agency’s daily automated review system, which uses a data mining process to review contract payments for potential errors, costs $64,000 annually. However, USACE reported that as of December 2012, the system had only identified and recovered one improper payment of $20.79 since its implementation in May 2009.
A USACE official stated that she believed that the data mining process was mandatory and that USACE was attempting to keep the cost as low as possible. By not assessing the cost-effectiveness of the daily automated review system, DOD is at risk of operating an improper payment recovery effort that is not cost beneficial. As stated previously, in October 2012, DOD issued an updated FMR chapter on recovery audits that directed all components with programs and activities with annual payments that exceed $1 million to determine if instituting recovery audits is cost effective. Further, the FMR chapter directed DOD components to report the total cost of their respective recovery audits and related recovery efforts. However, until the department establishes procedures to consistently identify and collect information regarding costs of recovery audits, DOD will be unable to implement the FMR policy and determine if recovery audits are cost effective to operate. OMB directed agencies to prepare and submit to both OMB and the agency’s IG a payment recapture audit plan by January 2011 that describes payment recapture efforts under both IPERA and authorities that pre-dated IPERA (OMB, Memorandum M-11-04). Payment recapture audit plans, if properly developed, would help an agency manage its activities to maximize recovery of improper payments. DOD developed and submitted a payment recapture audit plan to OMB and the DOD IG in January 2011. In response to OMB and DOD IG comments, DOD revised and resubmitted its plan in November 2011. However, we found that DOD’s payment recapture plan did not include the following required elements: the quantity and dollar amount of payment reviews (except for USACE, which indicated quantity of items reviewed); the types of tools used to review payments (except for USACE, which disclosed an Oracle-programmed data mining tool); when the payments that were reviewed were made; a description of whether the payment recapture audit program focuses on programs or particular steps in a program’s payment process that are at higher risk of fraud, waste, and abuse; a description of the guidance that DOD provides to agency staff related to responsibilities and procedures to implement mechanisms to recover improper payments; and the technology being used or planned that would assist in preventing and recapturing improper payments. By not developing and implementing a payment recapture audit plan that contains all elements required by OMB, DOD is not in compliance with OMB requirements and is hindered in its ability to effectively manage its recovery efforts. DOD did not have documented procedures to ensure that improper payment reporting in the AFR was complete, accurate, and in compliance with statutory and regulatory guidance. We identified multiple reporting omissions in DOD’s fiscal year 2011 AFR. For example, the department did not include the following information required by IPERA and OMB guidance: corrective actions that would address the root causes of military health benefits improper payments; the actual or planned completion dates for corrective actions; the portion of the improper payment estimates attributable to insufficient supporting documentation or administrative errors; and whether the agency had the human capital, internal controls, and accountability mechanisms necessary to reduce improper payments.
DOD also did not have documented procedures to ensure that recovery audit reporting in the AFR was complete, accurate, and in compliance with statutory and regulatory guidance. We identified instances where DOD’s reporting of efforts to recover improper payments did not include all information required by OMB guidance. For example, DOD did not disclose the following in the fiscal year 2011 AFR: the amount of contract payments that was voluntarily returned to DOD; the amount of improper contract payments identified by contract closeouts; the amount of commercial pay recoveries used to offset future payments rather than returned to DOD; improper payments identified as a result of DOD IG investigations, GAO audits, or reviews by DOD internal review offices, such as TMA’s Program Integrity Office, in its table of overpayments recaptured outside of payment recapture audits; and the total amount of and justification for identified improper overpayments that were determined to be uncollectible in fiscal year 2011. An OUSD(C) official told us that the OUSD(C) does not have standard operating procedures for the compilation, review, and reporting of improper payment and recovery audit information in its AFR. This OUSD(C) official stated that the department uses Appendix C of OMB Circular No. A-123, Requirements for Effective Measurement and Remediation of Improper Payments; OMB Circular No. A-136; and other relevant OMB instructions to prepare the improper payment addendum. However, as evidenced by the omissions in DOD’s improper payment and recovery audit reporting for fiscal year 2011, the department’s current process is not producing reports that comply with IPERA and OMB guidance. Standards for Internal Control in the Federal Government (GAO/AIMD-00-21.3.1) provides that internal control activities need to be clearly documented. Without documented procedures for the compilation and review of improper payment and recovery audit information, DOD is at risk of continuing to publish incomplete and inaccurate reports. Further, DOD has not yet implemented our 2009 recommendations that the DOD Comptroller perform oversight and monitoring activities to ensure the accuracy and completeness of the improper payment and recovery audit data submitted by DOD components for inclusion in the AFR. During our current review, we identified multiple instances when DOD’s oversight and monitoring of improper payment and recovery audit data submitted by components for inclusion in the AFR did not identify errors or omissions. For example, we identified a calculation error in the military health benefits improper payment estimate. Specifically, TMA used an improper denominator to calculate the improper payment rate for its sample. Instead of dividing the dollar amount of identified improper payments in the sample by the dollar amount paid to providers, which would provide a percentage of improper payments, TMA divided the dollar amount of identified improper payments by the dollar amount billed by providers for the services rendered. As a result, TMA’s improper payment rate of 0.24 percent, as reported in DOD’s fiscal year 2011 AFR, was incorrect. This calculation error was not identified by OUSD(C) personnel during their review of DOD component submissions of data for inclusion in the AFR.
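The effect of the denominator error can be shown with a simple calculation. The figures below are hypothetical, chosen only so that the incorrect method reproduces a 0.24 percent rate; they are not TMA’s actual sample amounts.

```python
# Hypothetical sample amounts, for illustration only.
improper_in_sample  = 2_400_000      # identified improper payments in the sample
paid_to_providers   = 600_000_000    # dollars actually paid on the sampled claims
billed_by_providers = 1_000_000_000  # dollars billed by providers (typically > paid)

correct_rate   = improper_in_sample / paid_to_providers    # divide by dollars paid
incorrect_rate = improper_in_sample / billed_by_providers  # divide by dollars billed

print(f"rate using amount paid (correct):     {correct_rate:.2%}")    # 0.40%
print(f"rate using amount billed (incorrect): {incorrect_rate:.2%}")  # 0.24%
```

Because billed charges generally exceed amounts actually paid, the larger denominator mechanically understates the improper payment rate.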
In addition, OUSD(C) oversight and monitoring did not detect the overstatement of overpayment amounts identified and recovered for military retirement. Upon receiving a notification of death for a retiree or annuitant, DFAS records the entire amount of the last payment made to the retiree or annuitant as an overpayment identified and, after collecting the funds, as an overpayment recovered. However, the identified and recovered amounts recorded did not take into account that the retiree or his/her survivor may have been entitled to receive a portion, if not all, of the payment. As a result, the amounts that DFAS reported to the OUSD(C) as military retirement overpayments identified and recovered were overstated. This is one reason why DOD reported for fiscal year 2011 that the department had identified $67.6 million in military retirement improper overpayments for recovery, while estimating that only $18.8 million in military retirement improper overpayments had occurred. Further, the OUSD(C) oversight and monitoring did not identify omissions in TMA’s submission regarding overpayments identified outside of recovery audits. In its submission to OUSD(C), TMA did not include improper overpayments identified through its Program Integrity Office, which is responsible for prevention, detection, investigation, and control of TRICARE fraud, waste, and abuse. As noted above, the OUSD(C) subsequently omitted this information from its fiscal year 2011 AFR reporting. Therefore, our prior recommendations for the DOD Comptroller to perform oversight and monitoring activities to ensure the accuracy and completeness of the improper payment and recovery audit data submitted by DOD components for inclusion in the AFR remain valid given the findings of this review. Although DOD reported estimated and known improper payments of over $1.1 billion for fiscal year 2011, this amount cannot be relied upon because of the deficiencies we found related to DOD’s procedures for identifying, estimating, reducing, recovering, and reporting improper payments. DOD’s long-standing history of pervasive financial management weaknesses, coupled with problematic sampling methodologies and the lack of adequate supporting documentation, contributed to improper payment estimates that were not reliable. Further, DOD has not established the procedures needed to effectively implement the improper payment and recovery auditing requirements included in IPERA and OMB’s implementing guidance. By not performing a risk assessment as required by IPERA, DOD did not reap the associated benefits, including the ability to better identify root causes and develop a comprehensive and effective corrective action plan to reduce improper payments. DOD’s lack of a detailed and effective corrective action plan also made it difficult for department officials to monitor and measure the extent of progress made to remediate causes, hold individuals responsible for implementing corrective actions, or communicate to DOD leadership and other key stakeholders the extent of the department’s progress in remediating the causes of improper payments. In addition, DOD did not comply with the IPERA requirement to either conduct recovery audits or provide justifications that such audits would not be cost effective. Finally, the department’s lack of key required information in its fiscal year 2011 AFR precludes DOD’s leadership and external stakeholders from determining whether DOD has the necessary human capital, internal controls, and accountability mechanisms to reduce improper payments.
Until the department takes definitive action to address these deficiencies and thereby fulfills the requirements of IPERA and its implementing guidance, it remains at risk of continuing to make improper payments and wasting taxpayer funds. We recommend that the Secretary of Defense direct the Under Secretary of Defense (Comptroller) to take the following 10 actions: With regard to estimating improper payments: Establish and implement key quality assurance procedures, such as reconciliations, to ensure the completeness and accuracy of the sampled populations. Revise the procedures documented in DOD’s sampling methodologies so that they (1) are in accordance with OMB guidance and generally accepted statistical standards and (2) produce statistically valid improper payment error rates, statistically valid improper payment dollar estimates, and appropriate confidence intervals for both. At a minimum, such procedures should take into account the size and complexity of the transactions being sampled. Develop and implement procedures to collect and maintain the supporting documentation necessary to support improper payment estimates. With regard to identifying programs susceptible to significant improper payments, conduct a risk assessment that is in compliance with IPERA. With regard to reducing improper payments, establish procedures that produce corrective action plans that: Comply fully with IPERA and OMB implementation guidance, including, at a minimum, holding individuals responsible for implementing corrective actions and monitoring the status of the corrective actions. Are in accordance with best practices, such as those recommended by the CFOC, and include (1) measuring the progress made toward remediating root causes and (2) communicating to agency leaders and key stakeholders the progress made toward remediating the root causes of improper payments. With regard to implementing recovery audits: Develop and implement procedures to (1) identify costs related to the department’s recovery audits and existing recovery efforts and (2) evaluate existing improper payment recovery efforts to ensure that they are cost effective. Monitor the implementation of the revised FMR chapter on recovery audits to ensure that the components either develop recovery audits or demonstrate that it is not cost effective to do so. Develop and submit to OMB for approval a payment recapture audit plan that fully complies with OMB guidance. With regard to reporting, design and implement procedures to ensure that the department’s annual improper payment and recovery audit reporting is complete, accurate, and in compliance with IPERA and OMB guidance. We provided a draft of this report to the Secretary of Defense for comment. In response, DOD provided written comments, in which it concurred with nine recommendations and partially concurred with one recommendation. In commenting on our report, DOD acknowledged that implementing our recommendations would further strengthen its program. DOD cited its planned actions, including (1) reviewing its sampling methodologies to ensure that they are appropriate and properly documented; (2) developing risk assessments and corrective actions in accordance with IPERA, OMB guidance, and best practices; (3) reviewing its recovery efforts to ensure that they are cost effective; and (4) ensuring that its reporting is complete, accurate, and in compliance with IPERA and OMB guidance.
DOD partially concurred with our recommendation to revise the procedures documented in its sampling methodologies so that they (1) are in accordance with OMB guidance and generally accepted statistical standards and (2) produce statistically valid improper payment error rates, statistically valid improper payment dollar estimates, and appropriate confidence intervals for both. The department believes that its sampling methodologies are in accordance with OMB guidance and produce statistically valid improper payment rates and appropriate confidence intervals. However, as discussed in our report, we found that DOD produced statistically valid error rates and related error rate confidence intervals for only three of its programs. Additionally, we found that DOD did not produce statistically valid dollar estimates and appropriate dollar confidence intervals for any of its programs. However, DOD did state that it will review methodologies for all payment types and make modifications as appropriate. DOD also expressed concern that our characterization of the recommended improvements as “significant” does not account for its efforts to minimize improper payments, particularly through prepayment reviews. We acknowledge in our report efforts DOD has made to attempt to minimize improper payments. However, the deficiencies we identified related to DOD’s identifying, estimating, reducing, recovering, and reporting improper payments are significant and indicate that it has not yet established the detailed procedures necessary to effectively implement IPERA and OMB guidance and thus reduce the risk of making improper payments. Accordingly, we continue to believe that implementation of our recommendations is critical for DOD to enhance its efforts to minimize improper payments and to recover those that are made. DOD’s comments are reprinted in appendix II. DOD also provided technical comments on our draft report, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Secretary of Defense, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact Asif A. Khan at (202) 512-9869 or khana@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. The objective of this engagement was to review the extent to which the Department of Defense (DOD) has implemented key provisions of the Improper Payments Information Act of 2002 (IPIA), the Improper Payments Elimination and Recovery Act of 2010 (IPERA), and related Office of Management and Budget (OMB) guidance. The scope for our engagement was DOD’s improper payments information presented in the department’s fiscal year 2011 Agency Financial Report (AFR), because this was the most current annual report available at the time of our review. 
As part of this objective, we assessed DOD’s plans and actions to estimate improper payments for commercial payments made by the Defense Finance and Accounting Service (DFAS) for fiscal year 2012, because DOD limited its improper payment reporting for this program to known improper payments for fiscal year 2011, rather than reporting a statistical estimate. Although DOD reported improper payment information for eight programs, statistical estimates were provided for only seven of those programs for fiscal year 2011. DOD’s improper payments reporting for DFAS commercial pay for fiscal year 2011 was limited to known improper payments. We interviewed officials from DFAS, the TRICARE Management Activity (TMA), and the U.S. Army Corps of Engineers (USACE) to obtain additional information about these methodologies. To assess the department’s plans and actions for estimating DFAS commercial pay improper payments for fiscal year 2012, we reviewed the department’s methodology for statistically estimating DFAS commercial pay improper payments for fiscal year 2012 and interviewed DFAS officials to obtain clarifications about this methodology. We conducted site visits at two DFAS processing center locations—DFAS-Columbus and DFAS-Indianapolis. We selected the DFAS-Columbus site because this facility processes the largest portion of DOD’s commercial payments and hosts the systems the department uses to track commercial pay improper payments. We interviewed DFAS-Columbus officials regarding how improper payments were identified and reported. We selected DFAS-Indianapolis because this facility houses the team that performed the reviews of selected sample transactions for military pay, civilian pay, military retirement for deceased retirees and annuitants, and travel pay. DFAS-Indianapolis compiles the results of the improper payment testing for all DFAS-tested programs, including DFAS commercial pay and military retirement pay, and reports these results to DOD’s OUSD(C). We examined the department’s corrective action plan and assessed it against the requirements in OMB’s implementing guidance for IPERA, OMB Circular No. A-136, and best practices suggested by the United States Chief Financial Officers Council (CFOC). We followed up with DOD officials, including the Improper Payments Project Officer, to obtain additional information about the department’s corrective action plan. We analyzed DOD’s payment recapture plan, DOD’s FMR chapter on recovery audits, and information in the AFR. We interviewed DFAS, TMA, and USACE officials, as well as the Improper Payments Project Officer, about the processes used to recover improper payments. To assess DOD’s implementation of the reporting requirements in IPERA and OMB’s guidance, we compared the improper payment information provided in DOD’s fiscal year 2011 AFR to the reporting requirements. We interviewed OUSD(C), DFAS, TMA, and USACE officials about the department’s process to compile the information reported in the AFR. To assess the reliability of data reported in DOD’s fiscal year 2011 AFR related to improper payments, we reviewed DOD’s supporting documentation and interviewed knowledgeable agency officials about the data. In the course of this assessment, we determined that DOD did not collect and maintain the supporting documentation necessary to substantiate the improper payment estimates reported in its fiscal year 2011 AFR.
In addition, the department did not perform key quality assurance procedures, including reconciliations on all of the populations for its programs to validate that the populations were complete, valid, and accurate before selecting the statistical samples that were used to estimate improper payments. Therefore, we determined that the data were not reliable. These problems are discussed in our report. We conducted this performance audit from October 2011 to May 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Cindy Brown Barnes (Assistant Director), Sharon Byrd (Assistant Director/Audit Sampling), Michael Bingham, and Sandra Silzer made key contributions to this report. Also contributing to this report were Francine DelVecchio, Justin Fisher, Wilfred Holloway, and Jason Kirwan.
DOD reported $1.1 billion in improper payments for fiscal year 2011, which marked the eighth year of implementation of IPIA, as well as the first year of implementation of IPERA. IPIA required executive branch agencies to annually identify programs and activities susceptible to significant improper payments, estimate the amount of improper payments for such programs and activities, and report these estimates along with actions taken to reduce them. IPERA amended IPIA and expanded requirements for recovering overpayments across a broad range of federal programs. GAO was asked to review the progress DOD has made to identify, estimate, and reduce improper payments. GAO's objective was to review the extent to which DOD has implemented key provisions of IPIA, IPERA, and OMB guidance. GAO reviewed improper payment requirements; analyzed agency financial reports, internal guidance and plans, and sampling methodologies; and interviewed cognizant officials. The scope for this engagement was DOD's reported improper payment information for fiscal year 2011 and DOD's plans and actions to estimate commercial pay improper payments for fiscal year 2012. The Department of Defense (DOD) did not adequately implement key provisions of the Improper Payments Information Act of 2002 (IPIA) and the Improper Payments Elimination and Recovery Act of 2010 (IPERA) and Office of Management and Budget (OMB) requirements for fiscal year 2011. Most important, GAO found that DOD's improper payment estimates reported in its fiscal year 2011 Agency Financial Report were neither reliable nor statistically valid because of long-standing and pervasive financial management weaknesses and significant deficiencies in the department's procedures to estimate improper payments. For example, DOD did not (1) have key quality assurance procedures in place, such as reconciliations, to validate the completeness and accuracy of the populations used to estimate improper payments; (2) develop appropriate sampling methodologies for estimating improper payments; or (3) maintain key documentation supporting its reported improper payment estimates. Also, GAO found significant deficiencies in DOD's policies and procedures to address other key improper payment requirements for fiscal year 2011. Specifically, DOD did not (1) have procedures to identify root causes of improper payments and develop related corrective actions; (2) conduct recovery audits for any of its programs or determine that these audits would not be cost effective; or (3) have procedures to ensure that its annual improper payment and recovery audit reporting is complete, accurate, and in compliance with IPERA and OMB reporting requirements. DOD has taken some actions since fiscal year 2011, such as reporting a statistical estimate for Defense Finance and Accounting Service commercial pay and issuing revised Financial Management Regulation chapters on improper payments and recovery audits. However, until the department takes action to correct the deficiencies GAO found related to identifying, estimating, reducing, recovering, and reporting improper payments and thereby fulfills legislative requirements and implements related guidance, it remains at risk of continuing to make improper payments and wasting taxpayer funds. GAO is making 10 recommendations to improve DOD's processes to identify, estimate, reduce, recover, and report on improper payments. DOD concurred with 9 and partially concurred with 1 of the recommendations and described its plans to address them.
On June 10, 1975, the U.S. government executed a Memorandum of Understanding with the governments of Belgium, Denmark, the Netherlands, and Norway to produce F-16 aircraft under a program known as the F-16 Multinational Fighter Program. Of the 998 aircraft produced under this program, the U.S. Air Force purchased 650 and the European participating governments purchased the remaining 348. Under the ongoing MLU program, the Europeans are upgrading their F-16 aircraft by equipping them with new cockpits and avionics systems. On behalf of the four European participating governments, the U.S. Air Force awarded prime contracts to Lockheed Martin Tactical Aircraft Systems and Northrop Grumman Corporation valued at $622.7 million and $106.5 million, respectively, to provide the aircraft upgrades. The U.S. government participated in the development phase of the MLU program, but it withdrew from the production phase in November 1992. The European countries’ Supreme Audit Institutions (SAIs) have raised a number of issues regarding the pricing of the MLU contracts. The U.S. and European participating governments agreed that they would “endeavor to establish the same price for the same articles when they were procured under the same conditions from the same source.” Due to the proprietary nature of the information affecting the negotiation of the contracts, SAIs are precluded from having access to this information. On December 15, 1994, a meeting involving representatives from the U.S. and the European participating governments was held, during which agreement was reached to provide assurance that the MLU contract prices were fair and reasonable. Among the issues discussed were the rates and factors used to price the MLU contracts. According to the minutes of the meeting, the European representatives were assured that the “. . . rates and factors that are used for MLU contracts are the same for all other LFWC [Lockheed Fort Worth Company] F-16 contracts with the U.S. Government.” Since these rates and factors are proprietary, the Netherlands representative asked if the United States could provide certification that the same rates are used on all U.S. government contracts. The Defense Plant Representative Office Commander agreed to provide the certification and did so on March 24, 1995. Lockheed Martin and Northrop Grumman proposed and Air Force negotiators used rates and factors to price the two MLU prime contracts that were different from those used to price contemporaneous U.S. government contracts. Also, Air Force negotiators used two incorrect rates in pricing the Northrop Grumman prime contract. These two conditions increased the prime contract prices by a total of $9.4 million. The rates and factors used to price the Lockheed Martin MLU contract were not the same as those used to price U.S. government contracts. Instead, on December 23, 1994, Lockheed Martin proposed a “special” set of rates to price the MLU contract rather than using the lower forward pricing rate agreement (FPRA) rates in effect at that time. The Air Force used the special rates in negotiating the MLU contract prices. This action increased the contract price by $8 million. During the December 1994 working group meeting involving U.S. and European representatives, the Defense Plant Representative Office Commander stated he would certify that the rates used to price the MLU contract would be the same as those used to price all U.S. government contracts. Subsequently, in a March 24, 1995, written certification, the Commander stated “. . .
that the applicable FPRA rates and factors used in the MLU program are the same as all other programs negotiated between the LFWC and the U.S. Government.” However, contrary to the Commander’s certification, the Air Force negotiated two other contracts with Lockheed Martin on the same day the MLU contract was negotiated using lower FPRA rates and factors. Neither Lockheed Martin nor the Air Force withdrew from the FPRA that was in effect at the time the MLU contract price was agreed to. The Defense Federal Acquisition Regulation Supplement stipulates that FPRA rates must be used to price contracts unless waived by the head of the contracting activity. No such waiver was requested or obtained for the special rates used to price the MLU contract. Furthermore, there was no evidence in the contract negotiation records or files that the special rates were audited by DCAA or approved for use by the Defense Plant Representative Office. Lockheed Martin proposed and Air Force negotiators used the lower FPRA rates to establish the negotiation objective for the contract price. Before contract price agreement was reached, however, Lockheed Martin provided Air Force negotiators the special set of rates and factors that they accepted and used to price the contract. Lockheed Martin officials told us a special set of rates and factors was required to negotiate the MLU contract because the existing FPRA was only valid through calendar year 1997. They explained that the MLU contract performance period covered calendar years 1993 through 2001 and that rates and factors for the outyears were required. They believe that the special rates benefited the MLU customers because a new FPRA, negotiated shortly after the MLU contract, included higher rates than those used for the MLU contract. In responding to a draft of this report, the Air Force agreed a special set of rates and factors was used to price the MLU contract, but it believed the use of those rates and factors was in the best interest of the European participating governments. The Air Force also stated that the Defense Plant Representative Office Commander signed the certification in good faith, based on his knowledge at that time, and with full intention of being consistent with the pricing agreement between the U.S. and the European participating governments. The Air Force further stated that the Defense Plant Representative Office was negotiating a new FPRA while MLU contract negotiations were going on and had already offered Lockheed Martin higher rates and factors than were in the existing FPRA. The Air Force pointed out that Lockheed Martin would never have accepted the lower existing FPRA rates and factors, which covered the period 1993 through 1997. We agree that the certification was signed in good faith. We also agree that the existing FPRA extended only through 1997 and that rates and factors were needed to cover the MLU contract performance period. However, when changing conditions cause rates in an FPRA to be no longer valid, defense procurement regulations provide approved methods for dealing with the situation—either withdraw from the rate agreement or obtain a waiver from the head of the contracting activity. Air Force negotiators did neither. We found that the Defense Plant Representative Office had issued recommended rates and factors covering 1998 and 1999. Thus, Air Force negotiators—using the existing FPRA and recommended rates—had rates and factors covering 1993 through 1999. 
According to negotiation records, this period accounted for 99 percent of the MLU contract value. Furthermore, the $8-million increase to the MLU contract is not due to higher rates and factors for the years beyond the FPRA period. Rather, the increase is due to increased rates and factors for 1993 through 1997—the same period covered by the existing FPRA.

In addition, the MLU contract awarded to Northrop Grumman for radar systems encountered the same situation as the Lockheed Martin contract—that is, it extended beyond the period covered by the existing FPRA. However, in contrast to the Lockheed Martin situation, the Air Force used existing FPRA rates and factors to price the radar contract. The contract performance period extended into the year 2002, while the existing FPRA went through only 1996. Northrop Grumman proposed and the Air Force used the existing FPRA rates and factors and projected them over the remaining contract performance period.

Northrop Grumman proposed and the Air Force accepted a G&A overhead rate established for pricing foreign military sales contracts rather than a lower domestic rate established for pricing U.S. government contracts. Use of the G&A rate for foreign military sales contracts increased the MLU contract price by $1.3 million. Northrop Grumman officials told us they used the G&A rate for foreign military sales contracts because of the additional costs of doing business with foreign customers. They also stated they were unaware of any requirement to use the same rates applied to U.S. government contracts. They further stated that such a requirement was not made known to the corporation in the Air Force’s request for proposal or subsequent contract award.

In commenting on a draft of this report, the Air Force pointed out that use of the foreign military sales G&A rate was proper on the Northrop Grumman MLU contract. The Air Force advised us that the contractor could not use and the Air Force could not accept the domestic G&A rate for pricing the contract because it would be a misallocation of costs. The Air Force also pointed out that use of the foreign military sales G&A rate did not violate the intent or the spirit of the agreement between the U.S. and the European participating governments. It should be noted that while the Air Force contends that it would have been improper to use the domestic G&A rate for pricing the Northrop Grumman contract, the Air Force used a domestic G&A rate to price the Lockheed Martin MLU contract. The Air Force did not explain this inconsistency.

In addition to using the higher G&A rate for foreign military sales contracts, Air Force negotiators used two incorrect rates in pricing the MLU contract, which increased its price by $163,600. The Air Force concurred that use of the incorrect rates was an oversight. In total, the MLU contract price was increased by $1.4 million as a result of using the higher G&A rate for foreign military sales contracts and two incorrect rates.
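As a quick check, the reported increases reconcile as follows. This is a minimal sketch using the rounded dollar amounts cited in this report; because the $8 million and $1.3 million figures are themselves rounded, the computed sums slightly overshoot the report's rounded totals of $1.4 million and $9.4 million.

```python
# Reconciling the MLU contract price increases cited in this report.
# All amounts are the report's (rounded) figures, so the computed sums
# differ slightly from the report's rounded totals.

lockheed_special_rates = 8_000_000  # special rates vs. the lower FPRA rates
northrop_fms_ga = 1_300_000         # foreign military sales G&A rate vs. domestic rate
northrop_incorrect = 163_600        # two incorrect rates used by Air Force negotiators

northrop_total = northrop_fms_ga + northrop_incorrect
print(f"Northrop Grumman increase: ${northrop_total:,} (reported as $1.4 million)")

combined = lockheed_special_rates + northrop_total
print(f"Combined increase: ${combined:,} (reported as $9.4 million)")
```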
DCAA conducted preaward audits of both prime contract proposals and questioned various costs. DCAA also reported large amounts of proposed subcontract costs as unresolved because several subcontractor price proposals had not been audited at the time of its preaward audits. Price negotiation memorandums showed DCAA helped the Air Force evaluate updated contractor proposals during fact-finding prior to contract price negotiations. In addition to making specific recommendations on proposed costs, DCAA also provided Air Force negotiators with information on deficiencies in the contractors’ estimating systems, material management and accounting systems, and other operations.

The price negotiation memorandums clearly show that Air Force negotiators used DCAA recommendations to assist in establishing objectives and negotiating lower prices for the two prime contracts. The memorandum for the Lockheed Martin contract, for example, shows DCAA reported a substantial amount of proposed subcontract costs as unresolved because audits of the subcontracts had not been completed at the time of DCAA’s review. DCAA reported the same condition for the Northrop Grumman contract. Audits of the subcontractor proposals were subsequently obtained, and Air Force negotiators used the information in negotiating the contract prices. Air Force negotiators also used other DCAA recommendations in negotiating the prices of the contracts. On the Northrop Grumman contract, for example, they extensively used DCAA’s recommendations on proposed material costs. The price negotiation memorandum showed Air Force negotiators were able to obtain most of DCAA’s recommended cost reductions for material.

We reviewed the fairness and reasonableness of subcontract and material costs negotiated in the prime contracts because these costs comprised about 88 percent of the combined negotiated contract prices. Subcontracts and material under the Lockheed Martin contract totaled $572.7 million, or about 92 percent, of the $622.7-million contract price. Subcontracts and material under the Northrop Grumman contract comprised $66.2 million, or about 62 percent, of the $106.5-million price.
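These cost shares follow directly from the reported contract amounts; a minimal check (amounts in millions of dollars, as reported above):

```python
# Subcontract and material cost shares of the two MLU prime contracts,
# computed from the contract amounts reported above (millions of dollars).
lockheed_price, lockheed_sub_material = 622.7, 572.7
northrop_price, northrop_sub_material = 106.5, 66.2

print(f"Lockheed Martin share:  {lockheed_sub_material / lockheed_price:.0%}")  # about 92%
print(f"Northrop Grumman share: {northrop_sub_material / northrop_price:.0%}")  # about 62%

combined = (lockheed_sub_material + northrop_sub_material) / (
    lockheed_price + northrop_price)
print(f"Combined share:         {combined:.0%}")  # about 88%
```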
For competitively priced subcontracts, we examined the supporting records and, if adequate competition occurred, we accepted the prices as fair and reasonable. For noncompetitively priced subcontracts, we examined the negotiation records to determine if appropriate safeguard techniques were used to negotiate the prices.

At the time of the prime contract price agreement dates, Lockheed Martin had negotiated firm prices for 10 of its 11 major subcontracts, and Northrop Grumman had negotiated firm prices for both of its major subcontracts. The contractors used the pricing techniques required by the Federal Acquisition Regulation in negotiating subcontract prices. Subcontract files and other records showed that Lockheed Martin and Northrop Grumman (1) obtained cost or pricing data, (2) conducted cost analyses, (3) conducted price negotiations, and (4) obtained certificates of current cost or pricing data. The cognizant Defense Plant Representative Offices also obtained audits from DCAA or the participating governments’ audit agencies of the subcontractor price proposals and provided the audit reports to Air Force negotiators. For the subcontract that was not priced at the time of prime contract price agreement, Lockheed Martin, as required by the Federal Acquisition Regulation, obtained cost or pricing data from the subcontractor and prepared a cost analysis of the subcontract proposal. Air Force negotiators accepted the proposed and negotiated subcontract prices as fair and reasonable based on the prime contractors’ evaluation and negotiation efforts. We did not examine material items on the Lockheed Martin contract because they comprised less than 1 percent of the contract price.

As for the Northrop Grumman contract, we examined the pricing of selected material items because material costs comprised about 9 percent of the contract price. Northrop Grumman used appropriate safeguard techniques to price material items. None of the eight high-dollar items we selected for review were priced at the time of prime contract price agreement. Northrop Grumman based its proposed prices for four of the items on supplier competitive quotations. Northrop Grumman received multiple quotations for the four items; therefore, we accepted the competitive prices as fair and reasonable. Northrop Grumman based its proposed prices for the other four items on noncompetitive quotations, and it conducted price analyses for the items. For two of the items, the price quotations fell below the maximum prices established by the price analyses, and Northrop Grumman accepted the proposed prices as fair and reasonable. Quotations for the other two items were higher than the maximum prices established by the price analyses, and Northrop Grumman decremented the quotations and submitted the lower prices to Air Force negotiators. During prime contract price negotiations, Air Force negotiators applied an additional decrement against the proposed prices for all eight items.

There are indications that material is overpriced by as much as $947,000 under the two prime contracts because the prime contractors did not provide government negotiators with accurate, complete, and current data available for the items at the time of the contract price agreement dates. We provided this information to the cognizant DCAA offices, and they are reviewing material prices in both prime contracts to determine the extent of overpricing. The amount of overpricing may change as DCAA continues its review.

As requested, we reviewed the pricing of the subcontracts Lockheed Martin negotiated with Hazeltine for the advanced identification friend or foe system and with Honeywell for the color multifunction display system. The Hazeltine subcontract was awarded on a competitive basis, while the Honeywell subcontract was awarded on a noncompetitive basis.

The subcontract awarded to Hazeltine was competed between Hazeltine and three other vendors. Lockheed Martin subjected the responsive proposals to a technical evaluation, management evaluation, risk analysis, and cost evaluation and determined that Hazeltine had the lowest risk approach with the highest probability of successful completion. Hazeltine was the only supplier that proposed to meet all of the technical requirements. Lockheed Martin concluded Hazeltine’s proposed price was fair and reasonable and awarded the subcontract. Air Force negotiators also accepted the subcontract price as fair and reasonable.

Lockheed Martin used the same safeguard techniques in negotiating the Honeywell subcontract that are required to be used in negotiating subcontracts under U.S. government prime contracts. There was not an FPRA with Honeywell at the time the subcontract price was negotiated; however, recommended rates and factors had been issued for Honeywell contracts. Lockheed Martin used the recommended rates and factors in negotiating the subcontract price. Air Force negotiators accepted the negotiated price as fair and reasonable.

Air Force and contractor officials reviewed a draft of this report, and their comments have been incorporated in the text where appropriate. Their comments are presented in their entirety in appendixes I, II, and III.

SAIs selected two prime contracts for review.
The first prime contract involved the letter contract the Air Force awarded to Lockheed Martin on August 17, 1993. The contract provides for the production of modification kits to upgrade the cockpit and avionics systems on the F-16 aircraft. The Air Force and Lockheed Martin agreed on the contract price on April 21, 1995, and the final contract was signed on June 13, 1995. The second prime contract involved a letter contract the Air Force awarded to Northrop Grumman on December 3, 1993. The contract provides for the production of modification kits for the AN/APG-66(V)2 fire control radar. The Air Force and Northrop Grumman agreed on the contract price on July 15, 1994, and the final contract was signed on September 27, 1994.

SAIs also selected two subcontracts for review. Both were awarded under the Lockheed Martin prime contract. The first involved the subcontract Lockheed Martin awarded to Honeywell (purchase order 354) on October 30, 1995, for the production of the F-16 color multifunction displays. The second involved the subcontract Lockheed Martin awarded to Hazeltine (purchase order 4XU) on September 24, 1993, for the production of the advanced identification friend or foe combined interrogator/transponder system.

To determine whether the rates and factors used to price the two MLU prime contracts were the same as those used to price U.S. government contracts, we reviewed Air Force negotiation records to identify the rates and factors used for the MLU contracts. We then compared the MLU rates and factors to those included in FPRAs and forward pricing rate recommendations in effect at the time the MLU contracts were negotiated. Where differences were identified, we determined the effect on contract prices. We performed similar work on the Honeywell subcontract. We discussed the rates and factors with contractor, Air Force, DCAA, and Defense Plant Representative Office officials.

To determine how Air Force officials used DCAA audit recommendations in negotiating prices for the prime contracts, we reviewed the DCAA preaward audit reports and recommendations. We evaluated contract negotiation records to determine how Air Force negotiators used DCAA’s work in establishing negotiation objectives and negotiating the contract prices. We discussed the use of the audit recommendations with DCAA and Air Force officials.

To determine whether subcontract and material costs included in the contract prices were fair and reasonable, we compared the pricing safeguard techniques used by the contractors with those required by the Federal Acquisition Regulation and the Defense Federal Acquisition Regulation Supplement. We verified that, when required, the contractors obtained cost or pricing data, conducted cost or price analyses, carried out negotiations with subcontractors and vendors, and obtained certificates of current cost or pricing data. We also determined whether DCAA or audit agencies of the European participating governments made audits of the subcontractor price proposals. In addition, we examined negotiation records for the subcontracts and material items and discussed them with contractor and Air Force officials.

We performed our work between May and August 1996 in accordance with generally accepted government auditing standards.
We are sending copies of this report to the Secretaries of Defense and the Air Force; the F-16 System Program Director; the Director, Defense Contract Audit Agency; the Commander, Defense Contract Management Command; and the Chief Executive Officers of Lockheed Martin and Northrop Grumman Corporations. Copies will be made available to others upon request.

If you or your staff have questions about this report, please contact me at (202) 512-4841 or David E. Cooper at (202) 512-4587. Major contributors to this report are listed in appendix IV.

Joe D. Quicksall, Assistant Director
Jeffrey A. Kans, Evaluator
Kimberly S. Carson, Evaluator
GAO reviewed the pricing of selected contracts and subcontracts awarded under the F-16 Aircraft Mid-Life Update (MLU) Program, designed to develop, produce, and install upgrades to F-16 fighter aircraft owned by Belgium, Denmark, the Netherlands, and Norway, focusing on: (1) differences between the rates and factors used to price two selected prime contracts and those used to price contemporaneous U.S. contracts; (2) how the Air Force used Defense Contract Audit Agency (DCAA) recommendations in negotiating prime contract prices; and (3) whether the prime contracts' prices for material and subcontract costs were fair and reasonable. GAO found that: (1) the prime contractors proposed and Air Force negotiators accepted rates and factors to price the two MLU contracts that were different from those used to price contemporaneous U.S. government contracts; (2) the contract prices for the European participating governments were $9.4 million higher due to the use of different rates and factors; (3) the Defense Plant Representative Office Commander certified that the forward pricing rate agreement (FPRA) rates and factors used to price the Lockheed Martin MLU contract were the same as those used to price all other contracts awarded to Lockheed Martin during the effective period of the agreement; (4) despite this certification, a special set of higher rates and factors was used to price the MLU contract rather than those called for in the FPRA; (5) for the Northrop Grumman contract, Air Force negotiators used a general and administrative overhead rate established for use in pricing foreign military sales rather than a lower domestic rate established for pricing U.S. government contracts; (6) Air Force negotiators also used two incorrect rates in pricing the MLU contract; (7) DCAA conducted preaward audits of the prime contractors' price proposals, questioned various costs, and reported large amounts of unresolved costs because audits had not been made of several subcontractor price proposals; (8) except for the rates and factors used for the Lockheed Martin contract, Air Force negotiators used DCAA's audit results to assist them in negotiating lower prices for the prime contracts; (9) Lockheed Martin and Northrop Grumman employed safeguard techniques required by U.S. procurement regulations to evaluate and negotiate subcontract and material prices for the prime contracts, and Air Force negotiators accepted the proposed and negotiated subcontract prices as fair and reasonable; (10) there are indications that material in the two prime contracts may be overpriced by as much as $947,000; (11) as for the two subcontracts selected by the European countries' Supreme Audit Institutions for review, Lockheed Martin awarded the Hazeltine subcontract competitively and the Honeywell subcontract noncompetitively; (12) in negotiating the price of the Honeywell subcontract, Lockheed Martin used rates and factors recommended by the cognizant U.S. government contract administration activity and employed the required safeguard techniques; and (13) the Air Force accepted the prices of these two subcontracts as fair and reasonable.
VA began providing formal treatment for alcohol dependency in the late 1960s and treatment for drug dependency in the early 1970s. According to VA, the guiding principle behind its national substance abuse treatment program has been the development of a comprehensive system of care for veterans. In accordance with this principle, VA has developed a network system of care that is supposed to afford veterans access to facilities offering a range of substance abuse treatment services, including inpatient, residential, and ambulatory care.

VA requires its medical centers to maintain quality assurance programs so that veterans receive quality care. Such care is defined as the degree to which health services increase the likelihood of desired health outcomes and are consistent with current professional knowledge. Quality assurance programs measure whether quality care is provided and use performance indicators to measure whether established standards have been met.

VA’s substance abuse treatment programs serve a population characterized as psychologically and economically devastated. For example, in fiscal year 1995, nearly one-half of veterans in substance abuse treatment inpatient units were homeless at the time of admission, and 35 percent had both substance abuse and one or more psychiatric disorders. In addition, veterans treated in substance abuse treatment units were chronically unemployed, had problems maintaining relationships, reported low incomes, or were criminal offenders.

In fiscal year 1995, VA treated 57,776 veterans in inpatient substance abuse treatment units and 121,812 veterans in outpatient substance abuse treatment units (see table 1). About 70 percent of these veterans were eligible for VA health care because of their low incomes rather than because of a service-connected disability. More than 50 percent of the veterans were Vietnam War-era veterans and another 25 percent served after that time. Only 6 percent of the inpatients and 9 percent of the outpatients had a service-connected disability of 50 percent or more.

Characteristics of veterans treated in inpatient and outpatient substance abuse treatment units differed somewhat from veterans treated in VA’s medical and surgical units. Veterans in the medical and surgical units were older than those in the treatment units. Their median age was about 59, compared with veterans in all substance abuse treatment units, whose median age was 43. Furthermore, more veterans in medical and surgical units were eligible for VA treatment because of their service-connected disability than were veterans being treated in substance abuse treatment units. About 34 percent of the inpatients and 47 percent of the outpatients seen in medical and surgical units had a service-connected disability, compared with 25 percent and 31 percent, respectively, for veterans in all substance abuse treatment units.

VA strives to offer a continuum of services to treat veterans nationwide with substance abuse disorders. Since fiscal year 1990, VA has used additional funds to expand the number of substance abuse treatment programs, patients treated, and staff. The additional funds, accompanied by an increased emphasis on outpatient treatment, have resulted in significantly increasing the number of outpatients served at VA medical centers.

VA operates 389 substance abuse treatment programs at more than 160 medical centers throughout the United States and Puerto Rico.
These programs include 203 inpatient or extended-care programs, 152 outpatient programs, 22 methadone maintenance clinics, 9 residential rehabilitation programs, and 3 early intervention programs. Typically, these medical centers provide a combination of treatment settings, incorporating inpatient or extended-care programs, outpatient clinics, and residential rehabilitation programs. VA provides most substance abuse programs directly. However, it does rely on some non-VA facilities, such as community residential facilities, to provide some services. Figure 1 shows the locations and types of VA substance abuse programs provided as of October 1, 1994.

Like other providers, VA uses a variety of approaches in treating veterans with substance abuse disorders. Table 2 describes the treatment approaches used in VA programs.

As part of the President’s national drug policy program, VA received $105 million annually in recurring funds in fiscal years 1990 to 1993. VA used these funds to expand substance abuse treatment services to more eligible veterans. The additional funds and emphasis on outpatient treatment resulted in significantly increasing the number of outpatients served at VA medical centers. As shown in figure 2, obligations for VA substance abuse treatment programs increased about 45 percent, from $407 million to $589 million from fiscal years 1991 to 1996.

As shown in figures 3 and 4, the number of inpatients and inpatient programs has remained fairly stable over the years; the number of outpatients and outpatient programs has grown significantly, however. According to VA, the number of inpatients served in VA substance abuse treatment units declined slightly from 58,500 to 55,200 patients in fiscal years 1988 to 1995. The number of outpatients in substance abuse treatment in those same fiscal years rose dramatically, however, from 38,300 to 68,300 patients—about a 78-percent increase. A similar trend has occurred in the number of inpatient and outpatient treatment programs. The number of inpatient programs increased from 174 to 180 (about 4 percent) between fiscal years 1991 and 1994. However, the number of outpatient programs increased from 111 to 152—about a 37-percent increase.
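A quick check of the funding and workload changes just cited, using the report's rounded figures:

```python
# Percentage changes in VA substance abuse treatment funding, patients,
# and programs, computed from the rounded figures cited above. Values
# are printed with one decimal; the report rounds to whole percentages.

def pct_change(old: float, new: float) -> float:
    """Percentage change from old to new."""
    return (new - old) / old * 100

print(f"Obligations, FY 1991-96 ($407M to $589M): {pct_change(407, 589):+.1f}%")         # about +45%
print(f"Inpatients, FY 1988-95 (58,500 to 55,200): {pct_change(58_500, 55_200):+.1f}%")  # slight decline
print(f"Outpatients, FY 1988-95 (38,300 to 68,300): {pct_change(38_300, 68_300):+.1f}%") # about +78%
print(f"Inpatient programs, FY 1991-94 (174 to 180): {pct_change(174, 180):+.1f}%")      # about +4%
print(f"Outpatient programs, FY 1991-94 (111 to 152): {pct_change(111, 152):+.1f}%")     # about +37%
```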
Traditionally, medical center directors determined the extent to which their centers offered substance abuse treatment services. This may change, however, under the VISN structure. The VISN directors, who are accountable to the Under Secretary for Health for their VISNs’ performance, are charged with providing coordinated services for all eligible veterans living within their network areas. Although VISN directors and the respective medical center directors have discussed possible changes to the substance abuse treatment programs, no changes had yet been made during the time of our study. On the basis of discussions with VA officials, however, some current programs will likely be consolidated and others will likely change focus.

VA currently lacks the necessary data to adequately measure and fully evaluate the efficacy of its many treatment programs. VA is therefore developing a new performance monitoring system, using new outcome measures, to compare treatment and program effectiveness both internally and with non-VA substance abuse treatment providers. VA’s efforts compare with outcome measurement approaches used by non-VA providers of substance abuse treatment services.

Substance abuse treatment staff at VA medical centers monitor program quality through the accreditation process and internal studies. VA medical center substance abuse treatment programs must meet the standards promulgated by the Joint Commission on Accreditation of Healthcare Organizations (JCAHO). Through its review process, JCAHO determines whether each medical center has the necessary programs in place that should result in good care. In addition, medical centers have instituted quality improvement programs, in part to satisfy accreditation requirements, using a variety of measures. The medical centers we visited track readmissions, length of stay, and patient satisfaction. At the VA medical center in Denver, for example, recidivism rates have been monitored since 1988. At a VA medical center in Chicago, discharged inpatients are monitored to determine whether they show up for outpatient follow-up care.

VA’s quality management philosophy and staffing resources have constrained the central office staff’s monitoring role. Central office officials have primarily played a consultant role on quality assurance matters. This role has been based on VA’s philosophy that, because care takes place at the medical centers, staff at the centers are the best suited to monitor their programs and take the appropriate actions to improve care. Central office officials do, however, monitor the many substance abuse treatment programs by reviewing (1) annual reports on the substance abuse treatment programs at each medical center; (2) reports on program services, staffing, and utilization from VA’s Program Evaluation and Research Center; (3) the Quality Improvement Checklist, a systemwide quality improvement tool that includes one indicator about the rate of readmission for alcohol- and drug-related disorders for patients discharged from inpatient substance abuse treatment units; and (4) the results of patient satisfaction surveys. These officials also work with staff from the Center for Excellence in Substance Abuse Treatment and Education to test models of care, help identify best practices, train students, and provide continuing education in substance abuse treatment. Except for the Center’s reviews, however, none of these reviews focuses on the outcomes of the specific treatments provided.

In November 1995, in a shift in philosophy, VA central office officials proposed a systemwide approach to quality management using a variety of performance indicators, including treatment outcome measures. Believing substance abuse to be a chronic disease that frequently recurs, VA has dropped two previously used indicators, recidivism and discharge disposition, because staff felt that these indicators did not adequately measure program success. The new indicators will rely on data currently collected but not aggregated. Three indicators relate to the number of veterans starting substance abuse treatment programs and visiting outpatient units. Two indicators compare the number of patients in and visits to outpatient substance abuse treatment units with the number of all patients in and visits to these units as well as the number of patients in all VA substance abuse treatment units as a percentage of the total number of patients in care.

In the future, VA plans to develop other performance indicators based on data not currently available to assess treatment effectiveness. These indicators will be based on data collected through a standardized data collection instrument, the Addiction Severity Index (ASI). The indicators will measure treatment outcomes that include changes in medical status, employment, alcohol use, drug use, criminal activity, family and social relationships, and psychiatric symptoms. VA is considering administering a comprehensive ASI to all patients within 3 days of entering any substance abuse treatment setting and then annually while the patient remains in treatment. An abbreviated ASI would be administered after 1 month and again after 6 months of treatment.
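A hypothetical sketch of that administration schedule follows; the function, the exact day counts standing in for "1 month," "6 months," and "annually," and the flat list of due dates are illustrative assumptions, not VA's specification.

```python
# Hypothetical sketch of the ASI administration schedule VA is considering.
# Day counts approximating the report's intervals are assumptions.
from datetime import date, timedelta

def asi_schedule(admission: date, months_in_treatment: int) -> list:
    """Return (due date, instrument) pairs for one patient's ASI administrations."""
    events = [
        (admission + timedelta(days=3), "comprehensive ASI"),   # within 3 days of entry
        (admission + timedelta(days=30), "abbreviated ASI"),    # after 1 month of treatment
        (admission + timedelta(days=183), "abbreviated ASI"),   # after 6 months of treatment
    ]
    # A comprehensive ASI is repeated annually while the patient remains in treatment.
    year = 1
    while 12 * year <= months_in_treatment:
        events.append((admission + timedelta(days=365 * year), "comprehensive ASI"))
        year += 1
    return sorted(events)

for due, instrument in asi_schedule(date(1996, 10, 1), months_in_treatment=18):
    print(due.isoformat(), instrument)
```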
Although both VA and non-VA substance abuse treatment officials agree that patient data collected through the ASI would be useful in determining the proper treatment and its efficacy, some are concerned that it may be too expensive and time-consuming to administer.

The revised performance measures will be used to evaluate individual substance abuse treatment programs and compare them with each other as well as with non-VA programs. For example, VA is already piloting a performance monitoring system developed by its Program Evaluation and Research Center. The system ranks, according to cost and utilization data, the relative performance of mental health and substance abuse units among the medical centers and 22 VISNs. To ensure that the comparisons fairly assess program performance, VA intends to account for veteran characteristics, such as other coexisting medical or psychiatric diseases, that might affect the outcome of the substance abuse treatment.

VA’s current and planned initiatives to monitor program performance compare with those used or planned by non-VA providers and managed behavioral health care organizations we contacted. For example, one large managed behavioral health company that has used outcome measures since 1993 collects information about readmission, complaints, and patient and provider satisfaction, among other data. A large local provider had no systematic outcome measurement efforts under way at the time of our study, but it would provide data for requested state or federal studies. Such data might include detoxification use, employment, housing, and treatment service use. Comparisons of VA’s programs with publicly supported non-VA substance abuse programs should be possible once VA’s various programs’ treatment outcomes are known and the data are properly adjusted to account for any differences in patient characteristics.

Non-VA substance abuse providers and programs are also available to and used by veterans. In Colorado, for instance, approximately 400 facilities that receive some public funding to treat patients with low incomes served five times the number of veterans treated at the Denver VA medical center in fiscal year 1995. The 10,000 veterans treated by state-funded facilities in Colorado represent about 18 percent of the patients seen at the facilities. Similarly, in Illinois, we found that 8,200 patients, about 8 percent of those treated in facilities receiving state funds, were veterans.

According to VA officials and officials of the non-VA programs we visited, veterans who qualify for publicly supported treatments are like those treated at the VA medical centers. For example, in Colorado and Illinois, we found that the veterans treated by state-funded providers have low incomes and high levels of unemployment; many were homeless. Moreover, the vast majority of the veterans were male—97 percent in both Colorado and Illinois—and most did not have insurance. Although non-VA providers told us they were willing to treat more veterans, they currently do not have enough staff to do so.
Therefore, these providers would need additional funding to hire staff capable of treating a significant number of low-income veterans with multiple problems.

The number and health status of eligible veterans, potential demand for substance abuse treatment services, and the cost of specific programs are just some of the data needed to determine the implications of changing VA’s service delivery methods. However, VA currently has neither this information nor the systems in place to gather it. This situation and the decisions VISN directors might make about what and where services will be offered make it difficult to estimate the effects of VA’s changing its current delivery structure.

One possible change to VA’s services you asked us to explore is VA’s reducing its substance abuse treatment program. If VA were to stop treating veterans for substance abuse, societal costs would likely increase. Researchers have indicated that the costs of treating people with substance abuse disorders tend to shift to other sectors, including welfare and other social services, other medical providers, and the criminal justice system, when people go untreated. Although we expect that many of VA’s substance abuse patients would qualify for publicly supported treatment programs if VA ended its services, VA officials told us that some veterans would surely “fall through the cracks.” These officials are concerned about the uneven distribution of care now provided through state-assisted programs and about how VA patients would fare in a managed care environment.

You asked us to look at the implications of VA’s contracting out for substance abuse treatment services instead of eliminating or reducing the number of such services. The implications of this approach to VA and the community are difficult to determine at this time. VA lacks information on the health care needs of eligible veterans, the number of veterans who might seek care if it were more accessible, the actual cost of treating such veterans, and the outcomes of specific treatments. Before contracting out substance abuse treatment services, VA would have to better understand its patients, treatment outcomes, and costs. Only then could it define a number of key contractual elements, such as the type of service delivery model preferred, the actual services it would and could afford to cover, the treatment philosophy to be employed, responsibilities for program monitoring, and the distribution of financial risks. The lack of this information limits our ability to evaluate the cost-effectiveness of contracting out program services and the implications of this action on the relative quality of services veterans might receive.

VA reviewed a draft of this report and commented that it was a fair and accurate assessment of its substance abuse program and the initiatives it has under way.

This report was prepared under the direction of Sandra Isaacson, Assistant Director; Tom Laetz; Mary Needham; and Bill Temmler. Should you have any questions, please call me at (202) 512-7111 or Sandra Isaacson at (202) 512-7174.

Stephen P. Backhus
Associate Director, Veterans’ Affairs and Military Health Care
Pursuant to a congressional request, GAO reviewed the Department of Veterans Affairs' (VA) substance abuse program and the effect of VA reorganization on this program, focusing on: (1) characteristics of veterans who receive substance abuse treatment; (2) services VA offers to veterans with substance abuse disorders; (3) methods VA uses to monitor the effectiveness of its substance abuse treatment programs; (4) community services available to veterans who suffer from substance abuse disorders; and (5) implications of changing VA methods for delivering substance abuse treatment services. GAO found that: (1) in fiscal year 1995, VA substance abuse treatment units served about 180,000 veterans; (2) about one half of the inpatients were homeless at the time of admission and about one third had psychiatric disorders; (3) many of these veterans were chronically unemployed, had problems maintaining relationships, reported low incomes, or were criminal offenders; (4) VA provides a variety of treatment settings and approaches; (5) between fiscal years 1991 and 1996, VA funding for treatment increased from $407 million to $589 million to accommodate growth in the substance abuse treatment program; (6) VA lacks the necessary data to adequately measure and fully evaluate the efficacy of its many treatment programs and has primarily relied on utilization information and recidivism rates to monitor the quality of its substance abuse treatment programs; (7) VA is developing a performance monitoring system based on treatment outcome measures; (8) numerous non-VA substance abuse treatment programs are also available to and used by veterans; (9) many veterans treated in community-based public programs are like those treated in VA programs; (10) if VA stopped treating veterans for substance abuse, resulting societal costs may shift to welfare or other social services, other federal or state substance abuse treatment programs, and the criminal justice system; (11) VA cannot ascertain the implications of contracting for these services, since it lacks critical information on the health care needs of eligible veterans, the number of veterans who might seek care, and actual cost of treating veterans with substance abuse disorders; and (12) VA officials have not decided how substance abuse treatment services will be delivered and what outcome measures will be used to evaluate treatment and program effectiveness.
Prior to restructuring, the electricity industry in California was organized around three regulated monopoly utilities, which were responsible for ensuring that electricity demand and supply were balanced at all times in order to maintain a reliable electricity system. The utilities owned and operated the electricity generating facilities as well as the electricity transmission system (i.e., the actual wires that carry electricity from generators to final consumers). The utilities sold electricity to consumers at prices determined by the state’s Public Utilities Commission—a state regulatory agency. Charges to cover the costs of generating the electricity as well as the costs of maintaining and operating the transmission system were included in the retail electricity prices set by the commission. Utilities were allowed to earn a “normal rate of return” on all approved capital expenditures required to build generating facilities and the transmission system itself.

Seeking to improve efficiency and reduce electricity prices, California began restructuring its electricity market during the 1990s. As part of the state’s restructuring plan, the utilities were encouraged to sell much of their generating capacity to private companies. This divestiture was intended to increase the number of competitors in the wholesale electricity market. The plan also set up the California Independent System Operator (CAISO), a private nonprofit corporation charged with managing the transmission system in the state and balancing demand and supply to ensure reliability of the system. Under the plan, the utilities would still own some generating capacity and own and maintain the transmission system.

In the restructured market, which formally opened on April 1, 1998, private generators were able to sell electricity to the utilities through the newly created California Power Exchange in daily and hourly auctions. The power exchange was intended to be the primary market for wholesale electricity sold in the state. To ensure that the power exchange was a competitive market with many suppliers, the Public Utilities Commission required the utilities to sell their remaining generating capacity into the power exchange market. The Public Utilities Commission also limited the utilities’ ability to enter into long-term contracts to purchase electricity wholesale, which effectively required them to purchase almost all of their electricity needs from the power exchange. As a result of these actions, most electricity purchases occurred in the short term—generally, at most one day ahead of when the electricity was needed.

Under restructuring, retail prices were frozen, while wholesale prices were to be determined by market conditions of demand and supply. In an attempt to ensure that consumers received some immediate benefits from restructuring, California’s restructuring legislation required retail prices be frozen for up to 4 years at a level 10 percent below the prices that were in effect immediately prior to restructuring. Policy makers anticipated that the reduced retail prices would be higher than wholesale prices and would therefore allow the utilities to continue to recover costs they incurred in the old regulated market and that had not yet been recouped. Wholesale prices were determined in the power exchange market, and the CAISO bought some electricity near the last minute to maintain a precise balance between demand and supply.
FERC’s authority was unchanged; the agency continues to monitor the functioning of wholesale markets and retains responsibility for ensuring that wholesale prices are just and reasonable.

For the first two years of California’s restructured market, overall wholesale prices were fairly low, averaging about $33 per megawatt-hour (MWh) compared with the frozen retail prices, which were set at about $65 per MWh. However, overall wholesale prices rose significantly in May 2000 and remained very high through May 2001. These overall prices reached an all-time peak in December 2000 of $317 per MWh. Figure 1 shows the monthly average overall prices since April 1998, when the restructured market began operating, through February 2002.

As we reported in June 2001, average wholesale prices of electricity sold through the power exchange during the months of May through December 2000 were between 2 and 13 times higher than prices in the same months of the previous year. In addition, there were frequent periods, especially in the winter of 2000-2001, when the electricity system was in danger of service disruptions, and there were a number of days when rolling blackouts occurred. During this period, two of the state’s three major utilities became insolvent and were unable to pay for their purchases of electricity. The California Power Exchange ceased doing business in January 2001 and later declared bankruptcy, as the state assumed responsibility for buying electricity on behalf of the utilities. (See app. II for a timeline of key events occurring in the California electricity market.)

Various factors have been cited as contributing to California’s high electricity prices. Increased demand for electricity combined with a shortage of supply created increased scarcity beginning in May 2000. As we noted in our June 2001 report, demand for electricity rose by as much as 13 percent from 1995 through 2000, while supply growth did not keep pace. Imported electricity from the Pacific Northwest, which California depended upon to meet its needs, was less available because an extremely dry winter in 2000 had reduced hydropower generation. In addition, imports from southwestern states were less available because higher-than-normal temperatures increased electricity demand in those states.

Costs of generating electricity rose in 2000. In particular, prices of natural gas—the fuel used to generate as much as 40 percent of California’s electricity—rose in 2000 compared to 1998 and 1999 prices. The costs of emissions permits, which some generating plants are required to own in order to operate, also rose in 2000. California state officials and others cited the exercise of market power by suppliers as a cause of the dramatically higher prices.

California officials attempted to mitigate high wholesale electricity prices in several ways. The CAISO attempted to control wholesale price increases by using a price cap to limit the maximum price it would pay for electricity it purchased. During the summer of 2000, this maximum price was lowered twice: from $750 to $500 per MWh in July and again from $500 to $250 in August. Despite the caps, overall wholesale electricity prices remained higher than in the previous two years. As a result, California requested assistance from FERC.
In December 2000, FERC implemented its own mitigation strategy that capped wholesale prices at $150 per MWh, but allowed suppliers to charge higher prices if they could demonstrate to FERC that their costs of generating the electricity exceeded the price cap. Even after FERC’s actions, prices remained much higher than normal through May 2001. In June 2001, FERC implemented a region-wide price cap that effectively limited prices to a maximum of about $92 per MWh in all western states. The state also took steps to expedite the siting of new power plants, promote energy conservation (in part by raising retail prices), and negotiate long-term contracts with electricity suppliers.

In June 2001, overall wholesale prices fell dramatically and continued to decline for several more months, eventually dropping to about $40 per MWh as of December 2001. Numerous reasons have been advanced for these decreasing prices, including the price mitigation efforts by the state and FERC. In addition, demand was lower because of moderate weather conditions, and electricity-generating costs fell due to lower costs for purchasing natural gas.

Despite the current moderate electricity prices and estimates of enough electricity to meet the state’s needs for the summer of 2002, California’s electricity market faces an uncertain future for a number of reasons: (1) FERC’s region-wide price caps are scheduled to expire September 30, 2002; (2) CAISO is still in the process of redesigning the electricity market and will seek approval from FERC for its new design; and (3) California officials are attempting to renegotiate many of the long-term contracts they signed with wholesale electricity suppliers in early 2001 because the prices they negotiated are much higher than current market prices. Further, as we recently reported, many proposals for new power plants in California have been cancelled because of factors such as the national economic slowdown; lower electricity prices; and the increased risk of entering a market where the market design and rules are uncertain.

Our analysis found that electricity suppliers exercised market power by raising prices above competitive levels during some periods after the restructured market opened. In particular, we found that in parts of 2000, electricity prices did not follow the usual pattern of rising during the high-demand hours and falling during low-demand hours—rather, the highest prices were not found in the hours of highest demand. In addition, numerous studies conducted by prominent economists and other industry analysts also found evidence that individual suppliers exercised market power by raising their prices above competitive levels during certain periods. In explaining the high prices, the studies pointed to other factors as well, such as environmental constraints on some generators, higher fuel costs, and a generally tighter supply-demand balance, which increased suppliers’ costs and contributed to relatively scarce supply during 2000. Table 1 summarizes our findings and the results of the other studies.

To determine whether there was evidence that wholesale electricity suppliers exercised market power in California, we evaluated data from August through October 1998—a period of relatively low wholesale prices—to establish a competitive baseline relationship between wholesale electricity prices and the level of demand. We selected this baseline period because previous studies indicated that prices during the period were, for the most part, competitive.
Then we compared the baseline to the period from August through October 2000, when wholesale prices were on average much higher, to determine whether the pattern of prices was consistent for comparable situations in the two periods. Our analysis used price data from the power exchange market and demand data from CAISO.

Under competitive conditions, prices of electricity are expected to follow a pattern in which high prices correspond to hours of the day in which demand for electricity is high and low prices correspond to low-demand hours. This pattern occurs because competitive electricity prices reflect the changing costs of producing electricity. The competitive price of a megawatt-hour of electricity is equal to the additional amount it would cost to generate an additional megawatt-hour, once all current demand is met. This additional cost is commonly referred to as the marginal cost. The marginal cost of generating electricity rises as more electricity is produced, because different generators use different types and amounts of fuel. For example, hydroelectric and nuclear generating plants have very low fuel costs, while natural-gas-burning plants have higher fuel costs. Generating plants with low marginal costs generally operate during more hours of the day than those with higher marginal costs—the highest-cost plants operate only during the very highest demand hours and may even sit idle most of the year. Therefore, under competition, the rising marginal cost of electricity leads to high prices when demand is high and low prices during low-demand periods.

Figure 2 shows actual average demand and prices for different hours of the day from August through October 1998, the period that we used as our baseline. As the figure shows, prices are generally lower during low-demand hours and higher during high-demand hours. Figure 3 illustrates the price and demand patterns we observed during the baseline period compared to those for the period from August through October 2000.

In comparing this baseline relationship to the prices and demand observed from August through October 2000, we found that average prices were much higher during the 2000 period than in the baseline. Other studies attributed part of this increase in prices to the exercise of market power. In addition, we found that the relationship between prices and demand observed during the 2000 period was not consistent with what would be expected under competitive conditions. Specifically, during the period analyzed, average prices during the heaviest demand hours were actually lower than in surrounding, lower-demand hours. Figure 3 shows that the hours of highest demand—1 p.m. through 4 p.m. and 5 p.m. through 8 p.m.—did not correspond to the highest average price, as would be expected under competitive conditions. Instead, the highest average prices came in the lower-demand hours of 9 a.m. through 12 p.m. and 9 p.m. through 12 a.m. For example, in the highest demand hours, 1 p.m. through 4 p.m., demand averaged about 33,300 MWh, and the price averaged about $164 per MWh. In contrast, during the hours of 9 p.m. through 12 a.m., demand averaged about 27,600 MWh and the price averaged about $182 per MWh.
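A minimal sketch of this hour-of-day comparison follows. The input file and column names are assumptions for illustration; the actual analysis used California Power Exchange price data and CAISO demand data.

```python
# Sketch of the hourly price/demand profile comparison described above.
# Assumes hourly observations with timestamp, price ($/MWh), and
# demand (MWh) columns; file and column names are illustrative.
import pandas as pd

df = pd.read_csv("ca_hourly_prices_demand.csv", parse_dates=["timestamp"])

# Keep August through October of the baseline year (1998) and study year (2000).
df = df[df["timestamp"].dt.month.isin([8, 9, 10])]
df["period"] = df["timestamp"].dt.year.map({1998: "baseline", 2000: "study"})
df = df[df["period"].notna()]

# Average price and demand for each hour of the day within each period.
df["hour"] = df["timestamp"].dt.hour
profile = df.groupby(["period", "hour"])[["price", "demand"]].mean()

# Under competition, the highest-demand hours should also show the highest
# average prices; the report found that in 2000 they did not.
for period in ("baseline", "study"):
    hours = profile.loc[period]
    print(period,
          "| peak-demand hour:", hours["demand"].idxmax(),
          "| peak-price hour:", hours["price"].idxmax())
```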
We discussed the patterns of prices with CAISO staff and reviewed studies to try to explain why the prices did not follow the expected pattern in August through October 2000. According to CAISO staff, during this period, some suppliers increased the proportion of their electricity generation that they sold directly to CAISO, as opposed to selling in the power exchange as the market design intended. CAISO staff explained that, during this period, CAISO purchased some electricity at the last minute at prices above the prevailing price cap in order to keep demand and supply in balance and avoid blackouts. As a result, staff said, in-state suppliers withheld some of their electricity from the power exchange market and waited to sell at higher prices at the last minute.

One recent study of the California electricity market concluded that suppliers knew that CAISO was unwilling to allow blackouts, even when prices were very high. Therefore, the study concluded, the price cap created a game of “chicken” between suppliers and CAISO. Suppliers would wait until the last minute to sell their electricity, in an attempt to see how much the increasingly desperate CAISO would pay.

The supplier behavior described by CAISO staff and in studies we reviewed was consistent with the exercise of market power, because the prices charged did not reflect the marginal costs of generating additional megawatt-hours of electricity. Rather, the behavior reflected an ability to charge higher prices by waiting to commit the generation to a time when buyers were willing to pay more. Other studies we reviewed analyzed the marginal costs of generating electricity during this period and concluded that these costs were well below the market prices we observed.

In addition to discussing our findings with CAISO staff and reviewing other studies, we examined other market data to try to explain the unexpected pattern of demand and prices observed from August through October 2000. We found that the supply of electricity was scarcer during that period than during the baseline period, owing in large part to a reduction in available imports of electricity from other western states. While greater scarcity can explain generally higher prices, it does not explain the pattern of prices we found. Even when electricity becomes increasingly scarce, prices should be higher during high-demand periods than when demand is lower, unless suppliers can, through the exercise of market power, affect prices in lower-demand periods. Therefore, based on our analysis and our review of other studies, we believe this pattern of prices demonstrates that suppliers exercised market power by raising prices above their marginal cost. Because we did not have specific cost data and could not accurately measure the role of scarcity in determining prices, we were unable to isolate the relative role of market power in causing the high prices found during 2000.

The authors of the other studies discussed in table 1 found that suppliers exercised market power by raising prices above marginal costs of generating electricity. Although these authors used different methodologies and studied varying time frames, they all reported that the exercise of market power was a key factor contributing to higher prices in the California market. The reported effect that market power had on prices varied: while the California State Auditor made no specific estimate, the authors of the Berkeley/Stanford study attributed as much as 51 percent of the price increases in the summer of 2000 to market power.
The CAISO-B study concluded that $6.2 billion in higher electricity prices resulted from the exercise of market power by electricity suppliers during May 2000 through February 2001. Additional details of these studies are presented in appendix III.

The authors of the studies reported that other factors besides market power, such as increased production costs and a tight supply-demand balance, also contributed to higher electricity prices in California. Some of the authors noted that higher production costs contributed to higher prices in California during 2000 compared with earlier years. These costs included costs to purchase natural gas, which had increased in price, and costs of emissions permits required for some generators to allow them to operate. Some of the authors reported that tight demand and supply balances also affected prices. Demand increased because of unusually hot weather, while supply was scarcer because of the reduced availability of electricity imports from other states and a lack of new electricity generation in California in the preceding years.

California’s market design enabled wholesale electricity suppliers to exercise market power. According to prominent experts and analysts, two principal market design flaws increased suppliers’ ability to raise prices above competitive levels: (1) retail prices were frozen, and (2) with few exceptions, the Public Utilities Commission limited utilities’ ability to enter into long-term contracts with suppliers. In addition, we found that California’s market design lacked effective price mitigation strategies to be used once exercise of market power was suspected.

A provision of California’s restructuring legislation froze retail prices for consumers for 4 years or until the utilities recovered certain costs incurred under the prior regulated market. Numerous authors of studies of the California electricity market, including those studies discussed previously in this report and others, noted that the retail price freeze meant that consumers in California did not reduce their use of electricity when prices began to rise in May 2000. Economists and other market design experts commonly recognize that such insensitivity to price changes is a key factor that enables suppliers to raise prices above competitive levels under tight supply conditions. Therefore, the frozen retail prices in California created a situation in which suppliers could charge high prices during some periods without worrying that consumers would reduce their use of electricity.

With few exceptions, the California Public Utilities Commission severely limited the utilities’ use of long-term contracts until after electricity prices increased in the summer of 2000. The Congressional Budget Office notes that in California, as much as 50 percent of electricity purchases occurred immediately before electricity was needed to meet demand, compared to 10 to 20 percent in other states that had restructured their electricity markets. Economists and other market design experts recognize that when suppliers have signed long-term contracts to sell much of their capacity at pre-determined prices, they have a much smaller incentive and ability to exercise market power. For example, authors of a study of the June 2000 price increases in California concluded that if the utilities had signed long-term contracts for their expected demand for the months of May and June 2000, average prices in the power exchange would have been significantly lower.
FERC also reported in November 2000 that flawed market rules, especially frozen retail prices and limited long-term contracts, contributed to unusually high prices in the summer of 2000 in California. These studies and others concluded that the absence of such contracts between California's utilities and wholesale suppliers created conditions under which these suppliers could and would exercise market power. In the course of our analysis, we found that the CAISO's use of price caps was ineffective in mitigating high prices and bringing them down to competitive levels in 2000. Other studies and expert opinion also concluded that these price caps did not work, in part because they applied only to the state of California: when prices in surrounding states were higher than the CAISO's price cap, wholesale suppliers naturally tried to sell where prices were highest, which led to problems getting needed electricity into California. Our statistical analysis indicates that the CAISO price caps were ineffective in bringing prices down; in fact, when the CAISO lowered the price cap from $750 to $500 per MWh and again to $250, average prices rose. Specifically, prices during May and June, when the $750 price cap was in place, averaged about $93 per MWh. During the period in which the $500 cap was in place, July 1 through August 6, prices rose, averaging about $143 per MWh. When the price cap was lowered again to $250, prices rose again, averaging about $164 per MWh from August 7 through October 31. Our analysis does not allow us to say whether the price caps caused the increase in average prices or, if so, explain why that happened, but it is clear that they were not effective in bringing prices down to competitive levels. We reviewed studies and interviewed experts to try to determine why the price caps were not effective. There was general agreement that one major flaw in the design of the price caps was that they did not apply to the entire western region. As one expert put it, "California is part of a larger western electricity market, and as a result, the CAISO price cap created an incentive for suppliers to sell electricity outside of California whenever prices were higher in surrounding states." Another study concluded that the implementation of the price cap was also flawed. The author of the study noted that the CAISO did not commit to keeping a firm price cap, because it was unwilling to impose blackouts on customers even when prices increased a great deal. As a result, the author said, the CAISO was put in a very weak position when it came to negotiating prices for electricity at the last minute, and suppliers were able to drive prices above the cap level. The CAISO told us that, as part of the design of California's restructured electricity market, it had limited authority to mitigate high prices when it found they were caused by the exercise of market power. This authority was largely limited to imposing price caps on what the CAISO would be willing to pay for electricity from in-state suppliers. These caps did not apply to electricity purchased from out-of-state suppliers. Moreover, if prices outside the state rose above the California price cap, then in-state suppliers would have an incentive to export electricity, thereby making electricity scarcer and placing a greater burden on the CAISO to purchase more electricity at the last minute to balance demand and supply, sometimes at prices above the price cap. As a result, capping prices in the state was ineffective in bringing down total expenditures on electricity, as the simple comparison sketched below illustrates.
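As an illustration only, the comparison of average prices across the three cap regimes reduces to a simple aggregation over hourly data. The following is a minimal sketch, assuming a hypothetical file of hourly power exchange prices; the file name and column names are placeholders, not the actual CAISO or power exchange data layout.

```python
# Minimal sketch: average power exchange price under each price-cap regime
# in summer and fall 2000. The CSV file and its "timestamp" and "price"
# columns are hypothetical placeholders.
import pandas as pd

prices = pd.read_csv("px_hourly_prices.csv", parse_dates=["timestamp"])

def cap_regime(ts):
    # Regime boundaries follow the dates discussed in the text.
    if ts < pd.Timestamp("2000-07-01"):
        return "$750 cap (through Jun 30)"
    if ts < pd.Timestamp("2000-08-07"):
        return "$500 cap (Jul 1 - Aug 6)"
    return "$250 cap (Aug 7 - Oct 31)"

summer = prices[(prices["timestamp"] >= "2000-05-01")
                & (prices["timestamp"] <= "2000-10-31")].copy()
summer["regime"] = summer["timestamp"].apply(cap_regime)

# The report found these averages rose from about $93 to $143 to $164 per
# MWh as the cap was successively lowered.
print(summer.groupby("regime")["price"].mean().round(2))
```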
FERC implemented its own mitigation plan in December 2000, which reduced the price cap from $250 to $150 but allowed sellers to receive higher prices for their electricity if they could justify the higher prices by demonstrating that their costs of generating or acquiring the electricity were higher than $150 per MWh. As mentioned previously in this report, prices remained relatively high throughout the winter of 2000 and 2001 despite FERC's mitigation efforts. While it would appear that FERC's December mitigation plan was not effective in bringing prices down to competitive levels, there were other confounding changes in the market environment, including sharp increases in natural gas prices and increasing financial difficulties of the state's three largest utilities, that make it difficult to isolate the impact of FERC's actions on prices. Therefore, we were unable to evaluate the effectiveness of FERC's mitigation strategy. As discussed in this report, a number of factors caused electricity prices to rise in California in the summer of 2000 and at other times since restructuring. Based on our analysis and studies by prominent economists and other market analysts, the exercise of market power by wholesale suppliers was clearly one of the factors explaining the high prices. Further, the design of the California electricity market created almost textbook conditions under which market power would be expected to exist. As a result, electricity suppliers could withhold electricity from the market until it was critically needed and, at that time, could raise prices above competitive levels. Attempts by the CAISO to mitigate the resulting high prices during 2000 were unsuccessful because of inadequacies in the design and implementation of the mitigation strategies. This experience in California highlights the importance of properly designing competitive electricity markets and the need for effective mitigation when restructured markets do not perform as expected. To determine whether wholesale suppliers of electricity exercised market power, we examined and analyzed market data on generation, demand, and prices of electricity in California from April 1998 through October 2000. We did not analyze the period after October 2000 because many changes to the market beginning in November 2000 made it difficult to determine what competitive prices should be. Among these changes were sharp increases in natural gas prices, increasing financial difficulties for the state's two largest utilities, and the eventual closure of the power exchange. The data we used came from the California Power Exchange, CAISO, and the California Energy Commission. We performed statistical analyses to determine whether there were changes in the pattern of prices across different levels of demand during the summer and early fall of 2000. We also assessed other possible explanations for the high electricity prices experienced during 2000, including increased scarcity of supply, higher than normal demand, natural gas fuel costs faced by sellers, and the reduced availability of imports of electricity from other states. Appendix I contains a complete discussion of our methodology and analysis. In addition, we evaluated numerous other studies to determine what other analysts had concluded about the existence and extent of market power in the California electricity market.
In particular, we focused on five studies that covered a range of time periods and methodological approaches and that directly addressed at least one of our objectives. A full bibliography of the studies we reviewed is contained in appendix IV. We did not have sufficient data to evaluate whether individual companies exercised market power, or to determine how much of the high prices experienced in California was the result of market power versus other factors that may have led to tighter demand and supply balances or to higher costs of generating electricity during this period. To determine what role, if any, the design of California's market played in facilitating suppliers' ability to exercise market power, we evaluated the CAISO's price mitigation efforts. Specifically, we performed a statistical analysis of the relationship between prices and levels of demand, controlling for the various price caps imposed by the CAISO. We compared the periods in 2000, during which the CAISO lowered its price caps twice and prices were generally high, with earlier periods in 1998 and 1999, during which prices were generally lower. We also reviewed numerous studies by academics, industry analysts, and government agencies. Further, we interviewed academics, industry experts, industry participants, and officials from state and federal government, including the CAISO, California Energy Commission, Electricity Oversight Board of California, California Public Utilities Commission, and FERC. In addition, where applicable, we applied established economic concepts and theories to predict the likely effects of the CAISO's market power mitigation plan on prices and the supply of electricity. We also evaluated FERC's market power mitigation plan, implemented in December 2000. Data limitations precluded us from evaluating the effectiveness of the FERC plan, but we reviewed academic studies that discussed FERC's mitigation methodology. We conducted our work from July 2001 through May 2002 in accordance with generally accepted government auditing standards. As arranged with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. At that time, we will send copies to appropriate congressional committees, the Federal Energy Regulatory Commission, the Director of the Office of Management and Budget, and other interested parties. We will make copies available to others on request. If you or your staff have any questions about this report, please contact me at (202) 512-3841. Key contributors to this report are listed in appendix V. This appendix provides a detailed discussion of the analysis we used to determine whether market power existed in California's electricity market in 2000 and also to estimate the impact of the CAISO's changes to price caps in July and August 2000. We conducted econometric analyses using data on market prices in the forward electricity market operated by the California Power Exchange and demand data from the California Independent System Operator (CAISO). We also evaluated data on imports, reserve electricity purchases, and other purchases made at the last minute by the CAISO to balance demand and supply. In summary, the results of our econometric analysis are inconsistent with the absence of market power and are not fully explained by other factors, such as increased scarcity of supply or increases in the costs of generating electricity.
Therefore, we have concluded that market power played a role in the high prices experienced in California in 2000. Because we do not have sufficient data to allow us to measure costs or scarcity with precision, we could not estimate the extent to which the high prices were attributable to these factors versus market power on the part of wholesale suppliers. We also found that the price caps imposed by the CAISO in an effort to mitigate market power were ineffective in reducing average prices. In particular, we found that when price caps were reduced in the summer of 2000 (from $750 per MWh down to $500 on July 1 and from $500 to $250 on August 7), average prices actually rose. Other studies and experts we interviewed pointed out flaws in the design and implementation of these price caps that likely caused them to be ineffective. In addition to our own analysis, we interviewed economists and other industry experts to get their views on our methodology and results. Our findings are consistent with other studies by academics, industry experts, and staff of the CAISO. To determine whether the high wholesale electricity prices in summer 2000 were consistent with expected behavior in the absence of market power, we developed an econometric model to estimate the relationship between prices in the power exchange market and variables expected to influence the price, such as the quantity supplied, time of day, and price cap regulation. In this model, we estimated a regression in which the quantity supplied is divided into two parts: the part served by in-state generation and the part served by imports. In general, a greater quantity supplied is predicted to lead to higher prices. However, in discussions with experts on the California electricity market, we were told that the availability of imports has a large impact on prices in the state. In particular, imports have a damping effect on prices, because the costs of generating electricity are lower in surrounding states than in California. In addition, they said that the highest-cost suppliers during most hours are in-state generators, and it is these suppliers that set the market price for California. Therefore, we expect to find a negative relationship between imports and price and a positive relationship between in-state generation and price. The model included dummy variables for periods of time during which the CAISO had imposed price caps. We expected the price cap variables to have a negative or insignificant impact on prices based on the theory of supply and demand. We also included dummy variables for different years, months, days of the week, and hours of the day, to account for unobserved variations in demand and costs over time. Finally, we included squared and cubed in-state generation terms to account for a possible non-linear relationship between price and the quantity supplied by in-state generation. The final form of our regression is shown in the following equation:

$$
\text{price}_t = \beta_0 + \beta_1\,\text{gen}_t + \beta_2\,\text{gen}_t^2 + \beta_3\,\text{gen}_t^3 + \beta_4\,\text{netimports}_t + \boldsymbol{\gamma}'\,\text{pcaps}_t + \boldsymbol{\delta}'\,\text{timeperiods}_t + \varepsilon_t
$$

In the equation, price is the hourly price set in the power exchange market in an auction held one day ahead of when the electricity will be generated and consumed. "Gen" refers to in-state generation in MWhs, and "netimports" is the amount of electricity imported, minus the amount exported, also in MWhs. "Pcaps" is a vector of dummy variables for the various price caps that existed in California between April 1, 1998, and the end of our analysis period, October 31, 2000. "Timeperiods" is a vector of dummy variables for year, month, day of the week, and hour of the day.
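As an illustration only, a specification of this form could be estimated with the statsmodels formula interface roughly as follows. The data file and column names (price, gen, netimports, cap_regime) and the construction of the time dummies are assumptions about the data layout, not the actual data set used for this analysis.

```python
# Sketch of the regression specification, assuming hypothetical hourly data
# with columns: price, gen (in-state generation), netimports, cap_regime
# (label of the price cap in effect), and a timestamp.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("ca_hourly_market.csv", parse_dates=["timestamp"])
df["year"] = df["timestamp"].dt.year
df["month"] = df["timestamp"].dt.month
df["dow"] = df["timestamp"].dt.dayofweek
df["hour"] = df["timestamp"].dt.hour

# In-state generation enters linearly, squared, and cubed; price caps and
# time periods enter as dummy variables via the C() categorical operator.
formula = ("price ~ gen + I(gen**2) + I(gen**3) + netimports"
           " + C(cap_regime) + C(year) + C(month) + C(dow) + C(hour)")
results = smf.ols(formula, data=df).fit()
print(results.summary())
```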
The results of the ordinary least squares regression are shown in table 2. A dummy variable takes a value of 1 if a certain characteristic is present and a value of 0 otherwise. It is important to note that we are not estimating either a demand or a supply relationship.

[Table 2: Ordinary least squares regression results. Explanatory variables: constant; in-state generation and its squared and cubed terms; net imports; dummy variables for the $250 price cap (1998-1999), the $750 price cap, the $500 price cap, and the $250 price cap (2000); and dummy variables for year (1999, 2000), month, day of the week, and hour of the day.]

The regression estimates indicate a positive relationship between price and in-state generation and a negative relationship between price and net imports, as expected. The relationship appears to be somewhat non-linear, although the non-linear terms are small in magnitude. The initial price cap of $250 per MWh, which was in place from July 18, 1998, through September 30, 1999, had no significant impact on prices. In addition, the $750 price cap appeared to have only a weak impact on price, if any. However, lowering the price caps from $750 to $500 per MWh and again from $500 to $250 was associated with increases in average prices. This result is inconsistent with what would be expected under normal conditions, where a price cap can only cause prices to fall, and then only during periods when the cap is lower than the market price. In addition to the least-squares regression results reported, we also performed various checks for robustness. First, we calculated standard errors for the regression coefficient estimates that are robust to heteroscedasticity (White, 1980), and to serial correlation and heteroscedasticity (Newey and West, 1987). The statistical results did not qualitatively change from the reported regression in that lowering the price caps from $750 to $500 per MWh, and then from $500 to $250, resulted each time in a statistically significant increase in the market-clearing price. Second, because the residuals of the least-squares regression indicated first-order serial correlation, we estimated the regression controlling for the serial correlation and obtained qualitatively similar results using both least-squares standard errors and robust (White, 1980) standard errors. Third, we analyzed the impact of extreme observations on the regression results because least-squares models estimate the expected value of the market-clearing price, and large deviations from the mean can disproportionately affect the coefficient estimates. We estimated a bounded influence regression (Krasker, Kuh, and Welsch, 1983) in which extreme outlying observations are down-weighted; the results of this regression were similar to the least-squares regression, although the $250 price cap had a much larger positive effect on the market price. We also estimated a quantile regression in which the median of the market price was modeled, because the median is not sensitive to extreme outliers (see Judge et al., 1985, ch. 20); the results of the median regression were similar to the least-squares regression results, although the magnitude of the $250 price cap coefficient was again larger.
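The robustness checks described above have direct statsmodels counterparts. The sketch below rebuilds the same hypothetical data frame and formula as in the previous sketch and is illustrative rather than a reproduction of the analysis; the 24-hour lag length for the Newey-West errors is an assumption.

```python
# Robust standard errors and a median regression for the same specification.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("ca_hourly_market.csv", parse_dates=["timestamp"])
df["year"], df["month"] = df["timestamp"].dt.year, df["timestamp"].dt.month
df["dow"], df["hour"] = df["timestamp"].dt.dayofweek, df["timestamp"].dt.hour
formula = ("price ~ gen + I(gen**2) + I(gen**3) + netimports"
           " + C(cap_regime) + C(year) + C(month) + C(dow) + C(hour)")
model = smf.ols(formula, data=df)

# White (1980) heteroscedasticity-robust standard errors.
white_fit = model.fit(cov_type="HC0")
# Newey-West (1987) errors, robust to serial correlation as well;
# the 24-hour lag length here is illustrative.
hac_fit = model.fit(cov_type="HAC", cov_kwds={"maxlags": 24})

# A median (quantile) regression is insensitive to extreme price spikes.
median_fit = smf.quantreg(formula, data=df).fit(q=0.5)
print(median_fit.params.filter(like="cap_regime"))
```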
On balance, the regression results indicate that when price caps were lowered in 2000, average prices rose, a result that is potentially inconsistent with competitive conditions but that does not by itself demonstrate the existence of market power. Average prices could have risen because of changes in other factors that influenced electricity prices and coincided with the lowering of the price caps but that are not accounted for in the regression model. For example, the increase in prices could have been caused by increases in costs that coincided with the lowering of the price caps. To determine whether there was evidence of market power, we had to explore other possible explanations for the unexpected regression results. In order to explore the possibility that market power was being exercised, we focused our attention on the period from July 1, 2000 (the day the $500 price cap was implemented) through October 31, 2000 (the last date of our regression analysis). This period encompassed August 7, 2000, the date the $250 per MWh price cap was implemented. As discussed in this report, prices did not follow the pattern expected under competitive conditions during the period of the $250 price cap. In an attempt to explore other possible explanations for the observed pattern of prices, we compared two periods: the first from July 1 through August 6, 2000, during which the price cap was set at $500 per MWh, and the second from August 7 through October 31, 2000, during which the cap was $250. In particular, we evaluated the increase in average prices and the change in the pattern of prices that occurred when the price cap was lowered from $500 to $250 per MWh. Figure 4 shows prices and total electricity demand in the two periods in which the price caps were $500 and $250 per MWh, respectively. While lowering the price cap was associated with falling prices in the two highest-demand periods, it was also associated with rising prices in all other periods (a sketch of this comparison appears below). As discussed in this report, the increased prices in the lower-demand periods are not consistent with competitive pricing if all other factors are held constant. However, we cannot conclude directly from this pattern that market power was the cause of the increases in prices or the changes in the pattern of prices. Therefore, we evaluated numerous variables, including imports of electricity from surrounding states, last-minute purchases of electricity by the CAISO to balance demand and supply, prices of natural gas (the principal fuel used by many of the in-state generators), total demand, and total in-state generation. We found that these variables were all changing over time, but in ways that should have led to lower rather than higher prices or that were of insufficient magnitude to explain the price increase. For example, imports and net imports (imports minus exports) of electricity into California were higher during low- and high-demand hours in the $250-price-cap period than in the $500-price-cap period. Our regression results indicate that higher levels of net imports are associated with lower prices. Therefore, changes in net imports are unlikely to explain the increase in prices during low-demand hours after the price cap was lowered to $250 per MWh. In addition, levels of in-state generation were also lower during low- and high-demand hours in the $250-price-cap period than when the cap was $500 per MWh. Our regression results indicate that lower levels of in-state generation are associated with lower prices. Therefore, levels of in-state generation are unlikely to account for the increases and the change in the pattern of prices after the cap was lowered to $250 per MWh.
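The pattern comparison in figure 4 can be sketched on the same hypothetical hourly data, with an added demand column: hours are grouped into demand deciles within each cap period, and mean prices are compared decile by decile. Under competition, lowering a binding cap should not raise prices in low-demand hours.

```python
# Compare the price-demand pattern under the $500 and $250 caps.
# The file and column names ("demand", "price") are hypothetical.
import pandas as pd

df = pd.read_csv("ca_hourly_market.csv", parse_dates=["timestamp"])
periods = {
    "$500 cap": df[(df["timestamp"] >= "2000-07-01")
                   & (df["timestamp"] <= "2000-08-06")],
    "$250 cap": df[(df["timestamp"] >= "2000-08-07")
                   & (df["timestamp"] <= "2000-10-31")],
}
for label, period in periods.items():
    period = period.copy()
    # Rank hours into ten equal-sized demand bins within each period.
    period["demand_decile"] = pd.qcut(period["demand"], 10, labels=False)
    print(label)
    print(period.groupby("demand_decile")["price"].mean().round(1))
```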
Similarly, total demand was lower during low- and high-demand hours in the $250-price-cap period. As discussed in this report, lower levels of total demand are predicted to be associated with lower prices. Therefore, levels of total demand cannot account for the observed price changes. Last-minute purchases of electricity to balance the system were relatively unchanged over the two periods. In particular, purchases of regulation electricity, spinning reserves, and non-spinning reserves were similar in both low- and high-demand hours in the two periods. Among the variables affecting electricity prices was the price of natural gas, which rose over this period. While this increase would be expected to cause electricity prices to rise under competitive conditions, prices would likely rise in proportion to the change in natural gas prices. However, electricity prices during low-demand hours rose by much more, proportionally, than did average natural gas prices, which means that gas prices are not likely to fully explain the increase in prices during these hours. In addition to evaluating these variables, we discussed our findings with two economists with electricity market expertise. They agreed that the pattern of prices we observed was not consistent with competitive conditions. In particular, they said that the fact that prices rose so much at the lowest levels of demand indicates that suppliers changed their behavior in response to the price cap. Both also agreed that the price cap caused some suppliers to avoid the capped prices, either by withholding some of their electricity from the power exchange and offering it directly to the CAISO at the last minute and at higher prices or by selling it in surrounding states, where prices were at times higher than $250. Further, they said that this withholding of power from the capped market would likely have caused other suppliers to increase their asking price for electricity, knowing that they faced less competition. The economists said that such a change in behavior is consistent only with the existence of market power, because suppliers who do not have market power treat prices as given and do not take actions designed to achieve a higher price. We also discussed this period of time with staff of the CAISO. In these discussions, we were told that sellers were able to partially avoid the price cap by selling some of their power outside the state (perhaps to an affiliated company) and then buying it back to sell to the CAISO at the last minute, when the CAISO was desperate to balance demand and supply and therefore willing to pay prices above the capped rate. We did not have data on specific transactions between suppliers and buyers outside of the capped market, and we had no data on out-of-state sales or prices. Therefore, we could not verify that supplier behavior changed in the ways suggested by the economists we interviewed and CAISO staff. However, we were able to examine aggregate levels of exports of electricity from California to surrounding states. We found that monthly exports were significantly higher from May through October 2000 than they had been in these same months in 1998 or 1999. Specifically, monthly exports from May through October 2000 were between about 40 and 230 percent higher than in the same months in 1998 or 1999.
Overall, exports were about 200 percent higher from May through October 2000 than in the same period in either 1998 or 1999. On balance, the combination of the results of our econometric analysis, our analysis of prices and other variables in the period surrounding the change in the price cap from $500 to $250 per MWh, and the interpretations of the economists and CAISO staff we interviewed provides evidence that suppliers were able to exercise market power during the period after the $250 price cap was implemented. The results of our regression indicated that average prices rose when the caps were lowered during summer 2000. This pattern was inconsistent with our expectations about the impact of price caps. To explain these inconsistent results, we reviewed other studies and interviewed economists and other experts. There was broad agreement that flaws in the design and implementation of the price caps led to their being ineffective as tools for mitigating market power. The design flaws identified by studies and experts relate to the ability of suppliers to avoid the price caps and sell at prices above the cap. In particular, the cap was imposed by the CAISO to limit what the CAISO would pay for power in the last-minute markets. However, California is part of a larger western regional market, and the CAISO cap did not apply to other states. Therefore, when prices in other states rose above the CAISO price cap, suppliers in California had an incentive to sell their electricity to other states. As a result, the CAISO found that it was forced to buy a larger share of the total electricity consumed in a given hour during the last minutes before it was needed to meet demand. The CAISO was also faced at times with paying prices higher than the cap to avoid electricity shortages and forced blackouts for some consumers. According to one study, the inability of the CAISO to commit to maintaining the cap, even at the risk of blackouts, gave suppliers a bargaining advantage in setting their prices for sales to the CAISO at the last minute. The authors of the Berkeley/Stanford study examined the degree of competition in the California electricity market from June 1998 to October 2000. They compared market prices, using pricing data from the power exchange, with estimates of the marginal costs of producing additional electricity. The authors tested whether the overall market was setting competitive prices, considering the production capabilities of all suppliers in the market. The analysis included such cost factors as fuel costs; maintenance costs; and costs for emissions control, a regulatory requirement in some geographic locations. Adjustments were not made for costs related to inefficient transmission of power between geographic areas. Using the cost data, the authors computed the perfectly competitive price for each hour in the months in the sample period. The authors then categorized the higher expenditures for wholesale purchases of electricity during the summer of 2000 into increases in production cost, scarcity, and the exercise of market power. The authors found that 51 percent of total electricity expenditures in the summer of 2000 could be attributed to market power. They noted that market power was most commonly exercised during peak demand periods. The authors of a second study simulated competitive prices under various demand and supply conditions that existed during the summer of 2000.
They then used public data on hourly production from EPA and other public sources and compared the actual prices from these data with their estimated competitive wholesale benchmark prices. The benchmark price was the short-run cost of supplying electricity from the last unit that would clear the market in each hour. Factors such as fuel prices and costs for emissions control to meet environmental requirements were included in the analysis. These authors found that wholesale prices far exceeded competitive levels during the months of June through September 2000. They noted that the evidence supports the conclusion that power was withheld from the market by electricity suppliers, which contributed to the high prices during the summer of 2000. One CAISO economist evaluated electricity prices in California for the period of April 1998 through February 2001. This study compared the difference between actual wholesale prices in the CAISO system and an estimate of the baseline costs that would be incurred under competitive market conditions. He included in this analysis the potential impacts of emissions costs and the price impacts of hours when supply was scarce. The results of the analysis showed that market power was being exercised during the period evaluated, between May 2000 and February 2001. The author estimated that overall wholesale costs during that period had been driven up by more than $6.2 billion by the exercise of market power and that over 30 percent of wholesale electricity costs during the year prior to his study could be attributed to market power. Another CAISO economist reviewed bids from five large in-state non-investor-owned utility suppliers, as well as 16 importers selling electricity in the real-time market of the CAISO, for each hour between May and November 2000. She compared detailed bidding data in the real-time market to the marginal cost of supplying energy and analyzed the level of mark-up for each supplier. The author used real-time data from the CAISO market for specific companies and generation units. She found bidding behavior that was not consistent with competitive bidding. Further, she found that wholesale suppliers displayed forms of physical and/or economic withholding of electricity for the purpose of inflating prices. The author concluded that large suppliers were actively engaged in bidding behavior that had a direct impact on market prices, and she noted that this behavior indicated a systematic exercise of market power to maximize profits. The California State Auditor reviewed the operations of the power exchange and the CAISO, including prices in the electricity market, for the period April 1998 through December 2000. Consultants for the State Auditor reviewed reports and the statistical and econometric models used by various market monitoring and market analysis groups of the power exchange and the CAISO. They also interviewed members of these organizations. The State Auditor concluded that market participants adopted tactics to manipulate wholesale electricity prices in California. The authors noted that bidding data from the year prior to the March 2001 issuance of their report suggested that both buyers and sellers deliberately attempted to manipulate electricity prices. Market participants used bidding strategies that held back needed supply, which then forced the CAISO to make purchases at exorbitant prices to guarantee system reliability.
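Several of the studies summarized above rest on a competitive benchmark: the price that would prevail if each hour's demand were met by dispatching units in order of marginal cost. The following sketch, with hypothetical units, costs, and prices, shows the idea along with a Lerner-style markup comparison; it is an illustration of the general technique, not a reproduction of any study's model.

```python
# Merit-order competitive benchmark: the marginal cost of the last unit
# needed to meet demand. All units, costs, and prices are hypothetical.
units = [
    (8000, 25.0),   # MW of capacity, marginal cost in $/MWh
    (6000, 40.0),
    (5000, 75.0),
    (3000, 120.0),
]

def competitive_benchmark(demand_mw):
    served = 0
    for capacity, cost in sorted(units, key=lambda u: u[1]):
        served += capacity
        if served >= demand_mw:
            return cost  # cost of the marginal (price-setting) unit
    raise ValueError("demand exceeds total capacity")

observed_price = 160.0
benchmark = competitive_benchmark(18000)
markup = (observed_price - benchmark) / observed_price  # Lerner-style index
print(f"benchmark ${benchmark}/MWh, observed ${observed_price}/MWh, "
      f"markup {markup:.0%}")
```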
United States General Accounting Office. Restructured Electricity Markets: Three States' Experiences in Adding Generating Capacity. GAO-02-427. Washington, D.C.: May 24, 2002. Stoft, Steven. Power System Economics: Designing Markets for Electricity. IEEE/Wiley, 2002. McCullough, Robert. "Revisiting Market Power After Two Years." Public Utilities Fortnightly, April 1, 2002. Joskow, Paul, and Edward Kahn. "A Quantitative Analysis of Pricing Behavior in California's Wholesale Electricity Market During Summer 2000: The Final Word." (unpublished). February 4, 2002. Borenstein, Severin, James Bushnell, and Frank Wolak. "Measuring Market Inefficiencies in California's Restructured Wholesale Electricity Market." (unpublished). February 2002. Borenstein, Severin. "The Trouble With Electricity Markets: Understanding California's Restructuring Disaster." Journal of Economic Perspectives, Volume 16, Number 1 (Winter 2002): pages 191-211. California Independent System Operator, Department of Market Analysis. Third Annual Report on Market Issues and Performance, Market Monitoring, Investigative, and Compliance Activities, January-December 2001. Folsom, California: January 2002. Harvey, Scott M., and William W. Hogan. "Identifying the Exercise of Market Power in California." (unpublished). December 28, 2001. Harvey, Scott M., and William Hogan. "Market Power and Withholding." (unpublished). December 20, 2001. California State Auditor. California Energy Markets: Pressures Have Eased but Cost Risks Remain. Sacramento, California: December 2001. Harvey, Scott M., and William Hogan. "Further Analysis of the Exercise of Market Power in the California Electricity Market." (unpublished). November 21, 2001. Morey, Matthew J. Ensuring Sufficient Generating Capacity During the Transition to Competitive Electricity Markets. Prepared for Edison Electric Institute, Washington, D.C.: November 2001. Borenstein, Severin, James Bushnell, Christopher R. Knittel, and Catherine Wolfram. "Trading Inefficiencies in California's Electricity Markets." (unpublished). October 2001. California Independent System Operator, Department of Market Analysis. Second Annual Report on Market Issues and Performance, April 1999-December 2000. Folsom, California: November 2001. Wolak, Frank A. "Designing a Competitive Electricity Market that Benefits Consumers." (unpublished). October 15, 2001. Joskow, Paul L. "California's Electricity Crisis." (unpublished). September 28, 2001. Congressional Budget Office. Causes and Lessons of the California Electricity Crisis. Washington, D.C.: September 2001. California Energy Commission. California Energy Outlook: Electricity and Natural Gas Trends Report. Staff Draft. Sacramento, California: September 2001. Hirst, Eric. "The California Electricity Crisis: Lessons for Other States." (unpublished). July 10, 2001. Joskow, Paul, and Edward Kahn. "Identifying the Exercise of Market Power: Refining the Estimates." (unpublished). July 5, 2001. Taylor, Jerry, and Peter VanDoren. "California's Electricity Crisis: What's Going On, Who's to Blame, and What to Do." Policy Analysis, No. 406 (July 3, 2001). United States General Accounting Office. Energy Markets: Results of Studies Assessing High Electricity Prices in California. GAO-01-857. Washington, D.C.: June 29, 2001. United States General Accounting Office. California Electricity Market: Outlook for Summer 2001. GAO-01-870R. Washington, D.C.: June 29, 2001. United States General Accounting Office.
California Electricity Market Options for 2001: Military Generation and Private Backup Possibilities. GAO-01-865R. Washington, D.C.: June 29, 2001. Electric Power Research Institute. The Western States Power Crisis: Imperatives and Opportunities, an EPRI White Paper. Palo Alto, California: June 25, 2001. Department of Market Analysis, California Independent System Operator. Potential Overpayments Due to Market Power in California's Wholesale Energy Market: May 2000-2001. Folsom, California: June 19, 2001. Rowe, John W., Peter Thornton, and Janet Bieniak Szcypinski. Competition Without Chaos. Joint Center for Regulatory Studies, Working Paper 01-07, June 2001. Hogan, William W. Electricity Market Restructuring: Reforms of Reforms. Paper presented at the annual conference of the Center for Research in Regulated Industries, Rutgers University: May 23-25, 2001. Harvey, Scott M. "On the Exercise of Market Power Through Strategic Withholding in California." (unpublished). April 24, 2001. Hildebrandt, Eric. Impacts of Market Power in California's Wholesale Energy Market: More Detailed Analysis Based on Individual Seller Schedules and Transactions in the ISO and PX Markets. A special study by the Department of Market Analysis, California Independent System Operator, Folsom, California: April 9, 2001. Sheffrin, Anjali. Empirical Evidence of Strategic Bidding in California ISO Real Time Market. A special study by the Department of Market Analysis, California Independent System Operator, Folsom, California: March 21, 2001. Harvey, Hal, Bentham Paulos, and Eric Heitz. California and the Energy Crisis: Diagnosis and Cure. Energy Foundation, March 8, 2001. Hildebrandt, Eric. Further Analyses of the Exercise and Cost Impacts of Market Power in California's Wholesale Energy Market. A special study by the Department of Market Analysis, California Independent System Operator, Folsom, California: March 2001. Edison Electric Institute. Learning from California: Power Shortages and Unique Market Rules Lead to Price Spikes. Washington, D.C.: March 2001. California State Auditor. Energy Deregulation: The Benefits of Competition Were Undermined by Structural Flaws in the Market, Unsuccessful Oversight, and Uncontrollable Competitive Forces. Sacramento, California: March 22, 2001. California Independent System Operator, Department of Market Analysis. Report on Real Time Supply Costs above Single Price Auction Threshold: December 8, 2000-January 31, 2001. Folsom, California: February 28, 2001. Federal Energy Regulatory Commission, Office of the General Counsel, Market Oversight and Enforcement, and Office of Markets, Tariffs and Rates, Division of Energy Markets. Report on Plant Outages in the State of California. Washington, D.C.: February 1, 2001. Joskow, Paul, and Edward Kahn. "A Quantitative Analysis of Pricing Behavior in California's Wholesale Electricity Market During Summer 2000." (unpublished). January 2001. Borenstein, Severin. "The Trouble With Electricity Markets (and Some Solutions)." (unpublished). January 2001. Chandley, John D., Scott M. Harvey, and William W. Hogan. "Electricity Reform in California." (unpublished). November 22, 2000. Federal Energy Regulatory Commission. Staff Report on Western Markets and the Causes of the Summer 2000 Price Abnormalities: Part I of the Staff Report on U.S. Bulk Power Markets. Washington, D.C.: November 1, 2000. Puller, Steven L. "Pricing and Firm Conduct in California's Deregulated Electricity Market." (unpublished). November 2000. Marcus, William, and Jan Hamrin.
"How We Got Into the California Energy Crisis." (unpublished). November 2000. Harvey, Scott M., and William W. Hogan. "Issues in the Analysis of Market Power in California." (unpublished). October 27, 2000. Barker, Dunn and Rossi, Inc. "The Electric Summer: Symptoms-Options-Solutions." A special report prepared for Edison Electric Institute, October 2000. Nordhaus, Robert, Frank A. Wolak, and Carl Shapiro. An Analysis of the June 2000 Price Spikes in the California ISO's Energy and Ancillary Services Market. A special report prepared for the Market Surveillance Committee of the California Independent System Operator, September 6, 2000. California Independent System Operator, Department of Market Analysis. Report on California Energy Market Issues and Performance: May-June 2000, Special Report. Folsom, California: August 10, 2000. California Public Utilities Commission and Electricity Oversight Board. Summer 2000 Report for Governor Davis Regarding California's Electricity System. San Francisco, California: August 2, 2000. Borenstein, Severin, James Bushnell, and Frank Wolak. "Diagnosing Market Power in California's Restructured Wholesale Electricity Market." (unpublished). August 2000. Borenstein, Severin, and James Bushnell. "Electricity Restructuring: Deregulation or Reregulation?" (unpublished). February 2000. Bushnell, James B., and Frank A. Wolak. "Regulation and the Leverage of Local Market Power in the California Electricity Market." (unpublished). September 1999. Borenstein, Severin. "Understanding Competitive Pricing and Market Power in Wholesale Electricity Markets." (unpublished). August 1999. Borenstein, Severin, James Bushnell, and Christopher R. Knittel. "Market Power in Electricity Markets: Beyond Concentration Measures." (unpublished). February 1999. Tirole, Jean. The Theory of Industrial Organization. The MIT Press, 1988. In addition to those named above, Art James, Randy Jones, Jon Ludwigson, Cynthia Norris, and Frank Rusco made key contributions to this report.
Historically, utility monopolies have generated electricity and sold it to customers at prices set by state regulators. Today, private companies in 24 states compete to sell electricity at market prices determined by supply and demand. California is part of a broader western market in which electricity is routinely bought and sold across state and national boundaries. GAO found evidence that wholesale electricity suppliers exercised market power by raising prices above competitive levels during the summer of 2000 and at other times after the restructuring. Neither GAO's analysis nor other studies addressed whether the market power exercised in California violated federal or other laws. The design of California's electricity market enabled individual wholesale electricity suppliers to exercise market power. Once prices rose, the design was ineffective in returning prices to competitive levels. Prominent market design and industry experts generally agree that two principal market design flaws increased wholesale suppliers' incentive and ability to raise prices above competitive levels: (1) retail prices were frozen and (2) the California Public Utilities Commission generally prohibited or discouraged long-term contracts between utilities and wholesale suppliers.
Ensuring the security of our nation's commercial aviation system has been a long-standing concern. As demonstrated by the 1988 bombing of a U.S. airliner over Lockerbie, Scotland, and the 1995 plot, discovered by Philippine authorities, to blow up as many as 12 U.S. aircraft in the Pacific region, U.S. aircraft have long been a target for terrorist attacks. Many efforts have been made to improve aviation security, but as we and others have documented in numerous reports and studies, weaknesses in the system continue to exist. It was these weaknesses that terrorists exploited to hijack four commercial aircraft in September 2001, with tragic results. On November 19, 2001, the President signed into law the Aviation and Transportation Security Act, with the primary goal of strengthening the security of the nation's aviation system. ATSA created TSA as an agency within the Department of Transportation with responsibility for securing all modes of transportation, including aviation. ATSA mandated specific improvements to aviation security and established deadlines for completing many of them. TSA's main focus during its first year of operation was on meeting these ambitious deadlines, particularly federalizing the screener workforce at commercial airports nationwide by November 19, 2002, while at the same time establishing a new federal organization from the ground up. The Homeland Security Act, signed into law on November 25, 2002, transferred TSA from the Department of Transportation to the new Department of Homeland Security. Virtually all aviation security responsibilities now reside with TSA, including the screening of air passengers and baggage, a function that had previously been the responsibility of air carriers. TSA is also responsible for ensuring the security of air cargo and overseeing security measures at airports to limit access to restricted areas, secure airport perimeters, and conduct background checks for airport personnel with access to secure areas, among other responsibilities. TSA has implemented numerous initiatives designed to enhance aviation security but has collected little information on the effectiveness of these initiatives. ATSA requires that TSA establish acceptable levels of performance and develop annual performance plans and reports to measure and document the effectiveness of its security initiatives. Although TSA has developed these performance tools, as required by ATSA, it currently focuses on progress toward meeting ATSA deadlines rather than on the effectiveness of its programs and initiatives. However, TSA is taking steps to collect objective data to assess its performance. TSA currently has limited information on the effectiveness of its aviation security initiatives. As we reported in September 2003, the primary source of information collected on screeners' ability to detect threat objects is the covert testing conducted by TSA's Office of Internal Affairs and Program Review. However, TSA does not consider the results of these covert tests to be a measure of performance, but rather a "snapshot" of a screener's ability to detect threat objects at a particular point in time and a system-wide performance indicator. At the time we issued our report, the Office of Internal Affairs and Program Review had conducted 733 covert tests of passenger screeners at 92 airports. Therefore, only about 1 percent of TSA's nearly 50,000 screeners had been subject to a covert test.
In addition to conducting covert tests at screening checkpoints, TSA conducts tests to determine whether the current Computer-Assisted Passenger Screening System is working as designed, whether threat objects are detected during the screening of checked baggage, and whether access to restricted areas of the airport is limited to authorized personnel. While the Office of Internal Affairs has conducted about 2,000 access tests, it has conducted only 168 Computer-Assisted Passenger Screening System and checked baggage tests. Based on an anticipated increase in staff from about 100 in fiscal year 2003 to 200 in fiscal year 2004, the Office of Internal Affairs and Program Review plans to conduct twice as many covert tests next year. Another key source of data on screener performance in detecting threat objects is the Threat Image Projection (TIP) system, which places images of threat objects on the X-ray screen during actual operations and records whether screeners identify the threat object. The Federal Aviation Administration began deploying TIP in late 1999 to continuously measure screener performance and to train screeners to become more adept at detecting hard-to-spot threat objects. However, TIP was shut down immediately following the September 11 terrorist attacks because of concerns that it would result in screening delays and panic, as screeners might think that they were actually viewing a threat object. Although TSA officials recognized that TIP is a key tool in measuring, maintaining, and enhancing screener performance, they only recently began reactivating TIP on a wide-scale basis because of competing priorities, a lack of training, and a lack of resources needed to deploy TIP activation teams. Once TIP is fully deployed and operational at every checkpoint at all airports, as it is expected to be in April 2004, TSA headquarters and federal security directors will have the capability to analyze these performance data in a number of ways, including by individual screener, checkpoint, terminal, and airport (a simple sketch of such an aggregation appears below). The annual screener recertification tests, once fully implemented, will provide another source of data on screener performance. ATSA requires that TSA collect performance information on each screener through an annual proficiency review to ensure that he or she continues to meet all qualifications and standards required to perform the screening function. Although TSA began deploying federal screeners to airports in April 2002, it only recently began implementing the annual recertification program and does not expect to complete testing at all airports until March 2004. The recertification testing consists of three components: (1) image recognition; (2) knowledge of standard operating procedures; and (3) a practical demonstration of skills, to be administered by a contractor. TSA officials consider about 28,000 screeners to have already completed the first two components because they successfully passed competency tests TSA administered at many airports as part of a screener workforce reduction effort. However, these competency tests did not include the third component of TSA's planned annual screener recertification program, the practical demonstration of skills. TSA officials awarded a contract for this component of the annual proficiency reviews in September 2003.
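Once TIP data are available, the kind of analysis described above reduces to simple aggregation. The sketch below assumes a hypothetical log of TIP events; the file and column names are illustrative, not TSA's actual schema.

```python
# Detection rates from hypothetical TIP event logs: one row per projected
# threat image, with a 0/1 flag for whether the screener identified it.
import pandas as pd

tip = pd.read_csv("tip_events.csv")  # columns: airport, checkpoint,
                                     # screener_id, detected (0 or 1)

for level in ["screener_id", "checkpoint", "airport"]:
    rates = tip.groupby(level)["detected"].mean().sort_values()
    print(f"Lowest detection rates by {level}:")
    print(rates.head())
```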
TSA's Performance Management Information System for passenger and baggage screening operations is designed to collect performance data, but it currently contains little information on screener performance in detecting threat objects. The Performance Management Information System collects a wide variety of metrics on workload, staffing, and equipment and is used to identify some performance indicators, such as the level of absenteeism, the average time for equipment repairs, and the status of TSA's efforts to meet goals for 100 percent electronic baggage screening. However, the system does not contain any performance metrics related to the effectiveness of passenger screeners. TSA is planning to integrate performance information from various systems into the Performance Management Information System to assist the agency in making strategic decisions. TSA further plans to continually enhance the system as it learns what data are needed to best manage the agency. In addition to making improvements to the Performance Management Information System, TSA is currently developing performance indexes for both individual screeners and the screening system as a whole. The screener performance index will be based on data such as the results of performance evaluations and recertification tests, and the index for the screening system will be based on information such as covert test results and screener effectiveness measures (an illustrative sketch of such an index follows below). TSA has not yet fully established its methodology for developing the indexes, but it expects to have them developed by the end of fiscal year 2004. In conjunction with measuring the performance of its passenger screening operations, TSA must also assess the performance of the five pilot airports that are currently using contract screeners to determine the feasibility of using private screening companies instead of federal screeners. Although ATSA allows airports to apply to opt out of using federal screeners beginning in November 2004, TSA has not yet determined how to evaluate and measure the performance of the pilot program. In early October 2003, TSA awarded a contract to BearingPoint, Inc., to compare the performance of pilot screening with federal screening, including the overall strengths and weaknesses of both systems, and to determine the reasons for any differences. The evaluation is scheduled to be completed by March 31, 2004. TSA has acknowledged that designing an effective evaluation of the screeners at the pilot airports will be challenging because key operational areas, including training, assessment, compensation, and equipment, have to a large extent been held constant across all airports and therefore are not within the control of the private screening companies. In its request for proposal for the pilot airport evaluation, TSA identified several data sources for the evaluation, including the Performance Management Information System and the Office of Internal Affairs and Program Review's covert testing of passenger screeners. However, as we recently reported, data from both of these systems are of limited use in measuring the effectiveness of screening operations. As a result, it will be a challenge for TSA to effectively compare the performance of the contract pilot airports with that of airports using federal screeners.
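Because TSA had not yet fixed its methodology, the following is only one plausible shape for a composite screener index: normalize each input to a common scale and combine the results with weights. The inputs and weights are hypothetical.

```python
# Illustrative composite screener performance index; the inputs, scales,
# and weights are assumptions, not TSA's actual methodology.
def screener_index(evaluation_score, recert_score, tip_detection_rate,
                   weights=(0.3, 0.3, 0.4)):
    # Scores are assumed to be on a 0-100 scale; the TIP rate is already 0-1.
    components = (evaluation_score / 100.0,
                  recert_score / 100.0,
                  tip_detection_rate)
    return sum(w * c for w, c in zip(weights, components))

print(round(screener_index(evaluation_score=88,
                           recert_score=92,
                           tip_detection_rate=0.75), 3))  # -> 0.84
```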
TSA has recognized the need to strengthen the assessment of its performance and has initiated efforts to develop and implement strategic and performance plans to clarify goals, establish performance measures, and measure the performance of its security initiatives. Strategic plans are the starting point for an agency's planning and performance measurement efforts. Strategic plans include a comprehensive mission statement based on the agency's statutory requirements, a set of outcome-related strategic goals, and a description of how the agency intends to achieve these goals. The Government Performance and Results Act (GPRA) establishes a framework for strategic plans that requires agencies to clearly establish results-oriented performance goals in strategic and annual performance plans for which they will be held accountable, measure progress toward achieving those goals, determine the strategies and resources needed to effectively accomplish the goals, use performance information to make the programmatic decisions necessary to improve performance, and formally communicate results in performance reports. Although the Department of Homeland Security plans to issue one strategic plan for the Department, it plans to incorporate strategic planning efforts from each of its component agencies. TSA recently completed a draft of its input into the Department of Homeland Security's strategic plan. TSA officials stated that the draft is designed to ensure that its security initiatives are aligned with the agency's goals and objectives and that these initiatives represent the most efficient use of its resources. TSA officials submitted the draft plan to stakeholders in September 2003 for their review and comment. The Department of Homeland Security plans to issue its strategic plan by the end of the year. In addition to developing a strategic plan, TSA is developing a performance plan to help it evaluate the current effectiveness of, and levels of improvement in, its programs, based on established performance measures. TSA submitted to the Congress a short-term performance plan in May 2003, as required by ATSA, that included performance goals and objectives. The plan also included an initial set of 32 performance measures, including the percentage of bags screened by explosive detection systems and the percentage of screeners in compliance with training standards. However, these measures were primarily output-based (measuring whether specific activities were achieved) and did not measure the effectiveness of TSA's security initiatives. TSA officials acknowledge that the goals and measures included in the report were narrowly focused and that, moving forward, additional performance-based measures are needed. In addition to developing a short-term performance plan, ATSA also requires that TSA develop a 5-year performance plan and an annual performance report, including an evaluation of the extent to which its goals and objectives were met. TSA is currently developing performance goals and measures as part of its annual planning process and will collect baseline data throughout fiscal year 2004 to serve as a foundation for its performance targets. TSA also plans to increase its focus on measuring the effectiveness of various aspects of the aviation security system in its 5-year performance plan.
According to TSA's current draft strategic plan, which outlines its overall goals and strategies for fiscal years 2003 through 2008, its efforts to measure the effectiveness of the aviation security system will include random and scheduled reviews of the efficiency and effectiveness of oversight of compliance with security standards and approved programs through a combination of inspections, testing, interviews, and record reviews, including TIP; measurement of performance against standards to ensure that expected standards are met and to drive process improvements; and collection and communication of performance data using a state-of-the-art data collection and reporting system. In our January 2003 report on TSA's actions and plans to build a results-oriented culture, we recommended next steps that TSA should take to strengthen its strategic planning efforts. These steps include establishing security performance goals and measures for all modes of transportation through a process that involves stakeholders, and applying practices that have been shown to provide useful information in agency performance plans. We also identified practices that TSA can apply to ensure the usefulness of its required 5-year performance plan to TSA managers, the Congress, and other decision makers or interested parties. Table 1 outlines the practices we identified for TSA. TSA agreed with our recommendation and plans to incorporate these principles into the data it provides DHS for the department's 5-year performance plan and annual performance report. DHS plans to complete its 5-year performance plan and annual performance report by February 2004, as required by GPRA. The Congress has also recognized the need for TSA to collect performance data and, as part of the Federal Aviation Administration's (FAA) reauthorization act, Vision 100: Century of Aviation Reauthorization Act, is currently considering a provision that would require the Secretary of the Department of Homeland Security to conduct a study of the effectiveness of the aviation security system. As TSA moves forward in addressing aviation security concerns, it needs adequate tools to ensure that its efforts are appropriately focused, strategically sound, and achieving expected results. Because of limited funding, TSA needs to set priorities so that its resources can be focused and directed to those aviation security enhancements most in need of implementation. In recent years, we have consistently advocated the use of a risk management approach to respond to various national security and terrorism challenges, and we have recommended that TSA apply this approach to strengthen security in aviation as well as in other modes of transportation. TSA agreed with our recommendation and is adopting a risk management approach. Risk management is a systematic and analytical process to consider the likelihood that a threat will endanger an asset, an individual, or a function and to identify actions to reduce the risk and mitigate the consequences of an attack. Risk management principles acknowledge that while risk cannot be eliminated, enhancing protection from existing or potential threats can help reduce it. Accordingly, a risk management approach is a systematic process to analyze threats, vulnerabilities, and the criticality (or relative importance) of assets to better support key decisions. The purpose of this approach is to link resources with efforts that are of the highest priority. Figure 1 describes the risk management approach.
A threat assessment identifies and evaluates potential threats on the basis of factors such as capabilities, intentions, and past activities. This assessment represents a systematic approach to identifying potential threats before they materialize and is based on threat information gathered from both the intelligence and law enforcement communities. However, even if updated often, a threat assessment might not adequately capture some emerging threats. The risk management approach, therefore, uses vulnerability and criticality assessments as additional input to the decision-making process. A vulnerability assessment identifies weaknesses that may be exploited by identified threats and suggests options to address those weaknesses. In general, a vulnerability assessment is conducted by a team of experts skilled in such areas as engineering, intelligence, security, information systems, finance, and other disciplines. A criticality assessment evaluates and prioritizes assets and functions in terms of specific criteria, such as their importance to public safety and the economy. The assessment provides a basis for identifying which structures or processes are relatively more important to protect from attack. As such, it helps managers determine operational requirements and target resources at their highest priorities, while reducing the potential for targeting resources at lower priorities. Figure 2 illustrates how the risk management approach can guide decision making and shows that the highest risks and priorities emerge where the three elements of risk management overlap. For example, an airport that is determined to be a critical asset, vulnerable to attack, and a likely target would be at greatest risk and therefore would be a higher priority for funding compared with an airport that is only vulnerable to attack. In this vein, aviation security measures shown to reduce the risk to the most critical assets would provide the greatest protection for the cost. Over the past several years, we have concluded that comprehensive threat, vulnerability, and criticality assessments are key to better preparing against terrorist attacks, and, as noted above, TSA agreed with our recommendation to adopt a risk management approach in an attempt to enhance security across all modes of transportation. According to TSA officials, once established, risk management principles will drive all decisions—from standard setting to funding priorities to staffing. TSA has not yet fully implemented its risk management approach, but it has taken steps in this direction. Specifically, TSA's Office of Threat Assessment and Risk Management is developing four assessment tools that will help assess threats, criticality, and vulnerabilities. Figure 3 illustrates TSA's threat assessment and risk management approach. The first tool, which will assess criticality, will determine a criticality score for a facility or transportation asset by incorporating factors such as the number of fatalities that could occur during an attack and the economic and sociopolitical importance of the facility or asset. This score will enable TSA, in conjunction with transportation stakeholders, to rank facilities and assets within each mode and thus focus resources on those that are deemed most important.
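The central idea of figure 2, that priority rises where threat, vulnerability, and criticality overlap, can be illustrated with a small sketch. The scores, the multiplicative formula, and the asset names below are illustrative assumptions only; they are not TSA's actual scoring model, which weighs factors such as potential fatalities and economic importance.

```python
# Illustrative only: each asset gets threat, vulnerability, and criticality
# scores on a 0-1 scale. A multiplicative combination captures the idea that
# the highest risk lies where all three elements overlap; an asset scoring
# near zero on any one element drops sharply in priority.
assets = {
    "Airport A (category X)":       {"threat": 0.8, "vulnerability": 0.6, "criticality": 0.9},
    "Airport B (medium hub)":       {"threat": 0.4, "vulnerability": 0.7, "criticality": 0.5},
    "Airport C (general aviation)": {"threat": 0.2, "vulnerability": 0.9, "criticality": 0.2},
}

def risk_score(scores: dict) -> float:
    return scores["threat"] * scores["vulnerability"] * scores["criticality"]

for name, scores in sorted(assets.items(), key=lambda kv: risk_score(kv[1]), reverse=True):
    print(f"{name}: relative risk = {risk_score(scores):.2f}")
```

In this sketch, the airport that is simultaneously a likely target, vulnerable, and critical ranks well above one that is merely vulnerable, mirroring the funding-priority example above.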
TSA is working with another Department of Homeland Security office—the Information Analysis and Infrastructure Protection Directorate—to ensure that the criticality tool will be consistent with the Department's overall approach for managing critical infrastructure. A second tool—the Transportation Risk Assessment and Vulnerability Tool (TRAVEL)—will assess threats and analyze vulnerabilities at those transportation assets TSA determines to be nationally critical. The tool will be used in a TSA-led and facilitated assessment that will be conducted on the site of the transportation asset. Specifically, the tool will assess an asset's baseline security system and that system's effectiveness in detecting, deterring, and preventing various threat scenarios, and it will produce a relative risk score for potential attacks against a transportation asset or facility. In addition, TRAVEL will include a cost-benefit component that compares the cost of implementing a given countermeasure with the reduction in relative risk attributable to that countermeasure. TSA is working with economists to develop the cost-benefit component of this model and with the TSA Intelligence Service to develop relevant threat scenarios for transportation assets and facilities. According to TSA officials, a standard threat and vulnerability assessment tool is needed so that TSA can identify and compare threats and vulnerabilities across transportation modes; if different methodologies were used to assess threats and vulnerabilities, such comparisons could be problematic. A third tool—the Transportation Self-Assessment Risk Module (TSARM)—will be used to assess and analyze vulnerabilities for assets that the criticality assessment determines to be less critical. The self-assessment tool included in TSARM will guide a user through a series of security-related questions in order to develop a comprehensive security baseline for a transportation entity and will provide mitigation strategies for when the threat level increases. For example, as the threat level increases from yellow to orange, as determined by the Department of Homeland Security, the assessment tool might advise an entity to take increased security measures, such as erecting barriers and closing selected entrances. TSA has deployed one self-assessment module in support of targeted maritime vessel and facility categories. The fourth risk management tool that TSA is currently developing is the TSA Vulnerability Assessment Management System (TVAMS). TVAMS is TSA's intended repository of criticality, threat, and vulnerability assessment data and will maintain the results of all vulnerability assessments across all modes of transportation. This repository will provide TSA with data analysis and reporting capabilities. TVAMS is currently in the conceptual stage, and requirements are still being gathered. TSA is now using components of these risk management tools and is automating others so that the components can be used remotely by stakeholders, such as small airports, to assess their risks. For example, according to TSA officials, TSA has conducted assessments at 9 of 443 commercial airports using components of its TRAVEL tool. Three of these assessments were conducted at category X airports (the largest and busiest airports), and the remaining 6 were conducted at airports in lower categories.
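The cost-benefit component described above for TRAVEL amounts to comparing countermeasures by how much risk reduction each buys per dollar. A minimal sketch of that comparison follows; the countermeasure names, costs, and risk-reduction values are hypothetical assumptions, not figures from TSA's model.

```python
# Hypothetical countermeasures: cost in dollars and the reduction each would
# produce in an asset's relative risk score. Illustrative values only.
countermeasures = {
    "In-line EDS integration":  {"cost": 20_000_000, "risk_reduction": 0.35},
    "Hardened cargo screening": {"cost": 5_000_000,  "risk_reduction": 0.20},
    "Added perimeter sensors":  {"cost": 2_000_000,  "risk_reduction": 0.05},
}

# Rank by risk reduction per dollar, the comparison the cost-benefit
# component is described as making.
ranked = sorted(
    countermeasures.items(),
    key=lambda kv: kv[1]["risk_reduction"] / kv[1]["cost"],
    reverse=True,
)
for name, c in ranked:
    per_million = c["risk_reduction"] / c["cost"] * 1_000_000
    print(f"{name}: {per_million:.4f} risk-score points per $1 million")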
TSA plans to conduct approximately 100 additional assessments of commercial airports in 2004 using TRAVEL and plans to begin compiling data on security vulnerability trends in 2005. Additionally, TSA plans to fully implement and automate its risk management approach by September 2004. In addition to collecting performance data and implementing a risk management approach, TSA faces a number of other programmatic and management challenges in strengthening aviation security. These challenges include implementing the new Computer-Assisted Passenger Prescreening System; strengthening baggage screening, airport perimeter and access controls, air cargo, and general aviation security; managing the costs of aviation security initiatives; and managing human capital. TSA has been addressing these challenges through a variety of efforts. We have work in progress that is examining TSA's efforts in most of these areas, and we will be reporting on TSA's progress in the future. ATSA authorized TSA to develop a new Computer-Assisted Passenger Prescreening System, or CAPPS II. This system is intended to replace the current Computer-Assisted Passenger Screening program, which was developed in the mid-1990s by the Federal Aviation Administration to enable air carriers to identify passengers requiring additional security attention. The current system is maintained as a part of the airlines' reservation systems and, operating under federal guidelines, uses a number of behavioral characteristics to select passengers for additional screening. In the wake of the September 11, 2001, terrorist attacks, a number of weaknesses in the current prescreening program were exposed. For example, although the characteristics used to identify passengers for additional screening are classified, several have become public knowledge through the press or on the Internet. Although enhancements have been made to address some of these weaknesses, the behavioral traits used in the system may not reflect current intelligence information. It is also difficult to quickly modify the system to respond to real-time changes in threats. Additionally, because the current system operates independently within each air carrier reservation system, changes to each air carrier's system to modify the prescreening system can be costly and time-consuming. In contrast, CAPPS II is planned to be a government-run program that will provide real-time risk assessment for all airline passengers. Unlike the current system, CAPPS II is being designed to compare passengers' identifying information with commercially available data to confirm each passenger's identity. The system will then run the identifying information against government databases and generate a "risk" score for the passenger. The risk score will determine the level of screening that the passenger will undergo before boarding. TSA currently estimates that initial implementation of CAPPS II will occur during the fall of 2004, with full implementation expected by the fall of 2005. TSA faces a number of challenges that could impede its ability to implement CAPPS II.
Among the most significant are the following: concerns about travelers' privacy rights and the safeguards established to protect passenger data; the accuracy of the databases being used by the CAPPS II system and whether inaccuracies could generate a high number of false positives and erroneously prevent or delay passengers from boarding their flights; the length of time that data will be retained by TSA; the availability of a redress process through which passengers could get erroneous information corrected; concerns that identity theft, in which someone steals relevant data and impersonates another individual to obtain that person's low risk score, may not be detected and could thereby negate the security benefits of the system; and obtaining the international cooperation needed for CAPPS II to be fully effective, as some countries consider the passenger information required by CAPPS II a potential violation of their privacy laws. We are currently assessing these and other challenges in the development and implementation of the CAPPS II system and expect to issue a final report on our work in early 2004. Checked baggage represents a significant security concern, as explosive devices can be, and have been, placed in baggage carried in aircraft holds. ATSA required screening of all checked baggage on commercial aircraft by December 31, 2002, using explosive detection systems to electronically scan baggage for explosives. According to TSA, electronic screening can be accomplished by bulk explosives detection systems (EDS) or explosives trace detection (ETD) systems. However, TSA faced challenges in meeting the mandated implementation date. First, the production capabilities of EDS manufacturers were insufficient to produce the number of units needed. Additionally, according to TSA, it was not possible to undertake all of the airport modifications necessary to accommodate the EDS equipment in each airport's baggage handling area. To ensure that all checked baggage is screened, TSA established a program that uses alternative measures, including explosives-sniffing dogs, positive passenger bag match, and physical hand searches, at airports where sufficient EDS or ETD technology is not available. TSA was granted an extension for screening all checked baggage electronically, using explosives detection systems, until December 31, 2003. Although TSA has made progress in implementing EDS technology at more airports, it has reported that it will not meet the revised mandate for 100 percent electronic screening of all checked baggage. Specifically, in October 2003, TSA reported that it would not meet the December 31, 2003, deadline for electronic screening at five airports. Airport representatives with whom we spoke expressed concern that there has not been enough time to produce, install, and integrate all of the systems required to meet the deadline. In addition to fielding the EDS systems at airports, difficulties exist in integrating these systems into airport baggage handling systems. At many airports that have installed EDS equipment, the machines have been located in airport lobbies as stand-alone systems. The chief drawback of stand-alone systems is that, because of the machines' size and weight, there is a limit to the number of units that can be placed in airport lobbies. Moreover, numerous screeners are required to handle the checked bags, because each bag must be physically conveyed to the EDS machines and then moved back to the conveyor system for transport to the baggage handling room in the air terminal.
Some airports are in the process of integrating the EDS equipment in-line with the conveyor belts that transport baggage from the ticket counter to the baggage handling area; however, reconfiguring airports for in-line checked baggage screening can be extensive and costly. TSA has reported that in-line EDS equipment installation costs range from $1 million to $3 million per piece of equipment. In February 2003, we identified letters of intent as a funding option that has been successfully used to leverage private sources of funding. TSA has since written letters of intent covering seven airports, promising multiyear financial support totaling over $770 million for in-line integration of EDS equipment. Further, TSA officials have stated that they have identified 25 to 35 airports as candidates for further letters of intent, pending congressional authorization of funding. We are examining TSA's baggage screening program, including its issuance of letters of intent, in an ongoing assignment. Before September 2001, work performed by GAO and others highlighted vulnerabilities in controls for limiting access to secure airport areas. In one report, we noted that GAO special agents were able to use fictitious law enforcement badges and credentials to gain access to secure areas, bypass security checkpoints, and walk unescorted to aircraft departure gates. The agents, who had been issued tickets and boarding passes, could have carried weapons, explosives, or other dangerous objects onto aircraft. Concerns over the adequacy of the vetting process for airport workers who have unescorted access to secure airport areas have also arisen, in part as a result of federal agency airport security sweeps that uncovered hundreds of instances in which airport workers lied about their criminal history or immigration status, or provided false or inaccurate Social Security numbers, on their applications for the security clearances needed to obtain employment. ATSA contains provisions to improve perimeter access security at the nation's airports and strengthen background checks for employees working in secure airport areas, and TSA has made some progress in this area. For example, federal mandates were issued to strengthen airport perimeter security by limiting the number of airport access points, and they require random screening of individuals, vehicles, and property before entry at the remaining perimeter access points. Further, TSA made criminal history checks mandatory for employees with access to secure or sterile airport areas. To date, TSA has conducted approximately 1 million of these checks. TSA also has plans to develop a pilot airport security program and is reviewing security technologies in the areas of biometric access control identification systems (e.g., fingerprint or iris scans), anti-piggybacking technologies (to prevent more than one employee from entering a secure area at a time), and video monitoring systems for perimeter security. TSA solicited commercial airport participation in the program, is currently reviewing information from interested airports, and plans to select 20 airports for the program. Although progress has been made, challenges remain with perimeter security and access controls at commercial airports. Specifically, ATSA contains numerous requirements for strengthening perimeter security and access controls, some of which have deadlines that TSA is working to meet.
In addition, a significant concern is the possibility of terrorists using shoulder-fired portable missiles from locations near an airport. We reported in June 2003 that airport operators have increased their patrols of airport perimeters since September 2001, but industry officials stated that they do not have enough resources to completely protect against missile attacks. A number of technologies could be used to secure and monitor airport perimeters, including barriers, motion sensors, and closed-circuit television. Airport representatives have cautioned that as security enhancements are made to airport perimeters, it will be important for TSA to coordinate with the Federal Aviation Administration and the airport operators to ensure that any enhancements do not pose safety risks for aircraft. To further examine these threats and challenges, we have ongoing work assessing TSA's progress in meeting ATSA provisions related to improving perimeter security, access controls, and background checks for airport employees and other individuals with access to secure areas of the airport, as well as the nature and extent of the threat from shoulder-fired missiles. As we and the Department of Transportation's Inspector General have reported, vulnerabilities exist in ensuring the security of cargo carried aboard commercial passenger and all-cargo aircraft. TSA has reported that an estimated 12.5 million tons of cargo are transported each year—9.7 million tons on all-cargo planes and 2.8 million tons on passenger planes. Potential security risks are associated with the transport of air cargo—including the introduction of undetected explosive and incendiary devices in cargo placed aboard aircraft. To reduce these risks, ATSA requires that all cargo carried aboard commercial passenger aircraft be screened and that TSA have a system in place as soon as practicable to screen, inspect, or otherwise ensure the security of cargo on all-cargo aircraft. Despite these requirements, it has been reported that less than 5 percent of cargo placed on passenger airplanes is physically screened. TSA's primary approach to ensuring air cargo security and safety is enforcing compliance with the "known shipper" program, which allows shippers that have established business histories with air carriers or freight forwarders to ship cargo on planes. However, we and the Department of Transportation's Inspector General have identified weaknesses in the known shipper program and in TSA's procedures for approving freight forwarders, such as the potential for tampering with freight at various handoff points before it is loaded onto an aircraft. Since September 2001, TSA has taken a number of actions to enhance cargo security, such as implementing a database of known shippers in October 2002. The database is the first phase in developing a cargo profiling system similar to the Computer-Assisted Passenger Prescreening System. However, in December 2002, we reported that additional operational and technological measures, such as checking the identity of individuals making cargo deliveries, have the potential to improve air cargo security in the near term. We further reported that TSA lacks a comprehensive plan with long-term goals and performance targets for cargo security, time frames for completing security improvements, and risk-based criteria for prioritizing actions to achieve those goals.
Accordingly, we recommended that TSA develop a comprehensive plan for air cargo security that incorporates a risk management approach, includes a list of security priorities, and sets deadlines for completing actions. TSA agreed with this recommendation and expects to develop such a plan by the end of 2003. It will be important that this plan include a timetable for implementation to help ensure that vulnerabilities in this area are reduced. Since September 2001, TSA has taken limited action to improve general aviation security, leaving general aviation far more open and potentially vulnerable than commercial aviation. General aviation is vulnerable because general aviation pilots and passengers are not screened before takeoff and the contents of general aviation planes are not screened at any point. General aviation includes more than 200,000 privately owned airplanes, which are located in every state at more than 19,000 airports. More than 550 of these airports also provide commercial service. In the last 5 years, about 70 aircraft have been stolen from general aviation airports, indicating a potential weakness that could be exploited by terrorists. This vulnerability was demonstrated in January 2002, when a teenage flight student stole and crashed a single-engine airplane into a Tampa, Florida, skyscraper. Moreover, general aviation aircraft could be used in other types of terrorist acts. It was reported that the September 11th hijackers researched the use of crop dusters to spread biological or chemical agents. We reported in September 2003 that TSA chartered a working group on general aviation within the existing Aviation Security Advisory Committee. The working group consists of industry stakeholders and is designed to identify and recommend actions to close potential security gaps in general aviation. On October 1, 2003, the working group issued a report that included a number of recommendations for general aviation airport operators' voluntary use in evaluating airports' security requirements. These recommendations are both broad in scope and generic in their application, with the intent that the operator of any general aviation airport or landing facility may use them to evaluate that facility's physical security, procedures, infrastructure, and resources. TSA is taking some additional action to strengthen security at general aviation airports, including developing a risk-based self-assessment tool for general aviation airports to use in identifying security concerns. We have ongoing work that is examining general aviation security in further detail. TSA faces two key funding and accountability challenges in securing the commercial aviation system: (1) paying for increased aviation security and (2) ensuring that these costs are controlled. The costs associated with the equipment and personnel needed to screen passengers and their baggage alone are substantial. The Department of Homeland Security appropriation includes $3.7 billion for aviation security for fiscal year 2004, with about $1.8 billion for passenger screening and $1.3 billion for baggage screening. ATSA created a passenger security fee to pay for the costs of aviation security, but the fee has not generated enough money to do so. The Department of Transportation's Inspector General reported that the security fees are estimated to generate only about $1.7 billion during fiscal year 2004.
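Taken together, the figures cited above imply a sizable gap between fee revenue and security costs. The following back-of-the-envelope sketch uses only the numbers in this statement:

```python
# Fiscal year 2004 figures cited above, in billions of dollars.
total_aviation_security = 3.7   # DHS appropriation for aviation security
passenger_screening     = 1.8
baggage_screening       = 1.3
fee_revenue             = 1.7   # DOT Inspector General estimate of security fees

screening_total = passenger_screening + baggage_screening  # 3.1
print(f"Fees cover about {fee_revenue / screening_total:.0%} of screening costs")
print(f"Gap vs. screening costs alone: ${screening_total - fee_revenue:.1f} billion")
print(f"Gap vs. total aviation security: ${total_aviation_security - fee_revenue:.1f} billion")
```

By this arithmetic, the fee covers roughly half of screening costs alone, leaving on the order of $2 billion of the total aviation security appropriation to be funded from other sources.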
A major funding challenge is paying for the purchase and installation of the remaining explosives detection systems, including integration into airport baggage-handling systems. Integrating the equipment with the baggage-handling systems is expected to be costly because it will require major facility modifications. For example, modifications needed to integrate the equipment at Boston's Logan International Airport are estimated to cost $146 million, and modifications for Dallas/Fort Worth International Airport are estimated to cost $193 million. According to TSA and the Department of Transportation's Inspector General, the cost of integrating the equipment nationwide could be $3 billion. A key question that must be addressed is how to pay for these installation costs. The Federal Aviation Administration's Airport Improvement Program (AIP) and passenger facility charges have been eligible sources for funding this work. During fiscal year 2002, AIP grant funds totaling $561 million were used for terminal modifications to enhance security. However, using these funds for security reduced the funding available for other airport development and rehabilitation projects. To provide financial assistance to airports for security-related capital investments, such as the installation of explosives detection equipment, proposed aviation reauthorization legislation would establish an aviation security capital fund that would authorize $2 billion over the next 4 years. The funding would be made available to airports in letters of intent; large and medium hub airports would be expected to provide a match of 10 percent of a project's costs, and a 5 percent match would be required for all other airports. As noted earlier, letters of intent have been successfully used to leverage private sources of funding, and TSA has signed letters of intent covering seven airports—Boston Logan, Dallas/Fort Worth, Denver, Los Angeles, McCarran (Las Vegas), Ontario (California), and Seattle/Tacoma international airports. Under the agreements, TSA will pay 75 percent of the cost of integrating the explosives detection equipment into the baggage-handling systems, with the payments spread over 3 to 4 years. TSA officials have identified more airports that would be candidates for similar agreements. Another challenge is ensuring continued investment in transportation research and development. For fiscal year 2003, TSA was appropriated about $110 million for research and development, of which $75 million was designated for the next-generation explosives detection systems. However, TSA proposed to reprogram $61.2 million of these funds to be used for other purposes, leaving about $12.7 million to be spent on research and development in that year. This proposed reprogramming could limit TSA's ability to sustain and strengthen aviation security by continuing to invest in research and development for more effective equipment to screen passengers, their carry-on and checked baggage, and cargo. In ongoing work, we are examining the nature and scope of research and development work by TSA and the Department of Homeland Security, including their strategy for accelerating the development of transportation security technologies. As it organizes itself to protect the nation's transportation system, TSA faces the challenge of strategically managing its workforce of about 60,000 people—more than 80 percent of whom are passenger and baggage screeners.
Additionally, over the next several years, TSA faces the challenge of sizing and managing this workforce as efficiency is improved with new security-enhancing technologies, processes, and procedures. For example, as explosives detection systems are integrated with baggage-handling systems, the use of more labor-intensive screening methods, such as trace detection techniques and manual bag searches, can be reduced. Other planned security enhancements, such as CAPPS II and the registered traveler program, also have the potential to make screening more efficient. Further, if airports opt out of the federal screener program and use their own or contract employees instead of TSA screeners, TSA's staffing needs could change significantly. To assist agencies in managing their human capital more strategically, we have developed a model that identifies cornerstones and related critical success factors that agencies should apply and steps they can take. Our model is designed to help agency leaders effectively lead and manage their people and integrate human capital considerations into daily decision making and the program results they seek to achieve. In January 2003, we reported that TSA was addressing some critical human capital success factors by using a wide range of tools available for hiring and by beginning to link individual performance to organizational goals. However, concerns remain about the size and training of that workforce, the adequacy of the initial background checks for screeners, and TSA's progress in setting up a performance management system. TSA is currently developing a human capital strategy, which it expects to complete by the end of this year. TSA has proposed cutting the screener workforce by an additional 3,000 during fiscal year 2004. This planned reduction has raised concerns about passenger delays at airports and has led TSA to begin hiring part-time screeners to make more flexible and efficient use of its workforce. In addition, TSA used an abbreviated background check process to hire and deploy enough screeners to meet ATSA's screening deadlines during 2002. After obtaining additional background information, TSA terminated the employment of some of these screeners. As of May 31, 2003, TSA reported 1,208 terminations, which it ascribed to a variety of reasons, including criminal offenses and failures to pass alcohol and drug tests. Furthermore, the national media have reported allegations of operational and management control problems that emerged with the expansion of the Federal Air Marshal Service, including inadequate background checks and training, uneven scheduling, and inadequate policies and procedures. We reported in January 2003 that TSA had taken the initial steps in establishing a performance management system linked to organizational goals. Such a system will be critical for TSA to motivate and manage staff, ensure the quality of screeners' performance, and, ultimately, restore public confidence in air travel. In ongoing work, we are examining the effectiveness of TSA's efforts to train, equip, and supervise passenger screeners, and we are assessing the effects of expansion on the Federal Air Marshal Service. As TSA moves forward in addressing aviation security concerns, it needs the information and tools necessary to ensure that its efforts are appropriately focused, strategically sound, and achieving expected results.
Without knowledge about the effectiveness of its programs and a process for prioritizing planned security initiatives, TSA and the public have little assurance regarding the level of security provided and whether TSA is using its resources to maximize security benefits. Additionally, as TSA implements new security initiatives and addresses associated challenges, measuring program effectiveness and prioritizing efforts will help it focus on the areas of greatest importance. We are encouraged that TSA is undertaking efforts to develop the information and tools needed to measure its performance and focus its efforts on those areas of greatest need. Mr. Chairman, this concludes my statement. I would be pleased to answer any questions that you or other members of the Committee may have. For further information on this testimony, please contact Cathleen A. Berrick at (202) 512-8777. Individuals making key contributions to this testimony include Mike Bollinger, Lisa Brown, Jack Schulze, Maria Strudwick, and Susan Zimmerman.
Airport Passenger Screening: Preliminary Observations on Progress Made and Challenges Remaining. GAO-03-1173. Washington, D.C.: September 24, 2003.
Aviation Security: Progress since September 11, 2001, and the Challenges Ahead. GAO-03-1150T. Washington, D.C.: September 9, 2003.
Transportation Security: Federal Action Needed to Help Address Security Challenges. GAO-03-843. Washington, D.C.: June 30, 2003.
Transportation Security: Post-September 11th Initiatives and Long-Term Challenges. GAO-03-616T. Washington, D.C.: April 1, 2003.
Aviation Security: Measures Needed to Improve Security of Pilot Certification Process. GAO-03-248NI. Washington, D.C.: February 3, 2003 (NOT FOR PUBLIC DISSEMINATION).
Aviation Security: Vulnerabilities and Potential Improvements for the Air Cargo System. GAO-03-286NI. Washington, D.C.: December 20, 2002 (NOT FOR PUBLIC DISSEMINATION).
Aviation Security: Vulnerabilities and Potential Improvements for the Air Cargo System. GAO-03-344. Washington, D.C.: December 20, 2002.
Aviation Security: Vulnerability of Commercial Aviation to Attacks by Terrorists Using Dangerous Goods. GAO-03-30C. Washington, D.C.: December 3, 2002.
Aviation Security: Registered Traveler Program Policy and Implementation Issues. GAO-03-253. Washington, D.C.: November 22, 2002.
Aviation Security: Transportation Security Administration Faces Immediate and Long-Term Challenges. GAO-03-971T. Washington, D.C.: July 25, 2002.
Aviation Security: Information Concerning the Arming of Commercial Pilots. GAO-02-822R. Washington, D.C.: June 28, 2002.
Aviation Security: Deployment and Capabilities of Explosive Detection Equipment. GAO-02-713C. Washington, D.C.: June 20, 2002 (CLASSIFIED).
Aviation Security: Information on Vulnerabilities in the Nation's Air Transportation System. GAO-01-1164T. Washington, D.C.: September 26, 2001 (NOT FOR PUBLIC DISSEMINATION).
Aviation Security: Information on the Nation's Air Transportation System Vulnerabilities. GAO-01-1174T. Washington, D.C.: September 26, 2001 (NOT FOR PUBLIC DISSEMINATION).
Aviation Security: Vulnerabilities in, and Alternatives for, Preboard Screening Security Operations. GAO-01-1171T. Washington, D.C.: September 25, 2001.
Aviation Security: Weaknesses in Airport Security and Options for Assigning Screening Responsibilities. GAO-01-1165T. Washington, D.C.: September 21, 2001.
Aviation Security: Terrorist Acts Demonstrate Urgent Need to Improve Security at the Nation's Airports. GAO-01-1162T. Washington, D.C.: September 20, 2001.
Aviation Security: Terrorist Acts Illustrate Severe Weaknesses in Aviation Security. GAO-01-1166T. Washington, D.C.: September 20, 2001.
Responses of Federal Agencies and Airports We Surveyed about Access Security Improvements. GAO-01-1069R. Washington, D.C.: August 31, 2001.
Responses of Federal Agencies and Airports We Surveyed about Access Security Improvements. GAO-01-1068R. Washington, D.C.: August 31, 2001 (RESTRICTED).
FAA Computer Security: Recommendations to Address Continuing Weaknesses. GAO-01-171. Washington, D.C.: December 6, 2000.
Aviation Security: Additional Controls Needed to Address Weaknesses in Carriage of Weapons Regulations. GAO/RCED-00-181. Washington, D.C.: September 29, 2000.
FAA Computer Security: Actions Needed to Address Critical Weaknesses That Jeopardize Aviation Operations. GAO/T-AIMD-00-330. Washington, D.C.: September 27, 2000.
FAA Computer Security: Concerns Remain due to Personnel and Other Continuing Weaknesses. GAO/AIMD-00-252. Washington, D.C.: August 16, 2000.
Aviation Security: Long-Standing Problems Impair Airport Screeners' Performance. GAO/RCED-00-75. Washington, D.C.: June 28, 2000.
Aviation Security: Screeners Continue to Have Serious Problems Detecting Dangerous Objects. GAO/RCED-00-159. Washington, D.C.: June 22, 2000 (NOT FOR PUBLIC DISSEMINATION).
Security: Breaches at Federal Agencies and Airports. GAO-OSI-00-10. Washington, D.C.: May 25, 2000.
Aviation Security: Screener Performance in Detecting Dangerous Objects during FAA Testing Is Not Adequate. GAO/T-RCED-00-143. Washington, D.C.: April 6, 2000 (NOT FOR PUBLIC DISSEMINATION).
It has been 2 years since the attacks of September 11, 2001, exposed vulnerabilities in the nation's aviation system. Since then, billions of dollars have been spent on a wide range of initiatives designed to enhance the security of commercial aviation. However, vulnerabilities in aviation security continue to exist. As a result, questions have been raised regarding the effectiveness of established initiatives in protecting commercial aircraft from threat objects, and whether additional measures are needed to further enhance security. Accordingly, GAO was asked to describe the Transportation Security Administration's (TSA) efforts to (1) measure the effectiveness of its aviation security initiatives, particularly its passenger screening program; (2) implement a risk management approach to prioritize efforts and focus resources; and (3) address key challenges to further enhance aviation security. TSA has implemented numerous initiatives designed to enhance aviation security but has collected limited information on the effectiveness of these initiatives in protecting commercial aircraft. Our recent work on passenger screening found that little testing or other data exist that measure the performance of screeners in detecting threat objects. However, TSA is taking steps to collect data on the effectiveness of its security initiatives, including developing a 5-year performance plan detailing numerous performance measures, as well as implementing several efforts to collect performance data on the effectiveness of passenger screening--such as fielding the Threat Image Projection System and increasing screener testing. TSA has developed a risk management approach to prioritize efforts, assess threats, and focus resources related to its aviation security initiatives, as we previously recommended, but has not yet fully implemented this approach. A risk management approach is a systematic process to analyze threats, vulnerabilities, and the criticality (or relative importance) of assets to better support key decisions. TSA is developing and implementing both a criticality and a vulnerability assessment tool to provide a basis for risk-based decision making. TSA is currently using some components of these tools and plans to fully implement its risk management approach by the summer of 2004. TSA faces a number of programmatic and management challenges as it continues to enhance aviation security. These include the implementation of the new computer-assisted passenger prescreening system, as well as strengthening baggage screening, airport perimeter and access controls, air cargo, and general aviation security. TSA also must manage the costs associated with aviation security and address human capital challenges, such as sizing its workforce as efficiency is improved with security-enhancing technologies--including the integration of explosive detection systems into in-line baggage-handling systems. Further challenges in sizing its workforce may be encountered if airports are granted permission to opt out of using federal screeners.
GAO remains one of the best investments in the federal government, and our dedicated staff continues to deliver high-quality results. In FY 2013 alone, GAO provided services that spanned the broad range of federal programs and activities. We received requests for our work from 95 percent of the standing committees of Congress and almost two-thirds of their subcommittees. We reviewed a wide range of government programs and operations, including those that are at high risk for fraud, waste, abuse, and mismanagement. GAO also reviewed agencies' budgets as requested to help support congressional decision-making. Last year, our work yielded significant results across the government, including $51.5 billion in financial benefits—a return of about $100 for every dollar invested in GAO. Also, in FY 2013, we issued 709 reports and made 1,430 new recommendations. The findings of our work were often cited in House and Senate deliberations and committee reports to support congressional action, including improving federal programs on our High Risk list; addressing overlap, duplication, and fragmentation; and assessing defense, border security, and immigration issues. Our findings also supported the Bipartisan Budget Act of 2013, in areas such as aviation security fees, unemployment insurance, improper payments to inmates, the strategic petroleum reserve, and the contractor compensation cap. Senior GAO officials also provided testimony 114 times before 60 committees or subcommittees on a wide range of issues that touched virtually all major federal agencies. A list of selected topics addressed is included in Appendix I. GAO's findings and recommendations produce measurable financial benefits through congressional action or agency implementation. Examples of FY 2013 financial benefits resulting from congressional or federal agency implementation of GAO recommendations include: $8.7 billion from reducing procurement quantities of the Joint Strike Fighter program: DOD decreased near-term procurement quantities in three successive budget submissions to lessen concurrency and the associated cost risks, in light of our numerous recommendations citing the F-35 Joint Strike Fighter program's very aggressive and risky acquisition strategy, including substantial overlap among development, testing, and production activities. $2.6 billion from revising the approach for the Navy's Next Generation Enterprise Network (NGEN) acquisition: Our recommendations led the Navy to revise its NGEN acquisition strategy—which was riskier and potentially costlier than other alternatives identified because of a higher number of contractual relationships—thus significantly reducing program costs between 2013 and 2017. $2.5 billion from eliminating seller-funded payment assistance for FHA-insured mortgages: The Department of Housing and Urban Development and Congress took steps to prohibit seller-funded down payment assistance, citing our findings that loans with such assistance had substantially higher delinquency and insurance claim rates than similar loans without it and were contributing to the Federal Housing Administration's deteriorating financial performance. $2.3 billion from consolidating U.S.
Forces stationed in Europe: DOD removed two brigade combat teams and support units from Europe, allowing it to further consolidate and close facilities, based in part on our work showing significant costs related to maintaining permanent Army forces in Europe and our recommendations that DOD identify alternatives that might lead to savings. $1.3 billion through improved tax compliance: Our recommendations on the use of information reporting to reduce the tax gap contributed to legislation requiring banks and others to report income that merchants receive through credit cards, third-party networks, and other means to help IRS verify information reported on merchants' income tax returns. The estimated increased revenue through improved tax compliance is expected over the provision's first 3 fiscal years. GAO has generated recommendations that save resources, increase government revenue, improve the accountability, operations, and services of government agencies, and increase the effectiveness of federal spending, as well as provide other benefits. Since FY 2003, GAO's work has resulted in substantial financial and other benefits for the American people, including: over ½ trillion dollars in financial benefits; about 14,500 program and operational benefits that helped to change laws, improve public services, and promote sound management throughout government; and about 12,000 reports, testimony, and other GAO products that included over 22,000 recommendations. In FY 2013, GAO also contributed to 1,314 program and operational benefits that helped to change laws, improve public services, and promote sound management throughout government. Thirty-six percent of these benefits are related to business process and management, 31 percent to public safety and security, 17 percent to program efficiency and effectiveness, 8 percent to acquisition and contract management, 5 percent to public insurance and benefits, and 3 percent to tax law administration. Examples include: enhancing coordination between DOD and the Social Security Administration (SSA) on the more timely delivery of military medical records through electronic transfer; improving Department of Veterans Affairs (VA) oversight of its medical equipment and supply purchasing; increasing collaboration between the Army and VA through a joint working group to improve management of military cemeteries and help eliminate burial errors and other past problems; updating Federal Emergency Management Agency (FEMA) National Flood Insurance Program contract monitoring policies to reduce the likelihood that contractor performance problems would go unnoticed; and establishing National Oceanic and Atmospheric Administration policies outlining the processes, roles, and responsibilities for transitioning tsunami research into operations at tsunami warning centers. In FY 2013, GAO issued its third annual report on overlap, duplication, and fragmentation. In it, we identified 31 new areas where agencies may be able to achieve greater efficiency or effectiveness. Within these 31 areas, we identified 81 actions that the executive branch and Congress could take to reduce fragmentation, overlap, and duplication, as well as other cost savings and revenue enhancement opportunities. This work identifies opportunities for the federal government to save billions of dollars.
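The return of about $100 for every dollar invested, cited earlier, follows from simple division. In the sketch below, the $51.5 billion benefit figure comes from this statement, while the budget figure is an assumed round number of roughly $500 million used only for illustration:

```python
# FY 2013 figures: benefits are from this statement; the budget is an
# assumed round number for illustration, not an official appropriation figure.
financial_benefits = 51.5e9  # $51.5 billion in measurable financial benefits
assumed_budget     = 0.5e9   # ~$500 million (assumption)

roi = financial_benefits / assumed_budget
print(f"Return: about ${roi:.0f} in financial benefits per dollar invested")  # ~$103
```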
We also maintain a scorecard and action tracker on our external website where Congress, federal agencies, and the public can monitor progress in addressing our findings. Federal agencies and Congress have made some progress in addressing the 131 areas we identified and taking the 300 actions that we recommended in our 2011 and 2012 reports. In February 2013, GAO issued the biennial update of our High Risk report, which focuses attention on government operations that are at high risk of fraud, waste, abuse, and mismanagement, or that need transformation to address economy, efficiency, or effectiveness challenges. This report, which will be updated in 2015, offers solutions to 30 identified high-risk problems and the potential to save billions of dollars, improve service to the public, and strengthen the performance and accountability of the U.S. government. Our 2013 High Risk work produced 164 reports, 35 testimonies, $17 billion in financial benefits, and 411 program and operational benefits. The major cross-cutting High Risk program areas identified as of September 2013 range from transforming DOD program management and managing federal contracting more effectively, to assessing the efficiency and effectiveness of tax law administration and modernizing and safeguarding insurance and benefit programs. The complete list of high-risk areas is shown in Appendix II. Details on each high-risk area can be found at http://www.gao.gov/highrisk/overview. GAO's FY 2014 budget request sought statutory authority for a new electronic docketing system to be funded by a filing fee collected from companies filing bid protests. The sole purpose of the filing fee would be to offset the cost of developing, implementing, and maintaining the system. We appreciate that the Consolidated Appropriations Act, 2014, directed GAO to develop an electronic filing and document dissemination system under which persons may electronically file bid protests and documents may be electronically disseminated to the parties. GAO is making progress in establishing the electronic protest docketing system. We have convened an interdisciplinary team of experts within GAO to examine matters such as technical requirements, the potential for commercially available systems, fee structure, cost-benefit analysis, and outreach to stakeholders, including representatives from the small business community. GAO will be reporting regularly to the House and Senate Committees on Appropriations on its progress in implementing the system. In September 2013, GAO launched the Watchdog website, which provides information exclusively to Members and congressional staff through the House and Senate intranets. The new site is designed to provide a more interactive interface for Members and their staff to request our assistance and to access our ongoing work. In addition, Watchdog can help users quickly find GAO's issued reports and legal decisions as well as key contact information. In December 2013, Members and their staff were invited to comment on our draft Strategic Plan for Serving Congress in FYs 2014-2019. The draft plan was issued in February 2014 and outlines our proposed goals and strategies for supporting Congress's top priorities. Our strategic plan framework (Appendix III) summarizes the global trends, as well as the strategic goals and objectives, that guide our work. GAO's strategic goals and objectives are shown in Figure 1. The draft strategic plan also summarizes the trends shaping the United States and its place in the world.
The plan reflects the areas of work we plan to undertake, including science and technology, weapons systems, the environment, and energy. We also will increase collaboration with other national audit offices to get a better handle on global issues that directly affect the United States, including international financial markets, food safety, and medical and pharmaceutical products. The trends identified in the plan include: U.S. National Security Interests; Fiscal Sustainability and Challenges; Global Interdependence and Multinational Cooperation; Science and Technology; Communication Networks and Information Technology; Shifting Roles in Governance and Government; and Demographic and Societal Changes. In the upcoming decade, for example, the United States will face demographic changes that will have significant fiscal impacts on both the federal budget and the economy. The number of baby boomers turning 65 is projected to grow from an average of about 7,600 per day in 2011 to more than 11,600 per day in 2025, driving spending for major health and retirement programs. To ensure the updated strategic plan reflects the needs of Congress and the nation, we have solicited comments from stakeholders in addition to Congress, including GAO advisory entities, the Congressional Budget Office, and the Congressional Research Service. To manage our congressional workload, we continue to take steps to ensure our work supports congressional legislative and oversight priorities and focuses on areas where there is the greatest potential for results, such as cost savings and improved government performance. Ways that we actively work with congressional committees in advance of new statutory mandates include (1) identifying mandates in real time as bills are introduced; (2) participating in ongoing discussions with congressional staff; and (3) collaborating to ensure that the work is properly scoped and is consistent with the committee's highest priorities. In FY 2013, 35 percent of our audit resources were devoted to mandates and 61 percent to congressional requests. I have met with the chairs and ranking members of many of the standing committees and their subcommittees to hear firsthand feedback on our performance, as well as to highlight the need to prioritize requests for our services to maximize the return on investment. GAO also appreciates Congress's assistance in repealing or revising statutory mandates that are either outdated or need to be revised. This helps streamline GAO's workload and ensure we are better able to meet current congressional priorities. During the second session of the 112th Congress, based on our input, 16 of GAO's mandated reporting requirements were revised or repealed because over time they had lost relevance or usefulness. In addition, GAO worked with responsible committees to have 6 more mandates repealed or revised as part of the 2014 National Defense Authorization Act. GAO has identified 11 additional mandates for revision or repeal and is currently working with the appropriate committees to implement these changes. For example, our request includes language to repeal a requirement for GAO to conduct bimonthly reviews of state and local use of Recovery Act funds. As the vast majority of Recovery Act funds have been spent, GAO's reviews in this area are providing diminishing returns for Congress. GAO is seeking authority to establish a Center for Audit Excellence to improve domestic and international auditing capabilities.
The Center also will provide an important tool for promoting good governance, transparency, and accountability. There is worldwide demand for an organization with GAO's expertise and stature to assume a greater leadership role in developing institutional capacity in other audit offices and to provide training and technical assistance throughout the domestic and international auditing communities. The proposed Center would operate on a fee basis, generating revenue to sustain its ongoing operation, including the cost of personnel and instructors. The Center would be staffed primarily with retired GAO and other auditors and thus would not detract from or affect the service GAO provides to Congress. In a similar vein, to provide staff from other federal agencies with developmental experiences, GAO is requesting authority to accept staff from other agencies on a non-reimbursable basis so that they can learn about GAO's work. This would allow people to develop expertise and gain experience that will enhance their work at their own agencies. We take great pride in reporting that we continue to be recognized as an employer of choice and have been consistently ranked near the top on "best places to work" lists. In 2013, we ranked third overall among mid-sized federal agencies on the Partnership for Public Service's "Best Places to Work" list and again ranked number one in our support of diversity. Also, in November 2013, Washingtonian Magazine named us one of the "50 Great Places to Work" in the Washington, D.C., region among public or private entities. In addition, earlier this year, O.C. Tanner, a company that develops employee recognition programs, cited us in its article, "Top 10 Coolest Companies to Work for in Washington, D.C." Our management continues to work with our union (IFPTE, Local 1921), the Employee Advisory Council, and the Diversity Advisory Council to make GAO a preferred place to work. GAO's FY 2015 budget request will preserve staff capacity and continue critical infrastructure investments. Offsetting receipts and reimbursements, primarily from program and financial audits and rental income, totaling $30.9 million are expected in FY 2015. The requested resources provide the funds necessary to ensure that GAO can meet the highest priority needs of Congress and produce results to help the federal government deal effectively with its serious fiscal and other challenges. A summary of GAO's appropriations for our FY 2010 baseline and FYs 2013 to 2015 is shown in Figure 2. The requested funding supports a staffing level of 2,945 FTEs and provides funding for mandatory pay costs, staff recognition and benefits programs, and activities to support congressional engagements and operations. These funds are essential to ensure GAO can address succession planning challenges, provide staff meaningful benefits and appropriate resources, and compete with other agencies, nonprofit institutions, and private firms that offer these benefits to the talent GAO seeks. In order to address the priorities of Congress, GAO needs a talented, diverse, high-performing, knowledgeable workforce. However, a significant proportion of our employees are currently retirement eligible, including 34 percent of our executive leadership and 21 percent of our supervisory analysts. Therefore, workforce and succession planning remain a priority for GAO.
Moreover, for the first time in several years, our budget allows us to replenish the much-needed pipeline of entry-level and experienced analysts to meet future workload challenges. In FY 2014, through targeted recruiting, GAO plans to hire entry-level staff and student interns, boosting our staff capacity for the first time in 3 years to 2,945 FTEs. This will allow GAO to reverse the downward trend in our FTEs, achieve some progress in reaching our optimal staffing level of 3,250 FTEs, and develop a talent pool for the future. Our FY 2015 budget request seeks funding to maintain the 2,945 FTE level. In FY 2015, pending final OPM guidance, we also plan to implement a phased retirement program to encourage retirement-eligible staff to remain with GAO and assist in mentoring and sharing knowledge with other staff. Efforts to address challenges related to GAO's internal operations primarily relate to our engagement efficiency, information technology, and building infrastructure needs. To better serve Congress and the public, we expanded our presence in digital and social media, releasing GAO iPhone and Android applications and launching streaming video web chats with the public. During the past year, 7,600 additional people began receiving our reports and legal decisions through our Twitter feed. More than 26,600 people now get our reports, testimonies, and legal decisions daily on Twitter. GAO remains focused on improving the efficiency of our engagements by streamlining or standardizing processes without sacrificing quality. In FYs 2012 and 2013, we continued our improvements in this area. For example, with active involvement from GAO's managing directors, we identified changes to key steps and decision points in our engagement process, and we began implementing the revised process on a pilot basis in January 2014. We also piloted and revised a tool to help teams better estimate the staff days expected to be required for engagements. In FY 2014, we plan to implement a series of process changes that will transform the management of engagements, the use of resources, and message communication.
More Efficient Content Creation, Review, and Publication
GAO will strive to dramatically improve the efficiency of our content creation and management processes by standardizing, automating, and streamlining the currently cumbersome and manually intensive processes for creating, fact-checking, and publishing GAO products. In FY 2014, we plan to request proposals to acquire a technical solution and to phase implementation in FYs 2014 and 2015. The proposed system will automate document routing and approvals, incorporate management and quality assurance steps, and generate required documentation. To ensure our message is available to both our clients and the public, the proposed system will also enable GAO to routinely publish content on GAO.gov, GAO's mobile site, and various social media platforms.
Greater Transparency of Engagement Information
To promote transparency, increase management capabilities, and reduce duplicate data entry and costs, in FY 2014 GAO will begin implementing a modernized, one-stop engagement management system. This system automates key business rules and decision points, improves resource management, eliminates rework, and provides increased visibility for all participants. In FY 2015, we will retire legacy databases as the new system becomes fully operational.
The FY 2015 budget also provides funds to maintain our information technology (IT) systems, which are a critical element in our goal to maintain efficient and effective business operations and to provide the data needed to inform timely management decisions. Improvements to our aging IT infrastructure will allow GAO to further streamline business operations, reduce redundant efforts, increase staff efficiency and productivity, improve access to information, and enhance our technology infrastructure to support an array of engagement management, human capital, and financial management systems. GAO also plans to continue upgrading aging building systems to ensure more efficient operations and security. To support these requirements, our FY 2015 budget request includes resources to begin upgrading the heating, ventilation, and air conditioning system to increase energy efficiency and reliability; repair items identified in our long-range asset management plan, such as the water heater, chiller plant, and cooling fans; enhance continuity planning and emergency preparedness capabilities; and address bomb blast impact mitigation efforts.

In conclusion, GAO values the opportunity to provide Congress and the nation with timely, insightful analysis. The FY 2015 budget requests the resources to ensure that we can continue to address the highest priorities of Congress. Our request seeks an increase to maintain our staffing level and provide employees with the appropriate resources and support needed to effectively serve Congress. The funding level will also allow us to continue efforts to promote operational efficiency and begin addressing long-deferred investments and maintenance. This concludes my prepared statement. I appreciate, as always, your continued support and careful consideration of our budget. I look forward to discussing our FY 2015 request with you.

Limiting the Federal Government’s Fiscal Exposure by Better Managing Climate Change Risks (new)
Management of Federal Oil and Gas Resources
Modernizing the U.S. Financial Regulatory System and Federal Role in Housing Finance
Restructuring the U.S. Postal Service to Achieve Sustainable Financial Viability
Funding the Nation’s Surface Transportation System
Strategic Human Capital Management
Transforming DOD Program Management
DOD Approach to Business Transformation
DOD Business Systems Modernization
DOD Support Infrastructure Management
DOD Financial Management
DOD Supply Chain Management
DOD Weapon Systems Acquisition
Ensuring Public Safety and Security
Mitigating Gaps in Weather Satellite Data (new)

Appendix III: GAO’s Strategic Plan Framework
GAO’s mission is to support Congress in meeting its constitutional responsibilities and to help improve the performance and accountability of the federal government for the benefit of the American people. GAO provides nonpartisan, objective, and reliable information to Congress, federal agencies, and the public and recommends improvements, when appropriate, across the full breadth and scope of the federal government’s responsibilities.

GAO’s work supports a broad range of interests throughout Congress. In FY 2013, GAO received requests for our work from 95 percent of the standing committees of Congress and almost two-thirds of their subcommittees. Additionally, senior GAO officials testified at 114 hearings on national and international issues before 60 committees and subcommittees whose jurisdictions touch on virtually all major federal agencies. GAO remains one of the best investments in the federal government, and GAO’s dedicated staff continues to deliver high-quality results. In FY 2013 alone, GAO’s work yielded $51.5 billion in financial benefits—a return of about $100 for every dollar invested in GAO. Since FY 2003, GAO’s work has resulted in over half a trillion dollars in financial benefits and about 14,500 program and operational benefits that helped to change laws, improve public services, and promote sound management throughout government.

GAO is requesting a budget of $525.1 million to preserve its staff capacity and continue critical information technology and building infrastructure investments. GAO’s fiscal year (FY) 2015 budget request of $525.1 million seeks an increase of 3.9 percent to maintain staff capacity as well as continue necessary maintenance and improvements to our information technology (IT) and building infrastructure. Additionally, receipts and reimbursements, primarily from program and financial audits and rental income, totaling $30.9 million are expected in FY 2015.

GAO recently issued our draft Strategic Plan for Serving Congress in FYs 2014-2019. The plan outlines our proposed goals and strategies for supporting Congress’s top priorities. I also have met with the Chairs and Ranking Members of many of the standing committees and their subcommittees to hear firsthand feedback on our performance, as well as to prioritize requests for our services to maximize the return on investment.

To address congressional priorities and fulfill GAO’s mission, a talented, diverse, high-performing, knowledgeable workforce is essential. Workforce and succession planning remain a priority for GAO. A significant proportion of our employees are currently retirement eligible, including 34 percent of our executive leadership and 21 percent of our supervisory analysts. In 2014, through a targeted recruiting strategy to address critical skills gaps, GAO plans to boost our employment level for the first time in 3 years to 2,945 full-time equivalents (FTE). The requested FY 2015 funding level will preserve the strides planned for FY 2014 to increase our staff capacity. In conjunction with the ongoing recruiting efforts and planning, we will revive our intern program and hire and train an increased number of entry-level employees. This will reverse the downward staffing trajectory, develop a talented cadre of analysts and leaders for the future, achieve progress toward an optimal staffing level of 3,250 FTE, and assist GAO in meeting the high-priority needs of Congress.
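As a quick sanity check on the return-on-investment figure cited above, the sketch below divides the reported FY 2013 financial benefits by an annual budget on the order of the FY 2015 request; using the $525.1 million request as the denominator is an illustrative assumption, since the actual FY 2013 appropriation differed somewhat.

```python
# Rough check of the "about $100 for every dollar invested" claim.
# The $51.5 billion in benefits is from the testimony; the FY 2015
# request is used as a stand-in for the annual budget (an assumption).
financial_benefits = 51.5e9   # FY 2013 financial benefits, dollars
annual_budget = 525.1e6       # FY 2015 request, dollars (proxy)

print(f"Return per dollar invested: ~${financial_benefits / annual_budget:.0f}")
# Prints ~$98, consistent with "about $100."
```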
We also take great pride in reporting that we continue to be recognized as an employer of choice and have been consistently ranked near the top of “best places to work” lists. Improvements to our aging IT infrastructure will allow GAO to further streamline business operations, increase staff efficiency and productivity, and improve access to information. Planned investments in IT will address deferred upgrades and enhance our technology infrastructure to support an array of engagement management, human capital, and financial management systems. We also plan to continue upgrading aging building systems to ensure more efficient operations and security. Areas of focus include increasing the energy efficiency and reliability of the heating, ventilation, and air conditioning system; enhancing continuity planning and emergency preparedness capabilities; and addressing bomb blast impact mitigation efforts.
Energy, and specifically petroleum-based fuel, will be a key issue facing the nation during the 21st century. The United States accounts for only 5 percent of the world’s population but about 25 percent of the world’s oil demand. The Department of Energy projects that worldwide oil demand will continue to grow, reaching 118 million barrels per day in 2030, up from 84 million barrels per day in 2005. Although countries such as China and India will generate much of this increased demand, the United States will remain the world’s largest oil consumer. World oil production has been running at near capacity in recent years to meet rising consumption, putting upward pressure on oil prices. The potential for disruptions in key oil-producing regions of the world, such as the Middle East, and the yearly threat of hurricanes in the Gulf of Mexico have also exerted upward pressure on oil prices. Crude oil prices almost tripled from 2003 through the beginning of 2008, rising from $36 a barrel to as high as $100 a barrel. In 2007, about 67 percent of the oil consumed in the United States was imported, and this energy dependence on other countries raises concerns about the effects of international turmoil in the Middle East and elsewhere. In addition, worldwide supplies of oil from conventional sources remain uncertain. U.S. oil production peaked around 1970, and worldwide production could peak and begin to decline, although there is great uncertainty about when this might happen. Moreover, there are differences of opinion as to how long the nation can rely on petroleum-based fuel to meet the majority of its energy needs. As a result, we have previously reported that, in addition to expanding production, the United States may need to place more emphasis on demand reduction strategies as well as on developing alternative or renewable energy supplies and technologies.

DOD is the single largest energy consumer in the United States, and it consumes about 90 percent of the petroleum-based fuel used by the U.S. government. Jet fuel constitutes more than half of DOD’s total energy consumption. Other types of petroleum-based fuels used by DOD include marine and auto diesel. According to the Department of Defense Annual Energy Management Report for fiscal year 2006, DOD consumed approximately 4.6 billion gallons of mobility fuels in fiscal year 2006, down from 5.17 billion gallons in fiscal year 2005. However, spending on mobility fuels increased 26.5 percent, from $7.95 billion in fiscal year 2005 to $10.06 billion in fiscal year 2006. DOD attributed this cost increase to the rise in fuel prices. For example, the price of jet fuel increased from $1.70 per gallon in fiscal year 2005 to $2.34 per gallon in fiscal year 2006. Congress, in fiscal year 2006, provided DOD more than $2 billion in supplemental funds to cover increased fuel costs. In fiscal year 2007, DOD reported that the department consumed almost 4.8 billion gallons of mobility fuel and spent $9.5 billion. Although fuel costs represent less than 3 percent of the total DOD budget, they have a significant impact on the department’s operating costs. DOD has estimated that for every $10 increase in the price of a barrel of oil, DOD’s operating costs increase by approximately $1.3 billion. Fuel presents an enormous logistical burden for DOD when planning and conducting military combat operations.

For current operations, the fuel logistics infrastructure requires, among other things, long truck convoys that move fuel to forward-deployed locations while exposed to operational threats such as enemy attacks (see figs. 1 and 2). Army officials have estimated that about 70 percent of the tonnage required to position Army forces for battle consists of fuel and water. An armored division can use 600,000 gallons of fuel a day, and an air assault division can use 300,000 gallons a day. In addition, combat support units consume more than half of the fuel the Army uses on the battlefield. Aircraft also burn through fuel at rapid rates; a B-52H, for example, burns approximately 3,500 gallons per flight hour. Of the four military services, the Air Force consumes the greatest amount of petroleum-based fuels.
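To give a sense of scale, the sketch below runs some rough consistency checks on the figures above: the implied average price per gallon in FYs 2005 and 2006, the annual consumption implied by DOD's $10-per-barrel rule of thumb, and the daily tanker loads implied by division-level demand. The report's numbers are used as given; the 42-gallon barrel is a standard conversion, and the 5,000-gallon tanker capacity is an illustrative assumption, not a DOD planning factor.

```python
# Rough consistency checks on the fuel figures reported above.
GALLONS_PER_BARREL = 42

# Reported mobility-fuel consumption and spending.
fy2005 = {"gallons": 5.17e9, "dollars": 7.95e9}
fy2006 = {"gallons": 4.60e9, "dollars": 10.06e9}
for name, d in (("FY 2005", fy2005), ("FY 2006", fy2006)):
    print(f"{name}: average price ~${d['dollars'] / d['gallons']:.2f}/gallon")
print(f"Spending increase: {fy2006['dollars'] / fy2005['dollars'] - 1:.1%}")

# DOD's rule of thumb: a $10/barrel oil price rise adds ~$1.3 billion in
# operating costs, implying annual consumption of roughly:
implied_gallons = (1.3e9 / 10) * GALLONS_PER_BARREL
print(f"Implied consumption: ~{implied_gallons / 1e9:.1f} billion gallons/year")

# Convoy burden implied by division-level demand, assuming a 5,000-gallon
# tanker load (an illustrative figure, not a DOD planning factor).
TANKER_GALLONS = 5_000
for unit, demand in (("armored division", 600_000),
                     ("air assault division", 300_000)):
    print(f"{unit}: ~{demand / TANKER_GALLONS:.0f} tanker loads/day")
```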
DOD has existing policies and organizational responsibilities for managing energy commodities, including petroleum, natural gas, coal, and electricity, to support peacetime and wartime missions and to permit successful and efficient deployment and employment of forces. Its overarching policy directive on managing energy commodities and related services establishes policy on standardizing fuels, minimizing inventory levels, maximizing use of alternative fuel sources from host nations and commercial sources, and privatizing energy infrastructure at military installations. The Defense Energy Support Center, within the Defense Logistics Agency, finances fuel purchases through a defense working capital fund. The military services purchase fuel from the Defense Energy Support Center using funds appropriated for their operation and maintenance accounts. Various DOD components have a role in planning for fuel demand and managing fuel storage and delivery.

DOD has been exploring issues surrounding its reliance on petroleum through a number of studies sponsored by various offices within OSD. In 2001, the Defense Science Board issued the results of its study on improving the fuel efficiency of weapons platforms, in response to a tasking from the Under Secretary of Defense for Acquisition, Technology, and Logistics. In 2006, the Office of the Director, Defense Research and Engineering, sponsored a study by The JASONs, an independent defense advisory group under The MITRE Corporation, to assess ways to reduce DOD’s dependence on fossil fuels. Under the sponsorship of the Office of Force Transformation and Resources, within the Office of the Under Secretary of Defense for Policy, LMI issued a 2007 report on an approach to establishing a DOD energy strategy. During the period in which we were conducting our review, the Defense Science Board, at the direction of the Under Secretary of Defense for Acquisition, Technology, and Logistics, issued a new report on DOD’s energy strategy. These studies have been supplemented by internal DOD reviews and other efforts, such as informational forums at the National Defense University, to explore fuel reduction strategies.

OSD, the Joint Staff, and the military services have made efforts to reduce mobility energy demand for DOD’s forces and in weapons platforms. At the department level, OSD and the Joint Staff have several efforts under way to begin to incorporate fuel efficiency considerations in DOD’s requirements development and acquisition processes. In addition, each of the military services has its own initiatives under way to reduce mobility energy demand.
The discussion that follows highlights several key efforts and is not intended to be a comprehensive listing of all fuel reduction efforts. Department officials from several offices within OSD and the Joint Staff have initiated efforts to address mobility energy demand. In 2006, OSD created the DOD Energy Security Task Force to address energy security concerns. The task force’s integrated product team, which includes representatives from the military services; defense agencies; the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics; the Office of the Under Secretary of Defense for Policy; the Office of the Principal Deputy Under Secretary of Defense (Comptroller); the Joint Staff; and OSD’s Program Analysis and Evaluation office, typically meets each month and has formed several working groups to share information and ideas on efforts to reduce fuel demand in current and future weapons platforms. The integrated product team reports to a senior steering group, consisting of principal deputy secretaries of defense and service under secretaries and assistant secretaries. Among other activities, the task force recommended funding in fiscal year 2008 for several military service-led energy-related research and development projects, and it is monitoring their progress (see table 1). In addition to focusing on research and development initiatives, DOD has recognized a need to factor energy efficiency considerations into its acquisition process. In 2007, the Deputy Secretary of Defense included energy in DOD’s list of the top 25 transformational priorities for the department, as part of its initiative to pursue targeted acquisition reforms. Also, in April 2007, the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics established a DOD policy to include the fully burdened cost of fuel—that is, the total ownership cost of buying, moving, and protecting fuel in systems during combat—for the acquisition of all tactical systems that create a demand for energy. To incorporate the fully burdened cost of energy into acquisition decisions, OSD initiated a pilot program that includes three systems: the Army and Marine Corps’ Joint Light Tactical Vehicle, the Navy’s new CG(X) cruiser, and the Air Force’s Next-Generation Long-Range Strike aircraft. To further facilitate the implementation of this policy, OSD’s Program Analysis and Evaluation office developed a methodology for assessing the fully burdened cost of fuel and completed its initial analyses of the first system, the Joint Light Tactical Vehicle, last fall. According to the DOD policy, the results of the pilot program are expected to be used as the basis for implementation across all relevant acquisition programs. In another initiative, the Joint Staff added language to its guidance in May 2007 requiring that an energy efficiency key performance parameter be selectively considered in the development of capability requirements for new systems. The guidance defines a key performance parameter as an attribute or characteristic of a system that is considered critical or essential to the development of an effective military capability. For example, a survivability key performance parameter is applicable for manned systems designed to enhance personnel survival when employed in an asymmetric threat environment. 
In general, a key performance parameter represents a system attribute that is so significant that failure to meet its minimum threshold could be a reason for DOD or the military services to reevaluate the concept or system or terminate the program. In response to the work conducted by the DOD Energy Security Task Force, the Joint Staff has also been directed to lead an assessment of simulator capability and capacity across the department. This effort is expected to analyze whether the increased use of simulators could substitute for live training without degrading operational capability. The study will also identify barriers to implementation and needed policy changes.

The Army has begun a number of efforts to reduce mobility energy demand. These activities include undertaking initiatives to reduce fuel consumption in theater, determining the total costs of delivering fuel, and developing an Army energy strategy. The Army, through the office of the Army Rapid Equipping Force, created the Power Surety Task Force in 2006 to address a joint urgent operational needs statement from a U.S. commander in Iraq that called for alternative energy sources to reduce the amount of fuel transported to supply power generation systems at forward-deployed locations. The Power Surety Task Force aims to foster the development of projects and programs that are deployable within 18 months. Two of the Power Surety Task Force’s initiatives—foam-insulated tents and temporary biodegradable dome structures that are more efficient to heat and cool—are expected to reduce the number of generators required to produce power at forward-deployed locations. Another initiative is the development of a transportable hybrid electric power station, which uses wind, solar energy, a diesel generator, and storage batteries to provide reliable power with fewer fuel requirements. According to Army Rapid Equipping Force officials, the power station could potentially replace about half of the current generators at forward-deployed locations. Moreover, they estimated that annual savings in Iraq from some of these initiatives could be at least $1.7 billion and that other benefits could include a reduction in the number of trucks required in supply convoys, potentially saving lives and reducing vehicle maintenance requirements. We did not validate the Army Rapid Equipping Force’s cost savings estimate.

Another ongoing Army activity is its effort to determine the total costs of delivered energy for Army systems. The Army’s “Sustain the Mission Project” was started in 2004 to institutionalize a fully burdened cost methodology in the Army. The methodology uses existing Army and DOD databases, metrics, and processes to calculate the fully burdened cost of fuel and to facilitate “what if” analyses for different assumptions and scenarios. It is also aimed at enabling decision makers to perform cost-benefit analyses of investments in alternative energy and weapons systems technologies. The Army has scheduled a demonstration of this tool in late March 2008. The Army will also sponsor a study that officials expect will lead to the development of a tactical fuel and energy strategy for the future modular force. The contract for the 1-year study was expected to be awarded in 2008. Army officials told us that they plan to update the Army’s energy regulation following completion of the study. The current regulation focuses on facility energy, but according to Army officials, the updated version is expected to include mobility fuel as well.
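The fully burdened cost concept behind both the OSD pilot program described earlier and the Army's "what if" analyses above can be illustrated with a simple sketch that adds assumed delivery and protection burdens to the commodity price and recomputes the total under different scenarios. The scenario names and per-gallon burden values below are invented for illustration; they are not DOD or Army planning factors.

```python
# Illustrative fully burdened cost of fuel (FBCF): commodity price plus
# apportioned delivery and protection costs at the point of use. All
# burden values are assumptions for illustration, not DOD methodology.
commodity_price = 2.34  # FY 2006 jet fuel price from the report, $/gallon

scenarios = {  # assumed per-gallon logistics burdens
    "peacetime, bulk delivery": {"delivery": 0.50, "protection": 0.10},
    "theater, ground convoy":   {"delivery": 4.00, "protection": 3.00},
    "remote outpost, airlift":  {"delivery": 20.00, "protection": 5.00},
}

for name, b in scenarios.items():
    fbcf = commodity_price + b["delivery"] + b["protection"]
    print(f"{name}: fully burdened cost ~${fbcf:.2f}/gallon")
```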
The Navy has established a shipboard energy conservation program and has undertaken other initiatives to save fuel on ships. The energy conservation program has both training and award components to encourage ships to reduce energy consumption. Training materials and activities include a shipboard energy conservation manual, a pocket guide to assist commanders with energy-saving activities, energy audits of ships to show commanders how energy can be saved, and energy conservation seminars and workshops. Awards are given quarterly to ships that use less than the Navy’s established baseline amount of fuel, and fuel savings achieved during the quarter are reallocated to the ship for the purchase of items such as paint, coveralls, and firefighting gear. The ship energy conservation program receives $4 million in funding annually, and Navy officials told us that they achieved $124.6 million in cost avoidance in fiscal year 2006. They said that some other benefits of this program include more available steaming hours, additional training for ships, improved ship performance, reduced ship maintenance, and conservation of resources.

The Navy has undertaken other mobility energy reduction efforts as part of its ship energy conservation program, such as ship alterations. Two key ship alterations are the use of stern flaps and the modification of boiler boxes. A stern flap alters the water flow at the stern to reduce a ship’s resistance and increase fuel efficiency. According to Navy officials, preliminary tests of stern flaps on guided missile destroyers showed an annual fuel reduction of 3,800 to 4,700 barrels, or about 6 to 7.5 percent per ship, which DOD estimated would result in potential savings of almost $195,000 per year per ship. Boiler box modifications for amphibious assault ships, which are among the Navy’s largest fuel consumers, are expected to decrease the amount of fuel expended by 2 percent per ship. Navy officials told us that this alteration has been approved and that most alterations would be completed in fiscal year 2009. According to Navy officials, once all alterations are completed in fiscal year 2011, this effort could potentially save approximately $30 million per year, depending on the price of fuel. We did not validate these potential savings.
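As a rough illustration of the arithmetic behind the stern-flap figures above, the following sketch derives the fuel price and baseline consumption they imply; the 42-gallon barrel is a standard conversion, and the midpoint of the reported range is used for simplicity.

```python
# Back-of-the-envelope check of the reported stern-flap savings:
# 3,800-4,700 barrels per ship per year, valued at ~$195,000.
GALLONS_PER_BARREL = 42

saved_barrels = (3_800 + 4_700) / 2   # midpoint of the reported range
saved_dollars = 195_000               # reported annual savings per ship

price_per_barrel = saved_dollars / saved_barrels
print(f"Implied fuel price: ~${price_per_barrel:.0f}/barrel "
      f"(~${price_per_barrel / GALLONS_PER_BARREL:.2f}/gallon)")

# At the reported 6-7.5 percent savings rate, implied baseline use:
for rate in (0.06, 0.075):
    print(f"Baseline at {rate:.1%}: ~{saved_barrels / rate:,.0f} barrels/year")
```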
In 2005, the Air Force implemented an energy strategy that consists of three components: reducing demand, increasing supply, and changing the culture. At the time of our report, the Air Force was in the process of updating its instructions and directives to reflect its energy strategy and to establish an overarching Air Force energy policy. In addition, the Air Force has identified and begun to implement initiatives aimed at reducing mobility energy demand and increasing fuel efficiency, aligning these initiatives with its energy strategy. Four key initiatives are as follows:

Direct routing. This initiative intends to reduce flight time and fuel consumption by flying the most fuel-efficient flight routes and altitudes.

Weight reduction. This initiative intends to decrease excess weight on an aircraft without adversely affecting mission capability. Three categories being considered are taking unused items off the aircraft, taking fewer of the items that are needed, and redesigning mission-critical items, for example, with lighter materials. According to Air Force officials, every 100 pounds of weight removed equates to 1.6 million pounds of fuel, or $686,000, per year across its fleet of mobility aircraft.

Air refueling optimization. With this initiative, the Air Force intends to change the flight planning process to limit air refueling to missions for which it is essential.

Efficient ground operations. This initiative intends to reduce fuel burn during ground operations. Actions include reducing warm-up time and taxiing on fewer engines.

In addition to these demand-reduction initiatives, the Air Force is pursuing efforts to increase supply through the research and testing of new technologies, as well as renewable and sustainable resources. Through the Air Force’s synthetic fuel initiative, jet fuels made from alternative energy sources, such as coal, natural gas, and biomass, are being evaluated for use in military aircraft with the goal of reducing future fuel costs and ensuring fuel availability. The Air Force completed initial testing of a synthetic fuel blend in the B-52H bomber and certified the use of this blend for that aircraft in August 2007. The service has begun testing on the C-17 cargo aircraft, the B-1 bomber, and the F-22 fighter, with certification expected in 2008. Air Force officials said that they expect the entire fleet to be certified to fly on the synthetic fuel blend by 2011. However, our prior work has highlighted challenges associated with the development and adoption of alternative energy sources. Finally, the Air Force aims to create a culture that emphasizes energy considerations in all of its operations. Air Force officials told us that this component of their strategy has multiple elements, including focused leadership, training, educational curricula, and communication.

The Marine Corps has taken steps to reduce its fuel usage by initiating research and development efforts to develop alternative power sources and improve fuel management. For example, it is testing the use of additional alternators in certain vehicles to provide onboard power capabilities, which could reduce the use of petroleum-based fuel and the number of generators needed on the battlefield. Another initiative involves providing hybrid power—by combining solar panel, generator, and battery energy sources—at remote sites to lessen fuel transportation demands to forward-deployed locations. The Marine Corps expects to begin testing this initiative in October 2008. In addition, the Office of Naval Research is leading efforts for the Marine Corps to develop decision support tools that process and analyze data and improve fuel management in combat. Examples include sensors for fuel containers to measure the amount of remaining fuel and onboard vehicle sensors that automatically generate a requirement when additional fuel is needed.

While DOD and the military services have several efforts under way to reduce mobility energy demand, DOD lacks key elements of an overarching organizational framework to guide and oversee these efforts. As a result, DOD cannot be assured that its current efforts will be fully implemented and will significantly reduce its reliance on petroleum-based fuel.
While DOD has identified energy as one of its transformational priorities, DOD’s current approach to mobility energy lacks (1) top leadership, with a single executive-level OSD official—supported by an implementation team with dedicated resources and funding—who is accountable for mobility energy matters; (2) a comprehensive strategic plan for mobility energy; and (3) an effective mechanism to provide for communication and coordination of mobility energy efforts among OSD and the military services as well as leadership and accountability over each military service’s efforts. In the absence of a framework for mobility energy that includes these elements, DOD has made limited progress in incorporating fuel efficiency as a consideration in its key business processes—which include developing requirements for and acquiring new weapons systems—and in implementing recommendations made in department-sponsored studies. DOD’s current approach to mobility energy is decentralized, with fuel oversight and management responsibilities diffused among several OSD and military service offices as well as working groups. More specifically, we found its approach lacks key elements of an overarching organizational framework, including a single executive-level OSD official—supported by an implementation team—who is accountable for mobility energy matters, a comprehensive strategic plan, and an effective mechanism for departmentwide communication and coordination. Our prior work on organizational transformations has found such a framework to be critical to successful transformation in both public and private organizations. In addition, it is important to note that DOD has a history of creating organizational frameworks to address other crosscutting issues. DOD’s policies for energy management assign oversight and management responsibilities to several different offices without providing a single focal point with total visibility of, or accountability for, mobility energy reduction efforts across the department. Table 2 outlines various roles and responsibilities for fuel management and oversight. As table 2 shows, DOD policies do not assign responsibility for fuel reduction considerations—either singly or jointly—to any of the various offices involved in fuel management. While DOD directives designate the Under Secretary of Defense for Acquisition, Technology, and Logistics as the department’s senior energy official, with responsibility for establishing policies, granting waivers, and approving changes in the management of energy commodities, including petroleum, the extent to which this official provides comprehensive guidance and oversight of fuel reduction efforts across the department is unclear. Moreover, DOD has charged the Office of the Deputy Under Secretary of Defense (Logistics and Materiel Readiness) to serve as the DOD central administrator for mobility energy policy with overall management responsibility for petroleum and other commodities. We found that although this office plays an active role in maintaining DOD policy on energy supply issues and participates in other department-level fuel-related activities, its primary focus has not been on departmentwide fuel reduction efforts. At the military service level, we found that the Air Force and the Army have established working groups to address fuel reduction and other energy issues. For example, the Air Force has established a senior focus group of high-level Air Force officials to address both mobility and facility energy issues. 
The senior focus group has created several working groups to address specific energy issues, such as aviation operations, acquisitions and technology, and synthetic fuels, as well as advisory groups on strategic communication, critical infrastructure protection, and financing. The Army also has established an energy working group to facilitate the discussion of energy issues across the service, including how to address rising fuel costs. The group meets each month to share information and identify issues across the Army. At the time of our review, the Army was in the process of establishing a senior steering group of high-level Army officials that would meet to discuss mutual energy concerns. While the Navy and Marine Corps have not established similar formal working groups, officials from both military services told us that they participate in internal meetings on fuel reduction issues. While DOD has begun to increase management attention and has identified energy as a transformational priority, it has not designated a single executive-level OSD official—supported by an implementation team—who is accountable for mobility energy matters across the department. Our prior work has shown that top-level leadership and an implementation team with dedicated resources and funding are key elements of an overarching organizational framework. Furthermore, leadership must set the direction, pace, and tone and provide a clear, consistent rationale that brings everyone together behind a single mission. The Under Secretary of Defense for Acquisition, Technology, and Logistics, as the senior DOD energy official, is responsible for management of energy commodities, but this individual also has a broad range of other responsibilities that include, among other things, matters relating to the DOD acquisition system, research and development, systems engineering, logistics, installation management, and business management modernization. Therefore, this individual’s primary focus has not been on the management of mobility energy efforts. Moreover, from a broader perspective, the extent to which the Under Secretary of Defense for Acquisition, Technology, and Logistics has set a direction for the various OSD and military service offices involved in mobility energy is unclear. In addition, DOD’s Energy Security Task Force was formed in 2006 to address long-term departmental energy security requirements, such as DOD’s reliance on fossil fuels, but we found that the task force has been unable to develop policy or provide guidance and oversight of mobility energy issues across the department. As indicated in its charter, the task force’s integrated product team is required to develop a comprehensive DOD energy strategy and an implementation plan. Among other deliverables, the team’s charter also requires it to define DOD’s energy challenge, create a compendium of energy-related works, and perform a strategic assessment of energy. While the task force has taken steps to identify and monitor the progress of selected mobility energy reduction projects across the department, it has not yet completed an energy strategy or implementation plan, as well as other responsibilities. 
Furthermore, OSD officials told us that while the task force has briefed the Deputy Secretary of Defense’s advisory group on its recommended projects, it does not have a “seat at the table” in departmental discussions at the Deputy Secretary of Defense level or at other executive levels, such as the Joint Requirements Oversight Council, the Defense Acquisition Boards, or the 3-Star Group within DOD’s Planning, Programming, Budgeting, and Execution process. DOD also does not have an implementation team in place, with dedicated resources and funding, for mobility energy issues. For example, the officials who lead DOD’s Energy Security Task Force’s integrated product team do so as an extra responsibility outside of their normal work duties. Other DOD officials said that the task force provides a good forum for sharing energy ideas across the department, but lacks adequate staff to carry out specific actions. Furthermore, a task force participant told us that it can be difficult to find time to attend meetings while balancing other duties. The task force also does not receive any dedicated funding to pursue department-level energy priorities. Our prior work on the Government Performance and Results Act of 1993 (GPRA) emphasizes the importance of relating funding to performance goals. The establishment of a dedicated funding mechanism for corrosion, for example, enabled DOD to fund high-priority corrosion reduction projects, which resulted in savings of more than $753 million during a 5-year period. Without a long-term funding mechanism, DOD may not be able to ensure that mobility energy reduction efforts receive sustained funding over a period of years. Moreover, DOD may not be well positioned to serve as a focal point on mobility energy within the department, with Congress, and with the Department of Energy or other interagency partners. During a military energy security forum held at the National Defense University in November 2007, representatives from various DOD offices presented energy as an area that is significant to a breadth of issues ranging from force protection to global stability to the security of DOD’s critical infrastructure. They also noted that DOD has the potential to play multiple roles with respect to energy, including consumer, market leader, educator/motivator, oil infrastructure protector, and warfighter supporter. These concerns, coupled with an increased national and congressional interest in reducing fossil fuel dependence and exploring alternative energies, will likely necessitate an increased leadership focus on long-term energy issues, both within DOD and in its role as a stakeholder in interagency and national dialogues. The Energy Independence and Security Act of 2007, for example, requires a variety of national-level actions, including that the President submit to Congress an annual report on the national energy security of the United States. It also requires DOD to examine energy and cost savings in nonbuilding applications, including an examination of savings associated with reducing the need for fuel delivery and logistical support. In addition, the John Warner National Defense Authorization Act for Fiscal Year 2007 directs DOD to improve the fuel efficiency of weapons platforms. DOD has not yet developed a comprehensive strategic plan for mobility energy. Our prior work has found that strategic planning is a key element of an overarching organizational framework. 
According to GPRA, key elements of a strategic plan include a comprehensive mission statement, goals and objectives, approaches or strategies to achieve those goals and objectives, and methods and timelines for evaluating progress. In addition, we have previously identified other elements that would enhance the usefulness of a strategic plan, including the development of outcome-oriented performance metrics and an alignment of activities, core processes, and resources to support mission-related outcomes. DOD has taken some steps to lay the foundation for mobility energy strategic planning. According to OSD officials, DOD has begun to incorporate mobility energy issues into its Guidance on the Development of the Force, a department-level strategic planning document. In addition, the Office of the Deputy Assistant Secretary of Defense for Policy Planning, within the Office of the Under Secretary of Defense for Policy, is analyzing future energy concerns for the United States and the international security environment and highlighting their implications for the department. DOD officials said that the analysis is expected to provide information for consideration in the development of future strategic planning documents. We also observed that the DOD Energy Security Task Force has begun efforts to define goals that eventually may be incorporated into a DOD energy security strategic plan. OSD officials told us that the task force’s intent is to complete this strategic plan by May 2008. However, current DOD strategic planning documents, such as the National Military Strategy and the most recent Quadrennial Defense Review, do not address mobility energy reduction. Furthermore, until DOD fully develops and implements a comprehensive strategic plan for mobility energy, it cannot be certain that mobility energy reduction efforts align with the department’s energy mission and strategic goals and are appropriately prioritized, or know whether critical gaps or duplication of efforts exist.

DOD does not have an effective mechanism to facilitate communication and coordination of mobility energy reduction efforts among OSD and the military services. Our prior work has shown that a communication strategy involves creating shared expectations and reporting related progress. While DOD’s Energy Security Task Force aims to identify key players within the energy field, its current structure does not ensure departmentwide communication of fuel reduction efforts, particularly among the military services, which are responsible for most of these efforts. More specifically, during our observation of a task force monthly meeting, we found that although this venue provides for some sharing of information, the less than 2 hours generally allotted to each monthly meeting does not allow effective coverage of the full spectrum of DOD’s mobility energy issues. Moreover, we noted that although the task force’s senior steering group includes, among others, the service under secretaries and assistant secretaries; the Director, Defense Research and Engineering; and several principal deputy under secretaries of defense, it meets only two to three times a year. Furthermore, with the exception of the Air Force, none of the other military service members on the senior steering group have primary responsibility for mobility energy reduction efforts within their services.
Without executive-level focal points, the military services may not be well positioned to effectively coordinate on mobility energy reduction efforts across the department or provide leadership or accountability for efforts within their services. In addition, we found a lack of cross-service coordination concerning mobility energy reduction initiatives. Army officials told us that they were unaware of Navy research on fuel reduction metrics, while Air Force officials said that they do not routinely discuss aviation fuel reduction initiatives with their Army counterparts, even though both military services are concerned about aircraft fuel consumption. OSD officials said that while several separate groups are making efforts to reduce fuel consumption, the efforts are often not shared or integrated. Moreover, OSD officials told us that DOD generally lacks incentives to reward the military services for reducing fuel consumption and faces challenges in addressing departmental cultural barriers—such as the traditional view that fuel is simply a commodity and that energy efficiency is not an important consideration for warfighting. Without an effective mechanism to facilitate communication of mobility energy reduction efforts between OSD and the military services, DOD cannot be certain that these efforts are effectively coordinated throughout the department or consistent with DOD’s energy priorities and goals. On a broader level, DOD may not be well positioned to respond to congressional or other agencies’ requests for information on mobility energy. Many OSD, military service, and other DOD officials with whom we spoke expressed the need for an overarching organizational framework to address mobility energy throughout the department. Some officials from OSD suggested that an ideal organizational framework would bring together the various offices within OSD and the military services involved in fuel reduction efforts and establish business practices, analytic methods, and technology investments that take into account strategic risks associated with energy. Some military service officials acknowledged that departmental oversight is needed but told us that they fear such oversight might take resources away from their own mobility energy reduction initiatives. Similarly, some OSD officials said they are concerned that establishing a permanent mobility energy office or similar framework could impose additional bureaucratic layers and slow progress on mobility energy reduction initiatives. We noted that DOD has established new organizational frameworks to address other crosscutting issues, such as business systems modernization, corrosion control and prevention, contractors on the battlefield, and the defeat of improvised explosive devices. While we did not evaluate the strengths or weaknesses of these organizational frameworks as part of this review, they nonetheless provide DOD examples to consider in determining how best to establish an overarching organizational framework for mobility energy. For example, the Business Transformation Agency, which addresses business systems modernization, involves top DOD leadership by operating under the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics but reporting directly to the Deputy Under Secretary of Defense for Business Transformation. DOD has also created a management framework to oversee facility energy, which accounts for about 25 percent of the department’s energy use. 
Specifically, it has designated a senior agency official, the Deputy Under Secretary of Defense for Installations and Environment, with responsibility for meeting federal mandates regarding energy reduction at installations. The department has also created a working group charged with implementing the mandates. In addition, DOD established an Energy Policy Council in 1985 to provide coordinated review of DOD energy policies, issues, systems, and programs. In the instruction outlining the requirements of this council, DOD assigned responsibilities to various departmental offices and designated the then Deputy Assistant Secretary of Defense (Logistics and Materiel Management) as council chair. DOD also called for clearly identified focal points to address energy matters within each military department. When we asked about the status of the council, OSD officials said that they did not believe it still existed. This now-defunct Energy Policy Council could also serve as an example of an organizational framework for mobility energy that provides for sharing of information among the military services.

In the absence of an overarching organizational framework, DOD is not well positioned to fully incorporate fuel efficiency considerations into its key business processes or to fully implement recommendations from DOD-sponsored studies on fuel reduction. DOD has not yet fully incorporated fuel efficiency considerations into key departmental business processes, such as its requirements development and acquisition processes for new weapons platforms and other mobile defense systems. DOD’s process to develop requirements, known as the Joint Capabilities Integration and Development System, is a multistep process that involves identifying what military capabilities the department needs to accomplish its tasks. Once the capabilities are identified, DOD’s acquisition process produces equipment that can meet those requirements. DOD-sponsored studies on fuel reduction, such as the 2007 LMI report, note that the requirements development and acquisition processes provide opportunities for DOD to weigh energy efficiency as it considers capabilities. Moreover, the 2001 Defense Science Board report noted that fuel efficiency benefits are not currently valued or emphasized in DOD’s requirements development and acquisition processes. While DOD has recently begun to take some steps to integrate fuel considerations into these processes, these considerations are not factored in systematically and cannot be fully applied. For example, DOD’s requirements development process does not systematically include energy efficiency considerations, and the capability gap assessments associated with the process do not include fuel-related logistics, thus leaving these types of issues to be resolved after systems are fielded. As described earlier, in May 2007, the Joint Staff established an energy efficiency key performance parameter that would require fuel considerations during capabilities development. However, because DOD has not developed a methodology to determine how best to employ the energy efficiency key performance parameter, implementation of this key performance parameter remains uncertain. DOD has also taken steps to inform its acquisition process with its pilot program to determine the fully burdened cost of fuel for three mobile defense systems.
While the pilot program represents a step toward providing visibility over the total logistics costs associated with delivered fuel, and DOD has set a fall 2008 deadline to issue guidance for applying the fully burdened cost of fuel in acquisition programs, DOD has not yet developed an approach for determining how it would incorporate this information into its acquisition decision-making process. Moreover, the 2008 Defense Science Board report presented some concerns about how fully burdened costs are being calculated. Specifically, the report cited a concern that the analysis focused on peacetime costs and did not adequately consider wartime costs, even though the fully burdened cost analysis is intended to be a wartime capability planning factor. Until the pilot program is completed and the results are assessed, DOD is not in a position to apply a fully burdened cost analysis to its acquisition process. Thus, the department is unable to promote greater visibility over its acquisition decisions or more fully consider the operational and cost consequences of the fuel burden on the logistics infrastructure.

Other key DOD business processes, such as those that address the repair, recapitalization, and replacement of mobile defense systems, also present opportunities to incorporate fuel efficiency measures during system upgrades. However, OSD officials told us that the department generally makes decisions about system upgrades without regard to fuel efficiency, including the fully burdened cost, in part because such decisions require greater up-front costs. Although DOD recognizes that by reducing energy demand it can provide its forces greater flexibility and reduce their dependence on the logistics infrastructure, some OSD officials told us that DOD’s budget process promotes a short-term outlook and does not encourage the purchase of fuel-efficient systems or upgrades that may initially cost more but could reduce life-cycle and logistics costs over the long term. Moreover, the 2008 Defense Science Board report noted that DOD’s lack of tools to assess the operational and economic benefits of fuel efficiency technologies is a major reason why DOD underinvests in the development and deployment of these technologies. In addition, OSD officials told us that DOD does not systematically assess how making fuel efficiency upgrades to systems would affect other logistics issues—for example, how reducing the weight of an Army vehicle would affect the amount of fuel the Air Force transports to the battlefield for that vehicle. Such assessments, they said, may reveal further enhancements in warfighting capabilities.
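The short-term budget dynamic described above can be made concrete with a simple payback calculation comparing an assumed up-front premium for a fuel-efficient upgrade against the annual savings it yields at a fully burdened fuel cost. All values below are illustrative assumptions, not DOD data, and a real analysis would also discount future savings.

```python
# Minimal life-cycle tradeoff sketch: a fuel-efficient upgrade costs
# more up front but cuts annual fuel costs. All inputs are assumptions.
upfront_premium = 5_000_000    # assumed extra acquisition cost, dollars
annual_fuel_gal = 1_000_000    # assumed baseline fuel use, gallons/year
efficiency_gain = 0.10         # assumed 10 percent fuel reduction
burdened_cost = 10.00          # assumed fully burdened cost, $/gallon

annual_savings = annual_fuel_gal * efficiency_gain * burdened_cost
print(f"Annual savings: ${annual_savings:,.0f}")
print(f"Simple payback: {upfront_premium / annual_savings:.1f} years")
# At the $2.34/gallon commodity price alone, payback would be far longer,
# which is why the fully burdened cost matters for these decisions.
```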
In the absence of an overarching organizational framework, DOD has made limited progress in implementing recommendations from department-sponsored studies by organizations such as the Defense Science Board, The JASONs, and LMI that have urged an expansion of efforts to reduce dependency on petroleum-based fuel. These studies confirmed that, for many reasons, continued heavy reliance on petroleum-based fuel poses a significant problem for DOD. For example, LMI reported that DOD’s increasing fuel demand furthers the nation’s reliance on foreign energy sources and limits the department’s ability to establish a more mobile and agile force. The studies found a need to focus more DOD management attention on mobility energy matters and recommended actions aimed at, among other things, improving the fuel efficiency of weapons platforms, eliminating institutional barriers that bear upon the department’s decisions regarding fuel efficiency, and developing a long-term mobility energy strategy that would lead to reduced consumption of petroleum-based fuel. DOD has not taken a formal position on these recommendations, and implementation, in some cases, would require significant changes throughout the department that could generate institutional resistance. One study, for example, called for creating a unified energy governance structure in order to alter DOD’s “energy culture.” During our review, we found that DOD had taken some steps toward implementing some of the recommendations, such as initiating a pilot program for determining the fully burdened cost of delivered fuel and adding a requirement for an energy efficiency key performance parameter to its Joint Staff policy manual. However, other recommendations, such as establishing a governance structure for mobility energy, have not been implemented (see app. II for our summary of the recommendations in DOD-sponsored studies and the actions DOD has taken on those recommendations). The 2008 Defense Science Board report noted that the recommendations made by the 2001 Defense Science Board report are still open and remain viable. An overarching organizational framework could better position DOD to address these and other fuel reduction recommendations in a more timely and effective manner. Moreover, a framework for mobility energy could provide greater assurance that DOD’s efforts to reduce its reliance on petroleum-based fuel will succeed without degrading its operational capabilities and that DOD is better positioned to address future mobility energy challenges.

DOD continues to face rapidly increasing fuel costs and high fuel requirements that have placed a significant logistics burden on its forces. In light of these and other challenges associated with mobility energy, DOD has begun to increase its management attention on reducing its reliance on petroleum-based fuel. Increased national focus on the United States’ dependence on foreign oil, projected increases in the worldwide demand for oil, and uncertainties about world oil supplies will likely require DOD to further increase its focus on long-term energy issues, both within the department and as a stakeholder in interagency and national dialogues. However, DOD will have difficulty addressing mobility energy challenges in the absence of an overarching organizational framework. Without such a framework, DOD is not well positioned to effectively guide and oversee mobility energy reduction efforts from a departmentwide perspective to ensure that efforts are appropriately prioritized; identify critical gaps or duplication of efforts; and address long-term, large-scale energy issues. In particular, no individual at the executive level within OSD has been designated to be accountable for mobility energy and to set the direction, pace, and tone to reduce mobility energy demand across the department. Other elements of an overarching organizational framework include a comprehensive strategic plan and executive-level focal points at the military services to provide for effective coordination.
In addition, until DOD takes steps to further incorporate energy efficiency considerations into its business processes, the department is unable to promote greater visibility in its decision making or fully consider the effects of fuel on the logistics infrastructure. With a mobility energy overarching organizational framework in place, DOD would be better positioned to reduce its significant reliance on petroleum-based fuel and to address the energy challenges of the 21st century.

To improve DOD’s ability to guide and oversee mobility energy reduction efforts, we recommend that the Secretary of Defense direct the Deputy Secretary of Defense to establish an overarching organizational framework by taking the following three actions:

Designate an executive-level OSD official who is accountable for mobility energy matters and who sets the direction, pace, and tone to reduce mobility energy demand across the department; improves business processes to incorporate energy efficiency considerations as a factor in DOD decision making; coordinates on energy issues with facility energy officials; acts as DOD’s focal point in interagency deliberations about national energy concerns; and leads the department’s potential transition from petroleum-based fuel to alternative fuel sources. This official should be supported by an implementation team with dedicated resources and funding.

Direct the executive-level mobility energy official to lead the development and implementation of a comprehensive departmentwide strategic plan for mobility energy. At a minimum, this strategic plan should set forth mobility energy goals and objectives, time frames for implementation, and performance metrics to track and evaluate progress.

Ensure that OSD takes the following steps to fully incorporate energy efficiency considerations into DOD’s requirements development and acquisition processes: develop a methodology to enable the full implementation of an energy efficiency key performance parameter in DOD’s requirements development process, and, as part of its efforts to complete DOD’s fully burdened cost of fuel pilot program, develop an approach for incorporating this cost information into the acquisition decision-making process.

Furthermore, to establish effective communication and coordination between the executive-level OSD mobility energy official and the military services, we recommend that the Secretary of Defense direct the Secretaries of the Army, Navy, and Air Force and the Commandant of the Marine Corps to designate an executive-level official within each of their military services to act as a focal point on departmentwide mobility energy efforts as well as provide leadership and accountability over their own efforts.

In its written comments on a draft of this report, DOD partially concurred with all of our recommendations. Based on DOD’s comments, we made minor modifications to our report, including to our first recommendation. Technical comments were provided separately and incorporated as appropriate. The department’s written comments are reprinted in appendix III. In response to our recommendation that the Secretary of Defense direct the Deputy Secretary of Defense to designate an executive-level OSD official who is accountable for mobility energy matters across the department, DOD acknowledged that there is a need to view and manage its energy challenges in a new, more systematic manner. DOD’s response stated that DOD Directive 5134.01
9, 2005) provides the Under Secretary of Defense for Acquisition, Technology, and Logistics oversight and policy-making authority on DOD energy matters. However, it is clear from our review, including discussions with department officials, that neither the Under Secretary nor any official from this office is providing comprehensive oversight and policy guidance for mobility energy across the department. Instead, we found that DOD’s current approach to mobility energy is decentralized, with fuel oversight and management responsibilities diffused among several OSD and military service offices (see table 2 of this report) as well as working groups. DOD does not assign responsibility for fuel reduction considerations—either singly or jointly—to any of the various offices involved in fuel management. DOD’s response stated that its authorities and responsibilities are consistent with those used for overseeing other significant crosscutting issues. However, as we noted in our report, DOD has established new organizational frameworks to address other crosscutting issues, such as business systems modernization, corrosion control and prevention, contractors on the battlefield, and the defeat of improvised explosive devices. Moreover, DOD has established a focal point for facility energy, the Deputy Under Secretary of Defense for Installations and Environment, within the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics, even though facility energy accounts for about 25 percent of DOD’s total energy consumption. Mobility energy accounts for about three-fourths of DOD’s total energy consumption, but the department has no equivalent focal point for it. Key energy issues—including rising fuel costs, worldwide energy demand, and the high fuel burden during operations—underscore the importance of energy to DOD and will likely require sustained top leadership attention. DOD stated that significant mobility energy efforts are currently under way that will provide for better management of mobility energy. While we acknowledge that DOD has begun to increase management attention on mobility energy issues by creating the DOD Energy Security Task Force, the department does not have an implementation team, with dedicated resources and funding, for mobility energy issues. As we noted in our report, the task force’s current structure does not ensure departmentwide communication of fuel-reduction efforts, particularly among the military services, which are responsible for most of these efforts. Based on DOD’s response to our first recommendation, we made minor modifications to the recommendation to emphasize that DOD should designate an executive-level OSD mobility energy official—supported by an implementation team—who is accountable for mobility energy matters and who sets the direction, pace, and tone to reduce mobility energy demand across the department. This official should also improve business practices to incorporate energy considerations as a factor in DOD decision making; coordinate on energy issues with facility energy officials; act as DOD’s focal point in interagency deliberations about national energy concerns; and lead the department’s potential transition from petroleum-based fuel to alternative fuel sources. Without such an official to provide this leadership, DOD is not well positioned to address mobility energy challenges.
In response to our recommendation that the Secretary of Defense direct the Deputy Secretary of Defense to direct the executive-level mobility energy official to lead the development and implementation of a comprehensive departmentwide strategic plan for mobility energy, DOD indicated that the Under Secretary of Defense for Acquisition, Technology, and Logistics is overseeing the development of a DOD energy security strategic plan, which will be reported to the Deputy’s Advisory Working Group in May 2008. We believe that this is a step in the right direction. As we noted in this report, until DOD fully develops and implements a comprehensive strategic plan for mobility energy—one that sets forth mobility energy goals and objectives, time frames for implementation, and performance metrics to track and evaluate progress—DOD will not be able to ensure that mobility energy reduction efforts align with the department’s energy mission and strategic goals, ensure that those efforts are appropriately prioritized, or know whether critical gaps or duplication of efforts exist. In response to our recommendation that the Deputy Secretary of Defense ensure that OSD takes steps to fully incorporate energy efficiency considerations into DOD’s requirements development process by developing a methodology to enable the full implementation of an energy efficiency key performance parameter, DOD stated that it plans to address how and when it will implement such a methodology in its forthcoming DOD energy security strategic plan. However, this plan does not yet exist. Because DOD is linking the development of a methodology for an energy efficiency key performance parameter to this plan, the implementation of the key performance parameter remains uncertain. Thus, DOD cannot ensure that energy efficiency considerations are factored into its requirements development process in a systematic manner. In addition, in response to our recommendation that DOD develop an approach for incorporating the information from its fully burdened cost of fuel pilot program into its acquisition process, DOD stated that it is developing a plan on how best to assess fuel efficiency relative to the costs and operational capabilities of its weapons systems. Again, until this plan is completed, DOD is not in a position to apply a fully burdened cost analysis to its acquisition process. Thus, the department is unable to promote greater visibility over its acquisition decisions or more fully consider the operational and cost consequences of the fuel burden on the logistics infrastructure. In response to our recommendation that the Secretary of Defense direct the Secretaries of the Army, Navy, and Air Force and the Commandant of the Marine Corps to designate an executive-level official within each of their military services to act as a focal point on departmentwide mobility energy efforts as well as provide leadership and accountability over their own efforts, DOD stated that it will address this issue after it has briefed the DOD energy security strategic plan to DOD senior leaders in May 2008. However, as we noted in this report, a lack of cross-service coordination concerning mobility energy reduction initiatives currently exists. By waiting to address this issue, the department cannot be certain that the mobility energy efforts of the military services are consistent with the department’s energy priorities and goals.
Designating executive-level military service focal points would provide improved leadership and accountability over each service’s efforts as well as increased coordination across the department. We are sending copies of this report to the Secretary of Defense; the Deputy Secretary of Defense; the Under Secretary of Defense for Acquisition, Technology, and Logistics; the Secretaries of the Army, Navy, and Air Force; the Commandant of the Marine Corps; and the Director, Office of Management and Budget. We will also make copies available to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Should you or your staff have any questions concerning this report, please contact me at (202) 512-8365 or solisw@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV. To address our objectives, we focused our work on the Department of Defense’s (DOD) mobility energy issues related to fuel demand for operations. We did not address supply issues, fuel for nontactical vehicles, or DOD facility energy management, except to briefly describe the organizational structure DOD employs to manage energy issues at its fixed installations. To identify key departmental and military service efforts that have been undertaken to reduce demand for mobility energy, we obtained and reviewed documentation from the Office of the Secretary of Defense (OSD), the Joint Staff, and the military services on their key mobility energy reduction efforts. These documents included briefings, policies, directives, military service studies, and associated paperwork on the specific efforts. We also interviewed cognizant departmental and military service officials who identified and provided the documentation for key efforts. At the department level, we spoke with officials involved with the DOD Energy Security Task Force, including members of the integrated product team and working groups, to obtain information about the task force’s goals, accomplishments, and challenges as well as the specific service mobility energy initiatives it has chosen to monitor. We also interviewed OSD and Joint Staff officials to obtain information on their efforts to incorporate energy efficiency considerations into DOD’s requirements development and acquisition processes. At the military service level, we interviewed officials to determine how each military service is approaching its specific mobility energy reduction efforts, its progress to date, and what challenges it faces in reducing mobility energy demand. We did not validate the cost estimates provided by the services for their initiatives. To obtain a broad perspective of the energy issues, we attended two defense-related conferences that focused on national security energy concerns and their potential implications for DOD. To assess the extent to which DOD has established an overarching organizational framework to guide and oversee mobility energy efforts, we reviewed and analyzed DOD documentation, such as policies and directives, DOD-sponsored fuel-related studies, and legislation, and interviewed officials from OSD, the Joint Staff, and the military services. In doing so, we examined DOD’s key business processes, such as its requirements development and acquisition processes, and determined the extent to which fuel efficiency is systematically considered in these processes.
We also identified key elements of an overarching organizational framework based on our prior work and the Government Performance and Results Act of 1993 to determine the extent to which DOD’s current structure incorporated or lacked these key elements. We interviewed officials at OSD and the military services to obtain their perspectives on DOD’s current approach to mobility energy, including the extent to which the DOD Energy Security Task Force is developing policy and providing guidance and oversight of mobility energy issues across DOD. We also attended a meeting of the Energy Security Task Force’s integrated product team to observe the format, content, participants, and dialogue of a typical meeting. In addition, we asked the officials about what benefits and consequences they saw with the existing department-level involvement (or lack thereof) in mobility energy issues. We also identified management frameworks DOD has created to address other crosscutting issues, such as business systems modernization, corrosion control and prevention, contractors on the battlefield, the defeat of improvised explosive devices, and facility energy. We did not evaluate the strengths or weaknesses of these organizational frameworks or their specific applicability to mobility energy. We also reviewed DOD-sponsored studies published since 2000 on reducing fuel demand in DOD’s mobile defense systems, focusing on studies that made recommendations specific to departmentwide mobility energy issues. After an initial literature search and discussions with DOD officials and other researchers independent of DOD, we ultimately selected four studies to include in our review. We interviewed coauthors from each of these studies to gain a better understanding of their objectives, scopes, and methodologies and their perspectives on the issues covered in their reports as well as other department-level mobility energy concerns. Two team members consolidated the recommendations related to mobility energy from these studies and analyzed them for similarities. They combined those that were similar, rephrased the wording while keeping the intent, and categorized the recommendations into common themes. Through their review of documentation and interviews with DOD officials, they then summarized the actions taken on each of the recommendations. A third team member independently reviewed the results, and discussed any discrepancies with the other team members to reach agreement on the appropriate themes and actions taken. We coordinated our work at the following DOD offices:

Office of the Secretary of Defense
Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics
Systems Engineering and Developmental Test and Evaluation
Office of the Director, Defense Research and Engineering
Office of the Deputy Under Secretary of Defense, Logistics and Materiel Readiness
Office of the Under Secretary of Defense Comptroller/Chief Financial Officer
Office of the Under Secretary of Defense for Policy
Office of the Deputy Assistant Secretary of Defense for Policy Planning
Office of the Deputy Assistant Secretary of Defense for Forces
Director of Program Analysis and Evaluation
Chairman, Joint Chiefs of Staff
Logistics (J4)
Operational Plans and Joint Force Development (J7)
Force Structure, Resources, and Assessment (J8)
Army Deputy Chief of Staff (G4)
Assistant Secretary of the Army for Acquisition, Logistics, and Technology
U.S. Army Combined Arms Support Command
Army Rapid Equipping Force
Office of the Chief of Naval Operations
Naval Sea Systems Command
Office of Naval Research
Headquarters, Marine Corps
Department of the Air Force
Office of the Deputy Assistant Secretary of the Air Force for Environment, Logistics, Installations and Mission Support (A4/7)
Strategic Plans and Programs (A8)
Conduct Air, Space, and Cyber Operations
United States Joint Forces Command
Defense Logistics Agency/Defense Energy Support Center

We conducted our review from September 2007 through March 2008 in accordance with generally accepted government auditing standards. Over the past 7 years, DOD has commissioned several studies to explore ways to reduce its fuel consumption. We reviewed recommendations applicable to mobility energy in the following three DOD-sponsored studies:

Defense Science Board, More Capable Warfighting Through Reduced Fuel Burden
The JASONs/The MITRE Corporation, Reducing DOD Fossil-Fuel Dependence
LMI, Transforming the Way DOD Looks at Energy: An Approach to Establishing an Energy Strategy, April 2007

We also reviewed the recommendations from the 2008 Defense Science Board report on DOD’s energy strategy. However, we did not include those recommendations in our analysis because the report was issued in February 2008, and the department could not be expected to have taken action on the recommendations at the time we issued this report. We summarized the recommendations, grouped them into common topics, and obtained information on DOD actions taken on each of them. Table 3 presents a summary of our analysis. In addition to the contact named above, Thomas Gosling, Assistant Director; Karyn Angulo; Alissa Czyz; and Marie Mak made major contributions to this report.
The Department of Defense (DOD) relies heavily on petroleum-based fuel for mobility energy--the energy required for moving and sustaining its forces and weapons platforms for military operations. Dependence on foreign oil, projected increases in worldwide demand, and rising oil costs, as well as the significant logistics burden associated with moving fuel on the battlefield, will likely require DOD to address its mobility energy demand. GAO was asked to (1) identify key efforts under way to reduce mobility energy demand and (2) assess the extent to which DOD has established an overarching organizational framework to guide and oversee these efforts. GAO reviewed DOD documents, policies, and studies, and interviewed agency officials. OSD, the Joint Staff, and the military services have undertaken efforts to reduce mobility energy demand in weapons platforms and other mobile defense systems. For example, OSD created a departmentwide Energy Security Task Force in 2006 that is monitoring the progress of selected energy related research and development projects. The Joint Staff updated its policy governing the development of capability requirements for new weapons systems to selectively consider energy efficiency as a key performance parameter--a characteristic of a system that is considered critical to the development of an effective military capability. The Army is addressing fuel consumption at forward-deployed locations by developing foam-insulated tents and temporary dome structures that are more efficient to heat and cool, reducing the demand for fuel-powered generators. The Navy has established an energy conservation program to encourage ships to reduce energy consumption. The Air Force has developed an energy strategy and undertaken initiatives to determine fuel-efficient flight routes, reduce the weight on aircraft, optimize air refueling, and improve the efficiency of ground operations. The Marine Corps has initiated research and development efforts to develop alternative power sources and improve fuel management. While these and other efforts are under way and DOD has identified energy as one of its transformational priorities, DOD lacks elements of an overarching organizational framework to guide and oversee mobility energy reduction efforts. In the absence of an overarching organizational framework for mobility energy, DOD cannot be assured that its current efforts will be fully implemented and will significantly reduce its reliance on petroleum-based fuel. GAO found that DOD's current approach to mobility energy lacks (1) a single executive-level OSD official who is accountable for mobility energy matters; sets the direction, pace, and tone to reduce mobility energy demand across DOD; and can serve as a mobility energy focal point within the department and with Congress and interagency partners; (2) a comprehensive strategic plan for mobility energy that aligns individual efforts with DOD-wide goals and priorities, establishes time frames for implementation, and uses performance metrics to evaluate progress; and (3) an effective mechanism to provide for communication and coordination of mobility energy efforts among OSD and the military services as well as leadership and accountability over each military service's efforts. GAO also found that DOD has made limited progress in incorporating fuel efficiency as a consideration in its key business processes--which include developing requirements for and acquiring new weapons systems. 
DOD has established new organizational frameworks to address other crosscutting issues, such as business systems modernization and corrosion control and prevention. Establishing an overarching organizational framework for mobility energy could provide greater assurance that DOD's efforts to reduce its reliance on petroleum-based fuel will succeed and that DOD is better positioned to address future mobility energy challenges--both within the department and as a stakeholder in national energy security dialogues.
To assist New York in recovering from the September 11, 2001, terrorist attacks, Congress passed Public Law 107-147, the Job Creation and Worker Assistance Act of 2002. The act was signed into law on March 9, 2002, and created seven tax benefits that focus on the New York Liberty Zone. The Liberty Zone tax benefits include treating employees in the Liberty Zone as a targeted group for purposes of the work opportunity tax credit (WOTC), which IRS refers to as the business employee credit; a special depreciation allowance; an increase in section 179 expensing; special treatment of leasehold improvement property; an extension of the replacement period for involuntarily converted property; authority to issue tax-exempt private activity bonds; and authority to issue advance refunding bonds. An explanation of each benefit, an example of how it can be used, and the period each benefit is in effect are included in appendix II. Under the Congressional Budget Act of 1974 as amended, JCT provides estimates of the revenue consequences of tax legislation. In March 2002, JCT estimated that the New York Liberty Zone tax benefits would reduce federal revenues by $5.029 billion over the period 2002 through 2012. For one of the seven Liberty Zone tax benefits, the business employee credit, IRS is collecting but not planning to report some information about use—the number of taxpayers claiming the credit and the amount of credit claimed—nor is it planning to use this information to report on how the benefit has reduced taxpayers’ tax liabilities. IRS is not planning to collect or report information about the use of the other six benefits or how using these benefits has reduced taxpayers’ tax liabilities. IRS collects information on how many taxpayers use the business employee credit and the amount of the credit claimed on Form 8884 (New York Liberty Zone Business Employee Credit). Submission processing officials in the Small Business/Self-Employed (SB/SE) Division began entering information from this form into IRS’s computer system in January 2003. Some taxpayers claiming the business employee credit may have their returns processed by the Wage and Investment (W&I) Division, which is not planning to enter information from the form into the computer system. However, IRS officials said that the bulk of the taxpayers who would claim this credit would submit their returns to the SB/SE Division. IRS can collect information on the use of the business employee credit because it developed a new form to administer this credit. Although the business employee credit was included in the WOTC provisions, IRS officials said they needed to track business employee credits separately because the business employee credit can be used to offset any alternative minimum taxes owed but the general WOTC provisions cannot. IRS currently cannot collect information on the remaining six Liberty Zone benefits because it is using existing forms to administer them, and taxpayers do not report these six benefits as separate items on their returns. For example, taxpayers add the amount of depreciation they are allowed under the Liberty Zone special depreciation allowance benefit to other depreciation expenses and report their total depreciation expenses on their returns. Since taxpayers do not report their use of six of the seven benefits separately on their returns, IRS cannot report on how extensively these six benefits were used.
IRS officials said that although they are collecting information on the amount of business employee credits claimed by taxpayers, they are not planning to report information on the extent to which the benefit reduced taxpayers’ tax liabilities. For the other six benefits, IRS officials said that without information about use, they cannot collect or report on the extent to which the benefits reduced taxpayers’ tax liabilities. According to IRS officials, the agency followed its usual procedures in determining the type of information to collect about the Liberty Zone tax benefits. They added that IRS would collect and report such information if doing so would help it administer the tax laws or if it were legislatively mandated to collect or report the information. IRS officials said they do not need information about the use of the Liberty Zone tax benefits or the resulting reductions in taxpayers’ tax liabilities in order to administer the tax laws. For example, IRS officials said that they do not need information on each specific benefit claimed to properly target their enforcement efforts. Instead, they target their enforcement efforts based on taxpayers claiming various credits, deductions, and so forth that fall outside of expected amounts. In addition, IRS officials noted that the agency has not been legislatively mandated to collect or report information on the benefits. IRS would need to make several changes if it were to collect more information on taxpayers’ use of the benefits and their effect on reducing taxpayers’ tax liabilities. IRS would need to change forms used to collect information from taxpayers, change how it processes information from tax returns, and revise computer programming, which would add to taxpayer burden and IRS’s workload. Even if it were to make these changes, IRS would not have information for two of the years the benefits were available. Also, although the additional information would enable IRS to make an estimate of the revenue loss due to the benefits, it would not be able to produce a verifiable measure of the loss. To produce the estimate, IRS would have to make assumptions about how taxpayers would have behaved in the absence of the benefits. For six of the seven Liberty Zone tax benefits, IRS would need to revise forms, tax return processing procedures, and computer programming if it were to collect and report information about the number of taxpayers claiming the benefit and the amount they claimed. It would also need to take most of these steps to report on the use of the seventh benefit—the business employee credit. According to IRS officials, they would need to make staff available to revise forms, review returns for completeness and accuracy, transcribe the additional data, and write the necessary computer programs for entering and extracting data. They would also need to allocate computer resources to process the additional information collected and prepare reports on the use of the benefits.
For example, for the special depreciation allowance benefit, IRS would need to revise Form 4562 (Depreciation and Amortization) so that taxpayers reported the amount of depreciation they claimed specifically due to this benefit; tax return processing procedures so that processing staff reviewed Form 4562 for completeness and accuracy and transcribed information about the special depreciation allowance; and computer programming so that information about the special depreciation allowance could be entered into IRS’s information systems and extracted in order to prepare reports about the use of the benefit. For the seventh benefit—the business employee credit—taxpayers already separately report the amount of the credit they are claiming, and IRS is already reviewing these forms for accuracy and completeness, transcribing data from them, and entering this information into the agency’s computer system for those returns that are processed by the SB/SE Division. However, computer programming would need to be changed to extract information to prepare reports about benefit use. For any returns processed by the W&I Division, IRS would also need to revise W&I processing procedures and computer programming. Since IRS currently does not have any plans to make these changes, officials were unable to estimate the costs involved in accomplishing these actions or the number of staff needed to do so. However, IRS officials estimated they added one full-time equivalent (FTE) primarily to review the Form 8884s for completeness and accuracy and for data transcription—part of the process to collect information about the use of the business employee credit. If IRS collected information about the use of the benefits, IRS could then develop some information on the reduction in taxpayers’ tax liabilities due to the benefits. For example, IRS could determine how much lower each taxpayer’s tax liability is due to the use of the tax benefits, assuming that taxpayer behavior would be the same whether the benefits existed or not. Table 1 is an example of such a computation for claiming the Liberty Zone Section 179 expensing benefit. In this example, a taxpayer with $100,000 in income bought $40,000 worth of office equipment in 2002 and placed this equipment in service in the Liberty Zone in 2002. After applying the Liberty Zone section 179 expensing benefit, taxable income would be $60,000. Since the equipment has been completely expensed, the taxpayer cannot claim any further deductions for this equipment. To recalculate the taxpayer’s taxable income as if the special Liberty Zone expensing benefit did not exist, IRS could assume that the taxpayer would make the same investment, even without the Liberty Zone tax benefit, and still claim the $24,000 section 179 deduction available to all taxpayers in 2002 and any other available deductions, such as the special depreciation allowance. In our example, the special depreciation allowance would be worth $4,800, and the amount otherwise available as a depreciation deduction (regular depreciation) would be worth $1,600, which would reduce the taxpayer’s taxable income to $69,600. The reduction in taxable income attributable to the Liberty Zone benefit would thus be $9,600 ($69,600 less the $60,000 computed with the benefit). Once all the adjustments to taxable income were made, IRS would then need to apply the appropriate marginal tax rate to arrive at the taxpayer’s recalculated tax liability. (A simple sketch of this recalculation appears below.) If IRS were to begin collecting information on the number of taxpayers using the Liberty Zone tax benefits and the amounts they claimed, the information would not be complete.
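As an illustration of the recalculation just described, the following minimal Python sketch reproduces the table 1 figures. The sketch is ours, not IRS’s: it assumes taxpayer behavior is unchanged without the benefit, and the 14.29 percent first-year rate used for regular depreciation (consistent with 7-year property) is an assumption chosen to reproduce the report’s $1,600 figure.

```python
# A minimal sketch of the table 1 recalculation, using the example figures.
# Assumptions (ours, not the report's): taxpayer behavior is unchanged, and
# the 14.29 percent first-year rate for regular depreciation (7-year
# property) is chosen to reproduce the $1,600 figure.

income = 100_000          # income before equipment deductions
cost = 40_000             # office equipment placed in service in 2002

# With the Liberty Zone section 179 benefit, the full cost is expensed.
taxable_with_benefit = income - cost                    # $60,000

# Without the benefit: general $24,000 section 179 deduction, then the
# 30 percent special depreciation allowance, then regular depreciation.
sec_179 = 24_000
basis_after_179 = cost - sec_179                        # $16,000
special_allowance = 0.30 * basis_after_179              # $4,800
regular_depreciation = 0.1429 * (basis_after_179 - special_allowance)  # ~$1,600

taxable_without_benefit = income - (sec_179 + special_allowance
                                    + regular_depreciation)            # ~$69,600

print(taxable_with_benefit)                                   # 60000
print(round(taxable_without_benefit))                         # 69600
print(round(taxable_without_benefit - taxable_with_benefit))  # 9600
```

Running the sketch prints the $60,000, $69,600, and $9,600 figures from table 1.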
In addition, although the information would enable IRS to make an estimate of the revenue loss due to the benefits, the information would not result in a verifiable measure of the loss. To produce the estimate, IRS would have to make assumptions about how taxpayers would have behaved in the absence of the benefits. IRS said the earliest it would be able to collect information on the number of taxpayers using the benefits and the amounts each claimed would be for tax year 2004 returns, which IRS would not process until calendar year 2005. As a result, IRS would not have information for two of the years that the benefits were in effect, which is significant because most of the benefits expire by the end of 2006. IRS could not reconstruct information on tax liability for those 2 years because returns already filed would not indicate whether taxpayers used the Liberty Zone benefits and would not show the amount claimed through benefit use. Although IRS could ask for information about past benefit use since taxpayers are instructed to keep tax records for 3 years, this would require taxpayers to provide additional information and increase taxpayer burden. Also, it would be difficult for IRS to use current year information to estimate the amount claimed through benefit use retroactively because the pattern of using the benefits could have changed over time. In addition to not being complete, the data that IRS could collect on the number of taxpayers using the Liberty Zone benefits and the amounts each claimed would not be sufficient for actually measuring how much revenue those benefits cost the federal government. The reduction in revenues due to the Liberty Zone tax benefits is equal to the difference between the amount of revenue that the federal government would collect with the benefits in place and the amount it would collect in the absence of those benefits. There are two reasons why revenues would be different with and without the benefits. First, the rules for computing tax liabilities are different in the two cases (as shown in table 1). Second, the behavior of many taxpayers is likely to be different in the two cases. In fact, a primary purpose of the tax benefits is to influence taxpayer behavior. For example, in the case of the Liberty Zone section 179 benefit, some taxpayers who claim this benefit would have made different investment decisions if that particular benefit were not available. In our simplified example shown in table 1, this difference in behavior might be that the taxpayer invested less than $40,000 in office equipment—perhaps even nothing—because the Liberty Zone benefit did not exist. As a consequence, the taxpayer’s taxable income would have been different than the $69,600 shown in table 1. Given that IRS cannot know what taxpayers would have done in the absence of the benefits, the best it could do is estimate revenue losses based on assumptions about that alternative behavior. The Commissioner of Internal Revenue was provided a draft of this report for his review and comment. The IRS Director of Tax Administration Coordination agreed with the contents of the report. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 7 days from its date. 
At that time, we will send copies to the Chairman and Ranking Minority Member of the Senate Committee on Finance; the Chairman of the House Committee on Ways and Means and the Chairman and Ranking Minority Member of its Subcommittee on Oversight; the Secretary of the Treasury; the Commissioner of Internal Revenue; the Director of the Office of Management and Budget; and other interested parties. We will make copies available to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. This report was prepared under the direction of Jonda Van Pelt, Assistant Director. If you have any questions regarding this report, please contact her at (415) 904-2186 or vanpeltj@gao.gov or me at (202) 512-9110 or brostekm@gao.gov. Key contributors to this report were Evan Gilman, Edward Nannenhorn, Lynne Schoenauer, Shellee Soliday, Anne Stevens, and James Wozny. Our first objective was to determine the extent to which the Internal Revenue Service (IRS) is collecting and reporting information about the use and value of the seven Liberty Zone tax benefits. We defined use as the number of taxpayers who claimed each benefit and the amount each claimed. In analyzing value, we examined what information IRS could provide about reductions in taxpayers’ tax liabilities when they used the Liberty Zone tax benefits, and then examined whether this information could be used to measure the actual reduction in federal tax revenues. To address the first objective, we interviewed IRS officials from Legal Counsel, the Wage and Investment (W&I) Division’s and the Small Business/Self-Employed (SB/SE) Division’s submission processing groups, Statistics of Income (SOI), Forms and Publications, and the Tax Exempt Government Entities (TEGE) Division to determine if they were collecting and reporting any information about the use of the Liberty Zone tax benefits and how the benefits reduced taxpayers’ tax liabilities. We analyzed the documents they provided about collecting and reporting on the use of the benefits and the reduction in taxpayers’ tax liabilities. We also analyzed the data the Joint Committee on Taxation (JCT) provided about its estimate of the reduction in federal tax revenues. Finally, we interviewed New York city and state officials to determine if they were collecting and reporting information on the benefits. Our second objective was to determine what steps IRS would need to take and the resources it would need to collect and report information on the use and value of the Liberty Zone tax benefits if it is not already doing so. We used the same definition of use and value as we used for the first objective. To address the second objective, we interviewed IRS officials from Legal Counsel, the W&I Division’s and the SB/SE Division’s submission processing groups, SOI, Forms and Publications, and the TEGE Division to determine what steps they would need to take and the resources they would need to collect and report information on the use of the Liberty Zone tax benefits and the reduction in taxpayers’ tax liabilities if they used the benefits. We also analyzed IRS documents related to the steps that would need to be taken to collect and report on the use of the benefits and on the reduction in taxpayers’ tax liabilities. We performed our work from April 2003 through August 2003 in accordance with generally accepted government auditing standards. 
The work opportunity tax credit (WOTC) was expanded to include a new targeted group for employees who perform substantially all their services for a business in the Liberty Zone or for a business that relocated from the Liberty Zone elsewhere within New York City due to the physical destruction or damage of their workplaces by the September 11, 2001, terrorist attacks. The New York Liberty Zone business employee credit allows eligible businesses with an average of 200 or fewer employees to take a maximum credit of 40 percent of the first $6,000 in wages paid or incurred for work performed by each qualified employee during calendar years 2002 and 2003. Unlike the other targeted groups under WOTC, the credit for the new group is available for wages paid to both new hires and existing employees. For example, an employee works for a Liberty Zone business from June 1, 2002, to October 31, 2002, and receives $3,000 in wages a month. The company can claim a credit for 40 percent of the first $6,000 in wages paid ($2,400). The special depreciation allowance provides an additional deduction for eligible properties. Eligible Liberty Zone properties include new tangible property (e.g., new office equipment), used tangible property (e.g., used office equipment), residential rental property (e.g., an apartment complex), and nonresidential real property (e.g., an office building) if it rehabilitates real property damaged or replaces real property destroyed or condemned as a result of the September 11, 2001, terrorist attacks. On December 1, 2002, a real estate development firm purchases an office building in the New York Liberty Zone that costs $10 million and places it in service on June 1, 2003. The building replaces real property damaged as a result of the September 11, 2001, terrorist attacks. Under the provision, the taxpayer is allowed an additional first-year depreciation deduction of 30 percent ($3 million). For property inside the Liberty Zone, the special depreciation allowance allows taxpayers to deduct 30 percent of the adjusted basis of qualified property acquired by purchase after September 10, 2001, and placed in service on or before December 31, 2006 (December 31, 2009, in the case of nonresidential real property and residential rental property). For property outside the Liberty Zone, a special depreciation allowance is available for taxpayers but only with regard to qualified property—such as new tangible property and non-Liberty Zone leasehold improvement property—that is acquired after September 10, 2001, and before September 11, 2004, and is placed in service on or before December 31, 2004. However, recent legislation (the Jobs and Growth Tax Relief Reconciliation Act of 2003, Pub. L. No. 108-27) has increased the deduction to 50 percent for qualified property both within and outside the Liberty Zone that is acquired after May 5, 2003, and placed in service on or before December 31, 2004. Taxpayers with a sufficiently small investment in qualified section 179 business property in the Liberty Zone can elect to deduct rather than capitalize the amount of their investment and are eligible for an increased amount over other taxpayers. For qualified Liberty Zone property placed in service during 2001 and 2002, under section 179 taxpayers could deduct up to $59,000 ($24,000 under the general provision plus an additional $35,000) of the cost. The investment limit (phase-out range) in the property was $200,000. (A simple sketch of this expensing computation follows.)
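As referenced above, the expensing rules can be stated compactly. The following minimal Python sketch is illustrative only; it assumes the 2001-2002 amounts ($59,000 maximum Liberty Zone deduction, $24,000 general deduction, $200,000 investment limit) and the phase-out described in the discussion that follows, under which only 50 percent of a Liberty Zone property’s cost counts against the limit. The function name is hypothetical.

```python
# A minimal sketch of the section 179 expensing computation, using the
# 2001-2002 amounts. Assumption: the phase-out operates as described in this
# appendix -- every dollar by which the counted cost exceeds the investment
# limit reduces the maximum deduction, and only 50 percent of a Liberty Zone
# property's cost is counted. Illustrative, not an IRS computation.

def section_179_deduction(cost, liberty_zone=True):
    max_deduction = 59_000 if liberty_zone else 24_000   # 2001-2002 amounts
    investment_limit = 200_000
    counted_cost = 0.5 * cost if liberty_zone else cost
    reduction = max(0.0, counted_cost - investment_limit)
    return max(0.0, max_deduction - reduction)

# $260,000 of qualified Liberty Zone equipment: 50 percent of the cost
# ($130,000) is below the $200,000 limit, so the full $59,000 is allowed.
print(section_179_deduction(260_000))                        # 59000.0
# Outside the Liberty Zone, the $60,000 excess over the limit would
# eliminate the $24,000 general deduction entirely.
print(section_179_deduction(260_000, liberty_zone=False))    # 0.0
```

The second call shows why the 50 percent counting rule matters: the same $260,000 purchase outside the Liberty Zone would lose the entire general deduction to the phase-out.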
For qualified Liberty Zone property placed in service after 2002 and before 2007, taxpayers could deduct $60,000 ($25,000 under the general provision plus the additional $35,000) of the cost. In 2002, a taxpayer purchases and places in service in his or her Liberty Zone business several qualified items of equipment costing a total of $260,000. Because 50 percent of the cost of the property ($130,000) is less than $200,000, the investment limit, the section 179 deduction of $59,000 is not reduced, and the taxpayer can deduct this amount. However, recent legislation (Pub. L. No. 108-27) has further increased the maximum deduction for qualified Liberty Zone property placed in service after 2002 and before 2006 to $135,000 and has increased the investment limit to $400,000. For 2006, the maximum section 179 deduction allowed for qualified Liberty Zone property returns to $60,000 and the investment limit is $200,000. To calculate the available expensing treatment deduction amount for qualified Liberty Zone property, every dollar for which 50 percent of the cost of the property exceeds the investment limit is subtracted from the maximum deduction allowed. Taxpayers outside of the Liberty Zone may also expense qualified property under section 179. However, the maximum deduction for non-Liberty Zone property is $35,000 less than the maximum deduction allowed for Liberty Zone property. The investment limits for Liberty Zone and non-Liberty Zone property are similar. However, in contrast, in calculating the available expensing treatment deduction amount for non-Liberty Zone properties, every dollar invested in the property that exceeds the investment limit is subtracted from the maximum deduction allowed. Qualified Liberty Zone leasehold improvement property can be depreciated over a 5-year period using the straight-line method of depreciation. The term “qualified Liberty Zone leasehold property” means property as defined in section 168(k)(3) and may include items such as additional walls and plumbing and electrical improvements made to an interior portion of a building that is nonresidential real property. Qualified Liberty Zone leasehold improvements must be placed in service in a nonresidential building that is located in the Liberty Zone after September 10, 2001, and on or before December 31, 2006. The class life for qualified New York Liberty Zone leasehold improvement property is 9 years for purposes of the alternative depreciation system. Taxpayers can also depreciate leasehold improvements outside of the Liberty Zone. These taxpayers can depreciate an addition or improvement to leased nonresidential real property using the straight-line method of depreciation over 39 years. Qualified leasehold improvement properties outside the Liberty Zone can qualify for both the 39-year depreciation deduction and the special depreciation allowance. However, leasehold improvements inside the Liberty Zone do not qualify for the special depreciation allowance. A taxpayer may elect not to recognize gain with respect to property that is involuntarily converted if the taxpayer acquires qualified replacement property within an applicable period.
The replacement period for property that was involuntarily converted in the Liberty Zone as a result of the September 11, 2001, terrorist attacks is 5 years after the end of the taxable year in which a gain is realized, provided that substantially all of the use of the replacement property is in New York City. The involuntarily converted Liberty Zone property can be replaced with any tangible property held for productive use in a trade or business because taxpayers in presidentially declared disaster areas such as the Liberty Zone can use any tangible, productive use property to replace property that was involuntarily converted. Outside of the Liberty Zone, the replacement period for involuntarily converted property is 2 years (3 years if the converted property is real property held for the productive use in a trade or business or for investment), and the converted property must be replaced with replacement property that is similar in service or use. For example, a taxpayer’s delivery truck was used in a Liberty Zone business, but it was destroyed in the September 11, 2001, terrorist attacks. Several years ago, the taxpayer paid $50,000 for the truck and, over time, depreciated the basis in the truck to $30,000. If the insurance company paid $35,000 in reimbursement for the truck and the taxpayer used the $35,000 to purchase replacement property of any type that is held for productive use in a trade or business within 5 years after the close of the tax year of payment by the insurance company, the taxpayer would not recognize a gain. (A simple sketch of this gain computation appears at the end of this appendix.) An aggregate of $8 billion of tax-exempt private activity bonds, called qualified New York Liberty bonds, are authorized to finance the acquisition, construction, reconstruction, and renovation of certain property that is primarily located in the Liberty Zone. Qualified New York Liberty bonds must finance nonresidential real property, residential rental property, or public utility property and must also satisfy certain other requirements. This benefit is effective for bonds issued after March 9, 2002 (the date of enactment of the Job Creation and Worker Assistance Act of 2002), and on or before December 31, 2004. The Mayor of New York City and the Governor of New York State may each designate up to $4 billion in qualified New York Liberty bonds. For example, the Mayor of New York City designates $120 million of qualified New York Liberty bonds to finance the construction of an office building in the Liberty Zone. An aggregate of $9 billion of advance refunding bonds may be issued to pay principal, interest, or redemption price on certain prior issues of bonds issued for facilities located in New York City (and certain water facilities located outside of New York City). Under this benefit, certain qualified bonds, which were outstanding on September 11, 2001, and had exhausted existing advance refunding authority before September 12, 2001, are eligible for one additional advance refunding. The Mayor of New York City and the Governor of New York State may each designate up to $4.5 billion in advance refunding bonds, effective for advance refunding bonds issued after March 9, 2002, and on or before December 31, 2004. For example, the Governor of New York State designates $70 million of advance refunding bonds to refinance bonds that financed the construction of hospital facilities in New York City. The Liberty Zone tax benefits were enacted as part of the Job Creation and Worker Assistance Act of 2002, Pub. L. No. 107-147.
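As referenced in the delivery truck example above, the gain deferral arithmetic is simple. The following minimal Python sketch is illustrative; the $5,000 realized gain (reimbursement less adjusted basis) is standard tax arithmetic rather than a figure stated in the report, so treat it as an assumption.

```python
# A minimal sketch of the gain arithmetic in the delivery truck example.
# The realized gain (reimbursement less adjusted basis) is our assumption
# based on standard tax arithmetic, not a figure stated in the report.

original_cost = 50_000
adjusted_basis = 30_000       # basis after depreciation taken over the years
reimbursement = 35_000        # insurance payment for the destroyed truck

realized_gain = reimbursement - adjusted_basis            # $5,000

# Recognition is deferred if qualifying replacement property is acquired
# within the applicable replacement period (5 years for Liberty Zone
# property, with substantially all use in New York City).
replaced_within_period = True
recognized_gain = 0 if replaced_within_period else realized_gain
print(realized_gain, recognized_gain)                     # 5000 0
```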
The President pledged a minimum of $20 billion in assistance to New York for response and recovery efforts after the September 11, 2001, terrorist attacks. This includes tax benefits, commonly referred to as the Liberty Zone tax benefits, that the Joint Committee on Taxation (JCT) estimated would reduce federal tax revenues by about $5 billion. The actual amount of benefits realized, however, will depend on the extent to which taxpayers and the city and state of New York take advantage of them. GAO was asked to determine (1) the extent to which the Internal Revenue Service (IRS) is collecting and reporting information about the number of taxpayers using each of the seven Liberty Zone tax benefits and the revenue loss associated with those benefits and (2) if IRS is not collecting and reporting this information, what steps it would need to take and what resources would be needed to do so. For one of the seven Liberty Zone tax benefits, the business employee credit, IRS is collecting but not planning to report some information about use--the number of taxpayers claiming the credit and the amount of credit claimed--nor is it planning to use this information to report the revenue loss associated with that benefit. IRS is not planning to collect or report information about the use of the other six benefits or the revenue loss associated with those benefits. According to IRS officials, the agency followed its usual procedures in determining whether to collect information about benefit use and revenue loss. IRS officials said they would collect and report these data if (1) it would help the agency administer the tax laws or (2) IRS was legislatively mandated to do so. IRS would need to make several changes if it were to collect more information on the use of the benefits and the associated revenue loss, and this information would not be complete or lead to a verifiable measure of the reduction in federal tax revenues due to the benefits. IRS would need to change forms, processing procedures, and computer programming, which would add to taxpayer burden and IRS's workload. IRS officials were unable to estimate the costs involved in accomplishing these actions or the number of staff needed to do so. The officials said that the earliest they could make these changes would be for tax year 2004 returns. As a result, IRS would not have information for two of the years that the benefits were in effect, which is significant because most of the benefits expire by the end of 2006. In addition, if IRS were to collect data on the use of the Liberty Zone benefits, it would be able to make an estimate, but could not produce a verifiable measure, of the revenue loss due to the benefits because, for example, IRS would have to make assumptions about how taxpayers would have behaved in the absence of the benefits.
Safeguarding federal computer systems and the systems supporting the nation’s critical infrastructures is essential to protecting national and economic security, and public health and safety. For government organizations, information security is also a key element in maintaining the public trust. Inadequately protected systems may be vulnerable to insider threats as well as the risk of intrusion by individuals or groups with malicious intent who could use their illegitimate access to obtain sensitive information, disrupt operations, or launch attacks against other computer systems and networks. Our previous reports, and those of agency inspectors general, describe persistent information security weaknesses that place a variety of federal operations at risk of disruption, fraud, and inappropriate disclosure. The emergence of increasingly sophisticated cyber threats underscores the need to manage and bolster the security of federal information systems. For example, advanced persistent threats—where an adversary that possesses sophisticated levels of expertise and significant resources can attack using multiple means such as cyber, physical, or deception to achieve its objectives—pose increasing risks. In addition, the number and types of cyber threats are on the rise. The attack on federal personnel and background investigation files that breached the PII of more than 20 million federal employees and contractors illustrates the need for strong security over information and systems. Further, in February 2015, the Director of National Intelligence testified that cyber threats to U.S. national and economic security are increasing in frequency, scale, sophistication, and severity of impact. FISMA establishes information security program and evaluation requirements for federal agencies in the executive branch. To help protect against threats to federal systems, FISMA requires each agency to develop, document, and implement an agency-wide information security program to provide security for the information and information systems that support its operations and assets, including those provided or managed by another agency, contractor, or another organization on its behalf. FISMA also states that the agency head is to delegate authority to ensure compliance with the law to the CIO, who in turn is to designate a senior agency information security officer to carry out the CIO’s responsibilities under the law. In most federal organizations, this official is referred to as the CISO. FISMA also assigns responsibilities to OMB, the Department of Homeland Security (DHS), NIST, and agency inspectors general: OMB’s responsibilities include, among other things, developing and overseeing the implementation of policies, principles, standards, and guidelines on information security in federal agencies except with regard to national security systems. Since 2003, OMB has issued requirements and guidance to agencies on many information security issues, such as an initiative to consolidate and secure agencies’ connections to the Internet; the security of cloud computing; privacy and the protection of PII; and continuous monitoring of security controls in federal information systems. Additionally, OMB has issued annual instructions for agencies and inspectors general to meet requirements for reporting on the effectiveness of agency security programs. 
DHS’s responsibilities under FISMA 2014 include, among other things, developing, issuing, and overseeing implementation of binding operational directives to agencies, including directives for incident reporting, contents of annual agency reports, and other operational requirements. DHS issued the first binding operational directive under its FISMA 2014 authorities in May 2015, mandating that federal agencies mitigate all critical vulnerabilities in Internet-accessible systems within 30 days. NIST’s chief responsibility under FISMA is to develop security standards and guidelines for agencies. In accordance with its statutory responsibilities, NIST has developed a risk management framework of standards and guidelines for agencies to follow in developing and implementing information security programs. Each agency inspector general, or other independent auditor, is required to annually evaluate and report on the information security program and practices of the agency. In September 2015, we reported that, according to agency inspectors general, the extent of agencies’ implementation of requirements for establishing and maintaining an information security program was mixed. We noted that our work and reviews by inspectors general had highlighted information security control deficiencies at agencies that exposed information and information systems supporting federal operations and assets to elevated risk of unauthorized use, disclosure, modification, and disruption. Additionally, OMB Circular A-130 requires that agency information security and privacy programs provide for agency information security and privacy policies, planning, budgeting, management, implementation, and oversight; and cost-effectively manage information security and privacy risks, including reducing such risks to an acceptable level. It also requires agencies to implement a risk management framework to guide and inform (1) the categorization of federal information and information systems, (2) the selection, implementation, and assessment of security and privacy controls, (3) the authorization of information systems and common controls, and (4) the continuous monitoring of information systems. Additionally, the circular requires agencies to ensure that the CIO designates a senior agency information security officer to develop and maintain an agency-wide information security program in accordance with FISMA 2014. FISMA states that each agency head is responsible for securing agency information and information systems, including by delegating to the agency CIO the authority to ensure compliance with the law’s requirements. The CIO, in turn, is directed to designate a CISO to carry out the CIO’s responsibilities. Those responsibilities include ensuring the development, documentation, and implementation of the agency-wide information security program. We found that most agencies had defined the role of the CISO in ensuring that most security program activities were developed, documented, or implemented in their policies. However, 14 agencies had not defined the CISO’s role for all required activities, potentially limiting these officials’ ability to effectively oversee these agencies’ information security programs. In particular, for several components of their information security programs, these agencies either assigned these responsibilities to other officials within the agency or did not document the role their CISO played. 
Without fully defining this role for all of the elements of their information security programs, agencies are not positioning their CISOs to most effectively carry out their responsibilities for ensuring compliance with federal information security requirements and for effectively managing risks to their operations. Under FISMA, the agency CISO is to carry out the CIO’s responsibilities for ensuring agency compliance with the law, including development, documentation, and implementation of the agency-wide information security program that includes the following eight components: Periodic risk assessments: FISMA requires agencies to conduct periodic assessments of the risk and magnitude of harm that could result from the unauthorized access, use, disclosure, disruption, modification, or destruction of information or information systems. These risk assessments help determine whether controls are in place to remediate or mitigate risk to the agency. According to NIST guidance, risks are addressed from an organizational perspective with the development of, among other things, risk management policies, procedures, and strategy. The risk decisions made at the organizational level are to guide the entire risk management program. At the information system level, risk management activities include categorizing organizational information systems, allocating security controls to organizational information systems, and managing the selection, implementation, assessment, authorization, and ongoing monitoring of security controls. Policies and procedures: Agencies are required to develop, document, and implement policies and procedures that (1) are based on risk assessments, (2) cost-effectively reduce information security risks to an acceptable level, (3) ensure that information security is addressed throughout the life cycle of each system, and (4) ensure compliance with applicable requirements. Security plans: Information security programs are required to include plans for providing adequate information security for networks, facilities, and systems or groups of information systems, as appropriate. According to NIST, the purpose of a system security plan is to provide an overview of the security requirements of the system and describe the controls in place or planned for meeting those requirements. In addition, NIST recommends that the plan be reviewed and updated at least annually. Security awareness training: FISMA requires agencies to provide security awareness training to personnel, including contractors and other users of information systems that support the operations and assets of the agency. Training is intended to inform agency personnel of the information security risks associated with their activities and their responsibilities in complying with agency policies and procedures designed to reduce these risks. Periodic testing: Federal agencies are required to periodically test and evaluate the effectiveness of their information security policies, procedures, and practices as part of implementing an agency-wide security program. This testing is to be performed with a frequency depending on risk, but no less than annually. Testing should include management, operational, and technical controls for every system identified in the agency’s required inventory of major information systems.
This type of oversight is a fundamental element that demonstrates management's commitment to the security program, reminds employees of their roles and responsibilities, and identifies and mitigates areas of noncompliance and ineffectiveness. Although control tests and evaluations may encourage compliance with security policies, the full benefits are not achieved unless the results are used to improve security.
Remedial actions: FISMA requires agencies to plan, implement, evaluate, and document remedial actions to address any deficiencies in their information security policies, procedures, and practices. In addition, NIST guidance states that federal agencies should develop a plan of action and milestones (POA&M) for information systems to document the agency's planned remedial actions to correct weaknesses or deficiencies noted during the assessment of the security controls and to reduce or eliminate known vulnerabilities in the system. Furthermore, the POA&M should identify, among other things, the resources required to accomplish the tasks and scheduled completion dates for the milestones. According to OMB, remediation plans assist agencies in identifying, assessing, prioritizing, and monitoring the progress of corrective efforts for security weaknesses found in programs and systems.
Incident response: FISMA requires that agency security programs include procedures for detecting, reporting, and responding to security incidents and that agencies report incidents to the United States Computer Emergency Readiness Team. According to NIST, incident response capabilities are necessary for rapidly detecting an incident, minimizing loss and destruction, mitigating the weaknesses that were exploited, and restoring computing services.
Contingency planning: FISMA requires federal agencies to implement plans and procedures to ensure continuity of operations for information systems that support the operations and assets of the agency. According to NIST, contingency planning is part of overall information system continuity of operations planning, which fits into a much broader security and emergency management effort that includes, among other things, organizational and business process continuity and disaster recovery planning. These plans and procedures are essential steps in ensuring that agencies are adequately prepared to cope with the loss of operational capabilities due to a service disruption such as an act of nature, fire, accident, or sabotage. According to NIST, these plans should cover all key functions, including assessing an agency's information technology (IT) and identifying resources, minimizing potential damage and interruption, developing and documenting the plan, and testing it and making the necessary adjustments.
Other important FISMA requirements to be carried out by the CISO include the following:
Specialized security training: Agencies are required to train and oversee personnel who have significant information security responsibilities. According to NIST, a needs assessment is crucial to identify the individuals with significant IT security responsibilities, assess their functions, and identify their training needs. Training material should be developed that provides the skill sets necessary for attendees to accomplish the security responsibilities associated with their jobs. Examples of positions that would typically require specialized training include system administrators, system owners, security program managers, and senior agency leaders.
Contractor system security oversight: Under FISMA, agency information security programs are to provide security for the information and systems supporting the operations and assets of the agency, including systems provided or managed by contractors. In addition, OMB's annual FISMA reporting instructions require agencies to develop policies and procedures for agency officials to follow when performing oversight of the implementation of security and privacy controls by contractors.
Additionally, OMB requirements and NIST guidance call for agencies, as part of the information security program, to authorize the operation of information systems and explicitly accept any associated risks to organizational operations and assets, individuals, other organizations, and the nation, based on the implementation of an agreed-on set of security controls. According to NIST, the system security plan, the results of the security control assessment, and POA&Ms describing planned remedial actions provide the authorizing official with essential information needed to make a risk-based decision on whether to authorize operation of an information system or a designated set of common controls.
Of the 24 agencies, 11 had fully defined the role of the CISO for all 11 activities that we evaluated. The other 13 agencies varied in the extent to which they defined the CISO's role, from most activities (11 agencies) to only a few (2 agencies). Table 1 outlines the extent to which each of the 24 federal agencies defined the role of the CISO in their information security policies in accordance with FISMA and other federal requirements and guidance.
Each of the 24 agencies defined the responsibilities of the CISO or CISO office in ensuring that risk to the agency's information and information systems was assessed periodically. For example:
The Department of Commerce (Commerce) assigned responsibility for developing and implementing a department-wide risk management strategy and implementing a cyber security risk management framework to the Office of Cyber Security, which is headed by the CISO.
The Department of Veterans Affairs' (VA) risk management policy stated that the CISO is responsible for working with other VA IT organizations to establish risk action plans, working with stakeholders on implementing those plans, and evaluating and monitoring the internal risk environment.
The Social Security Administration (SSA) delegated responsibility for risk management to the Office of Information Security, which is headed by the SSA CISO.
By defining the CISO's role in periodic risk assessments, agencies will have greater assurance that the CISO is aware of the risks to essential computing resources and can make informed decisions about needed security protections.
Twenty-two of the 24 agencies defined the responsibilities of the CISO or CISO office in ensuring that risk-based information security policies and procedures were established. For example:
The U.S. Department of Agriculture (USDA) information security program assigned responsibility for formulating and issuing departmental cyber security policies and procedures to the CISO.
The Department of Transportation (DOT) CISO was responsible for providing management leadership in cybersecurity policy and guidance. Additionally, the CISO was responsible for reviewing and approving cybersecurity policies and procedures developed by departmental components.
The General Services Administration assigned the CISO responsibility for annually reviewing and revising the agency's information security policy, and for developing and publishing IT security procedural guides.
However, two agencies—the Departments of Defense and Justice—did not define the CISO's responsibilities for this activity in their policies:
The Department of Defense (DOD) senior information security officer (SISO) told us that the responsibilities of the SISO organization included developing and maintaining policies and procedures; however, these responsibilities were not documented in DOD policy.
The Department of Justice (DOJ) CISO indicated that the information security office was responsible for security policies and procedures; however, this was not described in the department's information technology security policy.
By ensuring that the CISO's role is defined for establishing policies and procedures, these two agencies will have increased assurance that CISOs are able to effectively reduce risks to their information and information systems, and that the information security practices that are driven by these policies and procedures are consistently applied.
Nineteen of the 24 agencies defined the responsibilities of the CISO or CISO office in ensuring that plans for providing security for information systems were in place. For example:
The Department of Education security policy assigned the CISO responsibility for ensuring that security authorization documents, including system security plans, are complete, consistent, and in compliance with security standards.
The Department of Labor's security policy stated that the information security team, which is headed by the CISO, reviews the system security plan for each information system as part of its authorization oversight responsibilities.
The Small Business Administration assigned the CISO responsibility for reviewing system security plans and other system documentation to ensure that security requirements have been adequately addressed.
However, five agencies—the Departments of Energy, the Interior, Transportation, and the Treasury; and the Environmental Protection Agency—did not define developing, reviewing, or updating system security plans as a CISO responsibility in their policies:
Although the Department of Energy (DOE) delegated the authority to carry out the responsibilities of the CIO under FISMA, including developing and maintaining the DOE-wide information security program, to the DOE CISO, the department's cybersecurity program order did not document any responsibilities for the CISO in overseeing system security plans.
In a written response, officials from the Department of the Interior (Interior) stated that CISO staff oversees security plans through the department's central FISMA compliance repository. However, although Interior's assessment and authorization package documentation policy stated that system authorization documentation is to be maintained in the repository, it did not document the CISO office's responsibilities for oversight of this documentation, including security plans.
DOT officials stated in a written response that the CISO's office reviews a sample of system security plans and documentation annually, based on prior year audit findings or systems of significant criticality or impact.
However, although DOT's guide for security authorization and continuous monitoring stated that the CISO conducts oversight reviews of component cybersecurity programs, it did not indicate that security plans were included in these reviews.
In a written response, officials from the Department of the Treasury (Treasury) stated that, although the department's policy required FISMA reporting and other cybersecurity information, including security plans, to be reported to the CIO, the CISO organization actually collects, oversees, and manages this process. However, these responsibilities were not specified in policy.
The Environmental Protection Agency (EPA) senior agency information security officer (SAISO) stated that the agency was working to implement a new process in which system authorization packages—which include security plans—would be routed through the SAISO organization for review. He indicated that the process was expected to be implemented in the summer of 2016.
Until these five agencies define the role of the CISO in ensuring that system security plans are appropriately documented, their CISOs may be unable to effectively ensure that agency officials are aware of system security requirements and of whether controls are in place.
Twenty-two of the 24 agencies defined the responsibilities of the CISO or CISO office in ensuring that all employees received information security training. For example:
Commerce assigned the Office of Cyber Security, headed by the CISO, the responsibility to maintain the department's information security awareness and training program, including establishing requirements for training for operating units and monitoring compliance.
DHS's information security policy directive stated that the CISO is responsible for ensuring that department personnel, contractors, and others working on behalf of DHS receive information security awareness training.
SSA assigned the CISO the responsibility to develop SSA's security awareness training policy, provide information on training opportunities that meet the requirements of the policy, and oversee the implementation of the training program.
However, two agencies—the Departments of Energy and the Treasury—did not define the CISO's responsibilities for security awareness training in their policies:
Although DOE delegated the authority to carry out the responsibilities of the CIO under FISMA, including developing and maintaining the DOE-wide information security program, to the DOE CISO, the department's Cybersecurity Awareness and Training Program policy did not define the roles and responsibilities of the CISO with respect to security awareness training.
In a written response, Treasury officials stated that the department CISO collects and manages department-wide data on training completion. However, although Treasury policy states that bureaus are to provide materials and assistance to support the oversight and central reporting roles of the Treasury Cybersecurity Office, it did not specify that training completion data are to be provided. Additionally, officials stated that the CISO provides a web-based security awareness training tool for bureaus, but this was not documented in Treasury's security policies.
By defining the CISO's role in ensuring that all users receive security awareness training, DOE and Treasury can better equip their CISOs to ensure that agency personnel have a basic understanding of information security requirements to protect the systems they use.
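To illustrate the kind of oversight activity at issue, the following is a minimal sketch, in Python, of how a security office might flag personnel whose awareness training is missing or out of date. The record structure, user names, and one-year renewal interval are illustrative assumptions for this sketch, not requirements drawn from FISMA or from any agency's policy.

    from datetime import date, timedelta

    # Assumed one-year renewal interval; agencies set their own schedules.
    RENEWAL_INTERVAL = timedelta(days=365)

    # Hypothetical records mapping each user to the date of the last
    # completed awareness training (None if never completed).
    training_records = {
        "user_a": date(2016, 3, 1),
        "user_b": date(2014, 11, 15),
        "user_c": None,
    }

    def overdue_users(records, as_of):
        """Return users whose training is missing or older than the interval."""
        return sorted(
            user for user, completed in records.items()
            if completed is None or as_of - completed > RENEWAL_INTERVAL
        )

    print(overdue_users(training_records, as_of=date(2016, 6, 1)))
    # ['user_b', 'user_c'] -- candidates for follow-up by the CISO's office

In practice, a tracking capability like this would draw on an agency's learning management or personnel systems; the point is simply that a documented CISO role makes clear who is accountable for collecting these data and acting on the resulting list.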
Twenty-two of the 24 agencies defined the responsibilities of the CISO or CISO office in ensuring that security controls are tested periodically in accordance with FISMA and NIST guidance. For example:
VA assigned the CISO responsibility for establishing and monitoring the department's Information Security Continuous Monitoring program, including ensuring that reports are monitored and that issues identified are escalated for appropriate action.
DHS's security policy stated that the CISO is responsible for ensuring that organizational security testing plans are executed in a timely manner.
The Office of Personnel Management's (OPM) security policy and guidance indicated that the CISO is responsible for reviewing the results of periodic testing as part of the oversight of system authorization activities.
However, two agencies—the Departments of Transportation and the Treasury—did not define the CISO's responsibilities for ensuring that security controls are tested periodically across the agency in their policies:
DOT officials stated in a written response that the CISO office annually tests a sample of security controls as part of its compliance activities. However, although DOT's guide for security authorization and continuous monitoring stated that the CISO conducts oversight reviews of component cybersecurity programs, it did not indicate that the reviews included any oversight of security testing.
Treasury officials stated in a written response that responsibility for security testing had been delegated to bureaus. They also stated that the security policy describes oversight of security testing by the CISO; however, although the policy stated that security controls are to be tested on an ongoing basis as part of a continuous monitoring process, it did not describe any responsibilities for the CISO or the CISO office for ensuring that security controls are periodically tested.
If these two federal agencies define the CISOs' role in ensuring that security controls are periodically tested, these officials will be better able to ensure that security controls have been implemented correctly, are operating as intended, and are producing the desired outcome with respect to meeting the security requirements of the agency.
Twenty-three of the 24 agencies defined the responsibilities of the CISO or CISO office in ensuring that remedial actions are documented and used to address identified deficiencies in security controls. For example:
The Department of Housing and Urban Development assigned the Office of Information Technology Security, which is headed by the CISO, the responsibility for ensuring that POA&Ms for the security program and information systems are maintained and documented.
Treasury's information security policy stated that the CISO is responsible for monitoring information system weaknesses at the bureaus and implementation of corrective actions.
Interior assigned responsibility for reviewing bureau- and office-level POA&Ms and ensuring that they comply with department-wide and OMB guidance to the CISO. Additionally, the CISO is responsible for ensuring that all bureau and office information systems' weaknesses are adequately described and that planned corrective actions appropriately address the weaknesses.
However, DOE did not identify in its policies who was responsible for ensuring that remedial actions are taken and are effective.
Specifically, DOE delegated the authority to carry out the responsibilities of the CIO under FISMA, including developing and maintaining the DOE-wide information security program, to the DOE CISO. However, the department's cybersecurity program order did not specify any responsibilities for the CISO in overseeing remedial actions. The DOE CISO stated that overall responsibility for the remedial action process is assigned to the CIO, and that the CIO reviews POA&M reports for significant weaknesses. He also stated that the CISO uses POA&Ms to understand the environment of a particular site prior to going on a site visit. However, these responsibilities were not documented in DOE's cyber security program policy.
By defining the CISO's role in ensuring that the agency has remediation processes, DOE will have greater assurance that its CISO is able to ensure that control weaknesses affecting the agency's information and information systems are being corrected and addressed in a timely manner.
Twenty-two of the 24 agencies defined the responsibilities of the CISO or CISO office in ensuring that the agency has procedures for detecting, reporting, and responding to security incidents. For example:
Interior assigned responsibility for this activity to its Computer Incident Response Center, which is part of the Information Assurance Division led by the CISO.
The U.S. Agency for International Development's (USAID) security policies stated that the CISO is to establish and update incident response policies and procedures, and that the CISO is the central authority for coordinating and reporting sensitive and national security incidents for the agency.
The National Science Foundation assigned the CISO responsibility for overseeing the Computer Incident Response Team during responses to reported incidents.
However, two agencies—the Departments of Defense and State—did not define the CISO's responsibilities for this activity in their policies:
DOD assigned responsibility for incident response to Cyber Command, within U.S. Strategic Command. The DOD SISO told us that the SISO organization is involved in Cyber Command's incident response activities; however, these responsibilities and activities were not documented in the department's security policies.
The Department of State (State) assigned responsibility for incident response to the Office of Cybersecurity in the Bureau of Diplomatic Security. The State CISO and the Director of the Office of Cybersecurity stated that the department has deliberately assigned certain operational cybersecurity functions and program responsibilities to the Bureau of Diplomatic Security.
By defining the role of the CISO in ensuring that the agency has procedures for incident detection, reporting, and response, DOD and State will help their CISOs ensure that their agencies' information and information systems are adequately protected from cyber attacks.
Seventeen of the 24 agencies defined the responsibilities of the CISO or CISO office in ensuring that plans and procedures are in place to ensure recovery and continued operations of their information systems in the event of a disruption.
For example:
VA's information security program policy stated that the CISO is responsible for working closely with IT and other business units to develop and maintain an enterprise business continuity program; managing the planning, design, and maintenance of business continuity program projects and ensuring compliance with industry standards and regulatory requirements; monitoring the development of business continuity plans and reviewing plans to ensure compliance; and providing business and technical guidance relative to business continuity.
DHS assigned the CISO responsibility for reviewing and approving contingency plans, and for ensuring that plans for ensuring the continuity of operations for information systems are developed and maintained.
OPM's security policy stated that the CISO reviews system contingency plans and requires that the results of contingency plan tests and exercises be provided to the CISO.
However, seven agencies—the Departments of Commerce, Energy, Health and Human Services, the Interior, Justice, and the Treasury; and the Environmental Protection Agency—did not define the CISO's responsibilities for contingency planning in their policies:
Commerce assigned this responsibility to the Critical Infrastructure Protection Manager, and did not describe any role for the CISO in the department's information technology security program policy.
Although DOE delegated the authority to carry out the responsibilities of the CIO under FISMA, including developing and maintaining the DOE-wide information security program, to the DOE CISO, the department's continuity program order did not describe any responsibilities for the CISO.
The Department of Health and Human Services' policy assigned responsibility for updating and maintaining the information technology contingency plan to a Contingency Planning Coordinator; the policy did not describe the oversight responsibilities of the CISO.
Interior officials stated in a written response that the CISO office funds a yearly audit that evaluates the implementation of security program activities, including continuity of operations activities, across the department; they also stated that the CISO office works with the department's Office of Emergency Management to ensure integration of IT system contingency plans with the larger departmental continuity of operations plans. However, these activities were not defined in Interior's policies.
The DOJ CISO stated that her office regularly reviews system contingency plans and test results. However, this responsibility was not documented in DOJ's security policies.
Treasury officials provided documentation showing that the CISO office tracks contingency plan testing activities as part of its oversight activities. However, these responsibilities were not described in policy.
EPA's SAISO told us that the agency plans to implement a procedure for reviewing authorization packages, including contingency plans; he indicated that the process was expected to be implemented in the summer of 2016.
By not defining the CISO's role in contingency planning, these seven agencies may hinder their CISOs' ability to effectively ensure that information system contingency plans and procedures are in place, reducing the likelihood that these agencies will be able to successfully recover their systems in a timely manner in the event of a service disruption.
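As a rough illustration of the oversight gap described above, the following sketch in Python flags systems whose contingency plans are undocumented, have never been exercised, or have not been tested recently; the system names, record fields, and annual test cycle are hypothetical assumptions for this sketch and are not drawn from any agency's actual policy or inventory.

    from dataclasses import dataclass
    from datetime import date, timedelta
    from typing import Optional

    # Assumed annual test cycle for this sketch; agencies determine their
    # own testing frequency based on risk.
    TEST_CYCLE = timedelta(days=365)

    @dataclass
    class ContingencyPlan:
        system: str
        documented: bool             # plan developed and documented
        last_tested: Optional[date]  # None if the plan has never been exercised

        def needs_attention(self, as_of: date) -> bool:
            """Flag plans that are undocumented, untested, or stale."""
            return (not self.documented
                    or self.last_tested is None
                    or as_of - self.last_tested > TEST_CYCLE)

    plans = [
        ContingencyPlan("payroll_system", True, date(2015, 9, 30)),
        ContingencyPlan("grants_portal", True, None),
        ContingencyPlan("case_tracking", False, date(2016, 2, 12)),
    ]

    print([p.system for p in plans if p.needs_attention(date(2016, 6, 1))])
    # ['grants_portal', 'case_tracking']

A CISO whose contingency planning role is defined in policy has a clear basis for requiring components to supply the plan status and test dates that a review like this depends on.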
Twenty-two of the 24 agencies defined the responsibilities of the CISO or CISO office in ensuring that personnel with significant information security responsibilities were trained. For example:
EPA assigned its SAISO the responsibility to develop and maintain role-based training, education, and credentialing requirements for personnel with significant information security responsibilities.
USAID's security policy stated that the CISO is responsible for establishing and managing an information security training program, including training for personnel with significant security responsibilities and maintaining training records.
The General Services Administration assigned the CISO responsibility for ensuring that Information Systems Security Officers and Information Systems Security Managers receive applicable training specific to their information security responsibilities.
However, two agencies—the Department of the Treasury and the Small Business Administration—did not define the CISO's responsibilities for this activity in their policies:
In a written response, Treasury officials stated that the department CISO collects and manages department-wide data on training completion. However, although Treasury policy states that bureaus are to provide materials and assistance to support the oversight and central reporting roles of the Treasury Cybersecurity Office, it did not specify that training completion data are to be provided.
The Small Business Administration CIO told us that the CISO is responsible for overseeing role-based security training across the agency; however, this responsibility was not reflected in the agency's security policies.
Unless these two agencies define the roles of their CISOs in ensuring that personnel with significant security responsibilities receive appropriate training, their CISOs may be unable to ensure that these individuals have the knowledge, skills, and abilities consistent with their roles to protect the confidentiality, integrity, and availability of the information housed within the information systems to which they are assigned.
Eighteen of the 24 agencies defined the responsibilities of the CISO or CISO office in ensuring that contractor systems adhere to agency and federal information security requirements. For example:
Thirteen agencies' policies indicated that the CISO exercises oversight of contractor system security as part of the CISO's overall oversight of the system authorization process.
OPM's security policy stated that the CISO is to conduct and coordinate information security audits at OPM and contractor facilities, and that the CISO organization reviews security clauses in contracts and statements of work.
USDA assigned the CISO responsibility for conducting reviews of system documentation, including the system security plan, security assessment report, and plans of action and milestones, for all systems including contractor systems.
However, six agencies—the Departments of Defense, Energy, the Interior, and the Treasury; the National Aeronautics and Space Administration; and the U.S. Agency for International Development—did not define the CISO's responsibilities in ensuring that contractor systems met security requirements in their policies:
DOD policies did not describe the responsibilities of the SISO in ensuring that contractor systems met security requirements. The DOD SISO told us that the information security oversight organization was not currently conducting inspections of unclassified contractor networks.
He also stated that the SISO office monitors self-reported data from contractors; however, these responsibilities were not defined in DOD's policies.
The DOE CISO stated that the CISO exercises some oversight of contractor system security through FISMA reporting responsibilities. However, although DOE delegated the authority to carry out the responsibilities of the CIO under FISMA, including developing and maintaining the DOE-wide information security program, to the DOE CISO, the department did not define the CISO's responsibilities for oversight of contractor system security in its policies.
In a written response, Interior officials stated that contractor systems are included in the authorization process, and that the CISO office oversees the authorization activities through yearly program audits and the audit activities of the Compliance and Audit Management Branch. However, these activities were not defined in Interior's policies.
Treasury's information technology security program specified that it applied to contractor systems and department-owned systems; however, it did not define the CISO's role in ensuring that contractor systems met security requirements.
At the National Aeronautics and Space Administration, the SAISO issued the agency's policy for conducting security assessments of third-party information systems. However, the policy did not define the SAISO's responsibilities for oversight of contractor security.
USAID's policy stated that responsibility for oversight of contractor system security was assigned to the contracting officer's representative; the policy did not describe any role for the CISO or CISO office in this process. The USAID CISO agreed, stating that the office had no way to verify that contractors were meeting security requirements.
Because these six agencies have not defined their CISOs' responsibilities for oversight of contractor system security, increased risk exists that weaknesses in these agencies' contractor-operated systems may go undetected and unresolved.
Twenty of the 24 agencies defined the responsibilities of the CISO or CISO office in ensuring that information systems are authorized to operate in accordance with federal requirements. For example:
The Department of Labor's information security organization, headed by the CISO, administers the security authorization oversight process, which includes security plan reviews and verification of a sample of security controls.
The Department of State's information security policies state that the Information Assurance office, which is headed by the CISO, is responsible for ensuring that all departmental information systems go through the approved system authorization process.
The Nuclear Regulatory Commission assigned the CISO responsibility for ensuring that information security risks are managed consistently throughout the agency by being incorporated into the system authorization process.
However, four agencies—the Departments of Energy, the Interior, and the Treasury; and the Environmental Protection Agency—did not define the CISO's role in ensuring that systems were authorized in their policies:
Although DOE delegated the authority to carry out the responsibilities of the CIO under FISMA, including developing and maintaining the DOE-wide information security program, to the DOE CISO, the department's cybersecurity program order did not describe any specific roles or responsibilities for the CISO in ensuring that information systems are authorized to operate.
Interior officials stated in a written response that the CISO office oversees authorization activities through yearly program audits and the audit activities of the Compliance and Audit Management Branch. However, these activities were not defined in Interior's policies.
Treasury's information security policy indicated that the CISO is responsible for implementing the IT security program and performing compliance oversight, but did not describe any oversight responsibilities for the system authorization process beyond this general statement. In a written response, officials stated that, although the department's policy required FISMA reporting and other cybersecurity information to be reported to the CIO, the CISO organization actually collects, oversees, and manages this process; they also stated that the CISO office tracks the status of security authorization activities as part of its oversight activities. However, these responsibilities were not specified in policy.
EPA's information security policy stated that the SAISO is to develop, implement, and maintain security authorization and reporting capabilities; however, it did not describe any role for the SAISO in the authorization process. The agency planned to update its processes to ensure that authorization packages were vetted through the SAISO's office. The EPA SAISO indicated that the process was expected to be implemented in the summer of 2016.
Unless CISOs at these four agencies have a clear role in system authorization decisions, the agencies will face greater difficulty ensuring that such decisions appropriately consider information security risks.
Agency CISOs identified a number of challenges to their authority. Specifically, in our survey of 24 agency-level CISOs, the following factors were frequently cited as presenting challenges to CISOs' ability to effectively carry out their responsibilities to ensure that information security program activities are implemented: (1) competing priorities between agency operations and information security, (2) coordination with component organizations and other offices, (3) availability of security-related information from component organizations and IT contractors, (4) oversight of indirect reports and IT contractors, and (5) the position of the CISO in the agency's hierarchy. Respondents also reported challenges related to other factors that did not directly affect their authority but nevertheless may limit their ability to carry out their responsibilities.
There are several government-wide initiatives under way that are intended to help address some of these challenges. However, although OMB has responsibility under FISMA for providing guidance to federal agencies, it has not issued guidance clarifying how agencies should implement recent provisions in federal law aimed at strengthening their oversight of information security activities or the role of agency CISOs in carrying them out. This lack of clarity further hinders CISOs' ability to address challenges to their authority, including balancing operational and security needs, overseeing security activities, obtaining adequate and timely information, and ensuring that senior managers are aware of information security risks facing the agency.
Eighteen CISOs reported that competing priorities between agency operations and information security challenged their ability to exercise their responsibilities to ensure the implementation of the agency-wide information security program to a large or moderate extent, as shown in figure 1 below.
Respondents identified several specific challenges related to this factor. For example, one respondent stated that security personnel at the component level report to the component's management chain rather than to the CISO; consequently, they are often driven by the operational imperatives of the component agency rather than the security priorities of the department. The respondent also noted that programs often view cybersecurity as a drain on limited resources. Another CISO explained that agency operations drive procurements at a faster pace than is feasible for their cyber team to track. Another CISO expressed a similar sentiment, stating that technology is advancing rapidly and security is often seen as getting in the way of progress. Another respondent noted that the operational priorities of the agency tend to favor maintaining existing operations rather than correcting weaknesses and vulnerabilities in a timely fashion.
According to NIST SP 800-39, effective risk management requires an organization's mission/business processes to explicitly account for information security risk when making operational decisions. When organizations make operational decisions without adequately considering information security risk, CISOs are hindered in ensuring that appropriate security controls are applied or that weaknesses are addressed prior to new systems or technology being deployed.
About half of the CISOs we surveyed reported challenges when coordinating with component organizations or with other offices (e.g., program, human capital, and contracting offices). Specifically, 13 reported that coordination with component organizations was challenging to a large or moderate extent, and 12 reported that coordination with other offices was challenging to a large or moderate extent, as shown in figure 2 below.
Respondents identified several specific challenges related to these factors. For example:
Coordination with component organizations. One CISO stated that risk decisions made by authorizing officials or system owners within components often exceeded the department's standards for risk acceptance, because component organizations often had risk tolerances that were not consistent with the department's. Another stated that coordinating with component organizations can slow incident response efforts, depending on the components' resources, expertise, and priorities. Another respondent noted that the department-level CISO lacks the authority to mandate that components implement decisions that have to be applied across the enterprise, although the CISO also noted that considerable support could be gained through using a collaborative approach. Another respondent indicated that system development life cycle management is not a mature process at many component organizations, and that some components do not apply a formal system development life cycle process.
Coordination with other offices. One CISO noted that other offices that are responsible for enterprise controls have not always fully assumed the responsibility for overseeing, testing, and evaluating those controls.
Another CISO stated that security controls that depend on other offices in the agency are not always recognized by those offices as priorities—or even as responsibilities—because the requirements do not arise from their own chain of authority. Another CISO stated that program offices at his agency frequently challenge the CISO's authority to oversee contractors' implementation of security controls in order to maintain the business relationship with the contractor. One respondent noted that, in system development efforts, security is seen by the project as a burden, making it difficult for the security organization to conduct oversight of the project life cycle.
NIST guidance states that organizations may choose to delegate authority, responsibility, and decision-making power for information security to individual subordinate organizations, such as bureaus or components within a federal agency, in order to accommodate subordinate organizations with divergent mission/business needs and operating environments. However, if CISOs face difficulties coordinating with component organizations or other offices within their agencies, their ability to help ensure that information security risks are being identified and mitigated across the enterprise may be hindered.
Nine CISOs reported that receiving security-related information from component organizations challenged them to a moderate or large extent, and 12 reported the same for receiving such information from IT contractors. Figure 3 below shows the extent to which CISOs identified these factors as challenging.
Respondents identified several specific challenges related to these factors. For example:
Availability of information from components: One respondent stated that a number of networks and systems are independently managed and maintained by components, which are frequently reluctant to share information with the department-level security organization. Another noted that the department-level security organization does not always have visibility into the networks or systems at component organizations. Another CISO stated that components do not always share complete information on security incidents with the central security organization, and that some do not involve the department-level security organization in incident investigations. The respondent further noted that system authorization data are self-reported by component organizations, making it difficult for the CISO organization to verify that the components are complying with departmental policy.
Availability of information from IT contractors: One respondent noted that contractual limitations can prevent access to information that would normally be available in a government-owned and -operated environment. Another stated that, even when language requiring contractors to provide the agency access to security information is included in contracts, it can still be very difficult to obtain necessary information from contractors. One CISO noted that there are no means by which the agency can validate data reported by contractors.
According to NIST guidance for managing information security risk, it is important to ensure that risk-related information is shared among subordinate organizations and with the parent organization because the risk decisions by subordinate organizations may have an effect on the organization as a whole.
When CISOs have difficulty receiving adequate security-related information from components or contractors, they may lack all of the information that they need to effectively carry out their responsibilities to oversee the security program activities for which those components or contractors are responsible.
Half of the CISOs reported that exercising oversight of individuals and offices outside of the CISO's direct reporting structure challenged them to a large or moderate extent. Additionally, half of the CISOs we surveyed also reported challenges related to oversight of IT contractors. Figure 4 below identifies the extent to which CISOs identified these factors as challenging.
Respondents identified several specific challenges related to these factors. For example:
Oversight of indirect reports. One respondent indicated that the CISO lacks the authority to hold indirect reports, such as information system security officers (ISSO), accountable for carrying out their information security responsibilities. Another stated that the personnel supporting ongoing and deployed projects are not accountable to the CISO; rather, they are overseen by operations and engineering teams, whose priorities are focused on operations and delivering functionality and not on security.
Oversight of IT contractors. One respondent stated that contractors not directly assigned to IT security reported to their sponsor program offices, and consequently oversight activities had to be coordinated through program managers, contracting officers, or their representatives. Another stated that the CISO did not have control over the cybersecurity contract that supports the information security organization. One CISO expressed difficulties in establishing a consistent interpretation of security requirements across component agencies' contracting organizations. Another stated that the security organization lacks the authority to validate security documentation submitted by contractors.
NIST guidance states that leaders and managers at all levels of an organization need to understand their responsibilities and be held accountable for managing information security risk. When CISOs experience difficulties overseeing the information security responsibilities of individuals outside of their reporting hierarchy or of IT contractors, they can be hindered in their ability to ensure that the actions of these individuals comply with the agency's security policies or sufficiently address the risks facing the agency.
Ten of the 24 CISOs reported that their position in the agency hierarchy challenged their ability to carry out their responsibilities to a large or moderate extent. Figure 5 below identifies the extent to which CISOs identified their position in the agency hierarchy as challenging.
Respondents identified several specific challenges related to this factor. For example, one respondent noted that the department-level CISO resided under a department under secretary, which often blurred the lines of authority and accountability between the IT organization and other components. Another indicated that being positioned higher in the organization would make it easier to gain concurrence and support for security initiatives, and that the CISO's current position made it difficult to ensure that identified weaknesses are addressed and that incidents are being handled appropriately.
Another respondent noted that the CISO's placement in the organization can limit the CISO's ability to elevate significant information security risks to upper management. However, one noted that an increased focus on cybersecurity issues at the agency in recent months has resulted in the CISO having greater access to agency leadership.
If CISOs are unable to hold component and office personnel accountable for taking action or elevate security concerns to upper management, they will be challenged in their ability to ensure that agency leaders have a clear understanding of the agency's risk profile, and agencies may be less able to effectively manage and respond to these risks.
The 24 CISOs also reported that other factors posed challenges to their ability to carry out their responsibilities effectively, including the following examples:
Lack of sufficient staff. CISOs identified challenges with having insufficient personnel to oversee security activities effectively. For example, one CISO noted that the information security office did not have enough personnel to oversee the implementation of the number and scope of requirements described in NIST SP 800-53 as well as to respond to FISMA audits and OMB data calls. Another noted that the agency's security operations center did not have enough staff to operate around the clock.
Recruiting, hiring, and retaining security personnel. One CISO stated that the agency could not offer salaries that are competitive with the private sector for candidates with high-demand technical skills. Another described a similar challenge, stating that the government's General Schedule system restricts agencies from offering bonuses commensurate with what private sector organizations can offer. Additionally, another respondent stated that, although hiring security personnel with less experience is cheaper than hiring at higher grades, the security organization has to devote significant time and effort to bringing new staff up to speed; once those staff obtain skills and experience, they often begin looking for new jobs where they can receive a higher salary.
Expertise of security personnel. CISOs described challenges with ensuring that personnel in highly technical roles have sufficient training opportunities and expertise in the skill sets needed. Others noted that a lack of expertise among staff limited their ability to evaluate risk, support internal testing, or oversee the security of IT acquisitions. Two noted that ISSOs at their agencies often are assigned these duties in addition to other responsibilities; others noted that ISSOs lack security skills or are not sufficiently trained. Another stated that the personnel supporting incident response at the agency had relatively little experience.
Financial resources. One CISO stated that the information security organization is funded through components' contributions to the department's working capital fund, which creates tension between the department-wide security needs and the operational priorities of the component agencies. Another stated that the CISO organization does not have a dedicated budget, but is funded out of the budget for the CIO organization. Another respondent stated that the CISO's ability to drive the agency to resolve POA&Ms in a timely manner is limited in part due to financial constraints.
One respondent stated that his financial resources were insufficient to cover the personnel, training, and tools and technologies needed to provide sufficient oversight of security authorization decisions made by component agencies. Other CISOs stated that efforts to test security controls and remediate weaknesses are hampered due to budgetary constraints.
In accordance with their statutory responsibilities under FISMA, OMB and NIST have taken steps to assist federal agencies in implementing information security activities, and have instituted initiatives that can assist federal agencies in addressing challenges related to human and financial resources. For example:
The National Initiative for Cybersecurity Education: This is an interagency effort coordinated by NIST to improve cybersecurity education, including efforts directed at training, public awareness, and the federal cybersecurity workforce. This initiative is intended to support the federal government's evolving strategy for education, awareness, and workforce planning and provide a comprehensive cybersecurity education program.
Cybersecurity National Action Plan: Announced by the White House in February 2016, the Cybersecurity National Action Plan is intended to foster long-term improvements in cybersecurity across the federal government and the private sector and for individuals. Among other things, the plan announces (1) the establishment of the Commission on Enhancing National Cybersecurity, which is to make recommendations on actions to enhance cybersecurity awareness and protections throughout the private sector and at all levels of government, to protect privacy, to maintain public safety and economic and national security, and to empower Americans to take better control of their digital security; (2) the creation of the Federal Chief Information Security Officer position to drive cybersecurity policy, planning, and implementation across the federal government; (3) efforts to enhance cybersecurity education and training nationwide and hire more cybersecurity experts to secure federal agencies; and (4) a proposal for $19 billion of funding for cybersecurity in fiscal year 2017, a 35 percent increase over fiscal year 2016.
Cybersecurity Strategy and Implementation Plan: Issued in October 2015, the Cybersecurity Strategy and Implementation Plan was created as a result of the 30-day Cybersecurity Sprint initiated in June 2015. The plan is intended to identify and address critical cybersecurity gaps and emerging priorities, and make specific recommendations to address those gaps and priorities. The plan is to strengthen federal civilian cybersecurity through five objectives: (1) prioritized identification and protection of high-value information and assets, (2) timely detection of and rapid response to cyber incidents, (3) rapid recovery from incidents when they occur and accelerated adoption of lessons learned from the Cybersecurity Sprint assessment, (4) recruitment and retention of cybersecurity workforce talent, and (5) efficient and effective acquisition and deployment of existing and emerging technology.
If effectively implemented, these initiatives should help address several of the challenges identified by CISOs, particularly those related to insufficient numbers of staff; recruiting, hiring, and retaining qualified staff; personnel expertise; and funding. However, they do not address concerns raised by CISOs regarding their authority to carry out their responsibilities.
Recognizing the importance of oversight of agency-wide information security activities, Congress, in enacting FISMA 2014, added two new requirements: agency heads are to ensure that (1) senior agency officials carry out their information security responsibilities and (2) all agency personnel are held accountable for complying with the agency-wide information security program. Given CISOs' statutory responsibilities for ensuring that their agencies comply with the requirements of the law, it is vitally important to address the challenges to their authority that the CISOs have identified, such as ensuring that the agency appropriately considers security in operational decisions; coordinating with and overseeing security activities of component organizations, other offices, and contractors; and elevating security concerns to upper management.
According to OMB, recent guidance addresses the implementation of these new requirements. Specifically, OMB officials stated that the office's June 2015 memorandum that provides implementation guidance for the recently enacted IT reform legislation, commonly referred to as the Federal Information Technology Acquisition Reform Act (FITARA), addresses the CISO's role in ensuring that senior officials are held accountable because it is intended to strengthen the agency CIO's accountability and oversight for information security across the agency. They added that under FISMA, this accountability and involvement would necessarily be delegated to the agency CISO. Officials also stated that OMB's efforts to oversee agencies' implementation of the requirements in the memo, including PortfolioStat sessions, included discussions with agency CIOs and CISOs regarding whether they have been given appropriate authority. They further stated that the annual FISMA reporting instructions issued by the office contain guidance on how agencies can ensure that CISOs are assigned appropriate responsibility and authority to ensure that information security activities are implemented. Officials also stated that the CyberStat meetings—in which OMB and DHS meet with agency CIOs, CISOs, and other agency officials to discuss and assist in developing focused strategies for improving their agency's cybersecurity posture—focus on FISMA-related security metrics and issues where the CISO should be involved.
In July 2016, OMB issued its update to Circular A-130, Managing Information as a Strategic Resource. Among other things, the circular requires agencies to ensure that the CIO designates a senior agency information security officer to develop and maintain an agency-wide information security program in accordance with FISMA 2014. The circular reiterates the new FISMA 2014 requirement for agencies to implement policies and procedures to ensure that all personnel are held accountable for complying with agency-wide information security and privacy requirements and policies, and specifies that this requirement be part of the agency-wide information security program.
However, the FITARA implementation guidance, the FISMA reporting instructions, and the CyberStat meetings do not provide guidance for federal agencies on how to implement the new FISMA 2014 requirements or clarify the CISO's role in carrying them out, nor do they indicate that OMB is evaluating CISOs' authority. Furthermore, while the updated Circular A-130 restates the new requirement to ensure that all personnel are held accountable, it does not provide guidance clarifying how this requirement should be implemented.
The lack of clarity about how agencies are expected to implement these new requirements further hinders CISOs' ability to address the challenges to their authority that they reported facing. Additional guidance from OMB addressing how agencies should ensure that officials carry out their responsibilities and personnel are held accountable for complying with the agency-wide information security program could assist CISOs in more effectively carrying out their duties in the face of numerous challenges.
Defining the role of a federal agency CISO is key to positioning this official to ensure that agency-wide information security programs are developed, documented, and implemented. Most agencies documented the role of the agency CISO in ensuring the implementation of security program activities in their information security policies; however, most agencies also had gaps in policies defining their CISO's responsibilities, leaving it unclear what role, if any, these officials play in some aspects of agencies' information security programs. By not fully defining this role, agencies may be unable to ensure that their CISOs are able to effectively oversee the implementation of their information security programs.
Although federal law and agency policies vest CISOs with responsibility for ensuring that agency-wide information security programs are developed, documented, and implemented, many CISOs reported challenges to their authority to effectively carry out these responsibilities, such as difficulties in coordinating with component organizations or other offices, obtaining reliable and timely information from other entities within the agency, and raising concerns to agency leadership. They also cited concerns about having adequate staff with relevant expertise and sufficient resources to implement security requirements. These challenges can limit CISOs' ability to effectively ensure that the information security program is implemented and that agency-wide information security risk is managed appropriately.
Several government-wide initiatives that are under way can address issues related to staffing and financial resources if fully implemented. However, OMB's current implementation guidance does not address how to implement the new FISMA 2014 requirements or the CISO's role in carrying them out, nor does it identify how OMB will evaluate the role of the CISO. Further guidance from OMB could assist agencies in making sure that CISOs have adequate authority and could help ensure that agencies are fully defining the role of the CISO with respect to all elements of their information security programs.
To assist CISOs in carrying out their responsibilities, we recommend that the Director of OMB issue guidance for agencies' implementation of the FISMA 2014 requirements to ensure that (1) senior agency officials carry out information security responsibilities and (2) agency personnel are held accountable for complying with the agency-wide information security program. This guidance should clarify the role of the agency CISO with respect to these requirements, as well as implementing the other elements of an agency-wide information security program, taking into account the challenges identified in this report.
We are also making 33 recommendations to 13 of the 24 departments and agencies in our review to ensure that the role of the CISO is defined in agency policy in accordance with FISMA. Appendix II contains these recommendations.
We provided a copy of a draft of this report to OMB and all 24 departments and agencies for review and comment. We received written comments from 12 agencies, which are reprinted in appendices III through XIV. We received comments by e-mail from 5 agencies and no comments from the remaining agencies. We also received technical comments from three agencies, which we incorporated into the report as appropriate. Of the 13 agencies to which we made specific recommendations, 12 concurred with our recommendations, and 9 identified steps that they are taking or plan to take to address them. The remaining agency, DOD, either did not concur or partially concurred with the three recommendations we made to it. For a summary of each of the 13 agencies' comments and our response, please see appendix II.

In comments provided via e-mail on July 29, 2016, by OMB's audit liaison in the Office of General Counsel, OMB stated that it partially concurred with our recommendation. OMB also stated that it believes that its annual FISMA 2014 guidance provides sufficient and clear details on the expectations for agencies, including procedures for overseeing and managing their information security programs, and that the guidance incorporates agency feedback and information security best practices to better reflect challenges and solutions within the current government operating environment. OMB noted that developing prescriptive guidance to address or streamline variances in information security management practices may unintentionally hamper agencies' ability to conduct their missions. It added that, in place of issuing such guidance, OMB plans to continue using several oversight mechanisms to drive performance and address challenges, including quarterly FISMA performance reviews and face-to-face CyberStat Reviews.

We disagree that existing guidance and oversight mechanisms provide sufficient clarity for agencies on how to implement the new FISMA 2014 provisions. As stated in this report, neither the annual FISMA guidance nor the CyberStat meetings provide guidance for federal agencies on how to implement the new FISMA 2014 requirements or the CISO's role in carrying them out. In addition, OMB's recently revised Circular A-130 is clear that the CISO is to have a role in ensuring that all personnel are held accountable for complying with information security requirements, but it does not provide guidance on how agencies are to implement this requirement. As we note in our report, CISOs are not always able to effectively hold personnel accountable for complying with information security requirements. Accordingly, additional guidance from OMB addressing how agencies should ensure that officials carry out their responsibilities and personnel are held accountable for complying with the agency-wide information security program could help address many of the challenges to authority identified by federal CISOs. We therefore believe our recommendation is warranted.

As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to interested congressional committees, the Director of the Office of Management and Budget, the secretaries and agency heads of the departments and agencies addressed in this report, and other interested parties. In addition, this report will be available at no charge on the GAO website at http://www.gao.gov.
If you have any questions regarding this report, please contact me at (202) 512-6244 or wilshuseng@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix XV.

Our objectives were to (1) identify the key responsibilities of federal chief information security officers (CISO) established by federal law and guidance and determine the extent to which federal agencies have defined the role of the CISO in accordance with this law and guidance and (2) describe key challenges of federal agency CISOs in fulfilling their responsibilities to ensure that agency-wide information security programs are developed, documented, and implemented. The scope of our review included the 24 major departments and agencies covered by the Chief Financial Officers Act of 1990.

To identify the key responsibilities of federal CISOs established by federal law and guidance, we reviewed relevant laws, including provisions of the Federal Information Security Management Act of 2002 (FISMA 2002), the Federal Information Security Modernization Act of 2014 (FISMA 2014), and the Federal Information Technology Acquisition Reform Act (FITARA). In addition, we reviewed relevant special publications from the National Institute of Standards and Technology (NIST) addressing information security management topics, as well as Office of Management and Budget (OMB) memoranda and circulars addressing federal information security.

To determine the extent to which federal agencies have defined the role of the CISO in accordance with law and guidance, we collected information security policies and procedures from the 24 major departments and agencies. We then evaluated each agency's policies to determine whether responsibility for ensuring that information security activities are implemented had been assigned to the CISO in accordance with FISMA 2014. In addition, we collected and reviewed each agency's current organization chart(s) depicting the CISO's position relative to the head of the agency, other senior officials, and component CISOs, if applicable. We also asked each agency to supply the name of each of the individuals who had served as CISO at the agency since 2010.

To describe key challenges of federal agency CISOs in exercising their authority to ensure that agency-wide information security programs are developed, documented, and implemented, we developed and administered a web-based survey instrument to the CISO at each of the 24 major departments and agencies in coordination with our survey methodology expert. In the survey, we asked CISOs to identify whether they felt that they had sufficient levels of responsibility and authority. In addition, we asked CISOs to identify factors that challenged them in exercising their authority and to identify specific challenges related to these factors. We then reviewed the responses provided by the CISOs and interviewed each of them to validate responses from the survey and to obtain additional insight into the challenges they identified. From the survey and interview responses, we analyzed CISOs' comments to identify challenges common across multiple agencies. To minimize errors that might occur from respondents interpreting our questions differently from our intended purpose, we pretested the questionnaire in person and by phone with the CISOs at three agencies.
The selection of agencies for pretesting was based on agency availability to assist us with pretesting, variation in agency size, and variation in agency security governance models (i.e., centralized or decentralized). During these pretests, we asked each CISO to complete the survey as we listened to the process. We then interviewed the respondents to check whether the questions were applicable, clear, unambiguous, and easy to understand. We revised the survey based on the feedback provided during the pretests prior to sending the final survey to the agency CISOs. All 24 Chief Financial Officers Act agency CISOs completed the final survey, although not all survey respondents answered every question.

The practical difficulties of conducting any survey may introduce non-sampling errors. For example, differences in how a particular question is interpreted, the sources of information available to respondents, or the types of respondents who do not respond to a question can introduce errors into the survey results. We included steps in both the data collection and data analysis stages to minimize such non-sampling errors. We examined the survey results and performed computer analyses to identify inconsistencies and other indications of error, and we addressed such issues as necessary. We analyzed responses to closed-ended questions by counting the responses for all agencies. For questions that asked respondents to provide a narrative answer, we compiled the answers in one spreadsheet that was analyzed and used as examples in the report.

To assess any OMB efforts to provide guidance on the implementation of new FISMA 2014 requirements for agencies to ensure that senior officials carry out their responsibilities and to hold personnel accountable, we analyzed OMB memoranda establishing requirements for federal information security to determine whether they addressed matters of information security governance and the role of the CISO. We also met with representatives from OMB to obtain their views on the new FISMA requirements, the role of CISOs in carrying them out, and the role of OMB in providing guidance for agencies in implementing the new requirements.

We conducted this performance audit from June 2015 to August 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

To ensure that the role of the chief information security officer (CISO) is defined in department policy in accordance with the Federal Information Security Modernization Act of 2014 (FISMA 2014), we recommend that the Secretary of Commerce take the following action:

- Define the CISO's role in department policy for ensuring that plans and procedures are in place to ensure recovery and continued operations of the department's information systems in the event of a disruption.

In its comments on a draft of this report, the Department of Commerce concurred with our recommendation and stated that it planned to update the department's IT policy and program documents that define the roles and responsibilities of the CISO by September 30, 2017, with progress to be tracked quarterly. The department's comments are reprinted in appendix III.
The department also provided technical comments, which we have incorporated into the final report as appropriate.

To ensure that the role of the senior information security officer (SISO) is defined in department policy in accordance with FISMA 2014, we recommend that the Secretary of Defense take the following three actions:

- Define the SISO's role in department policy for ensuring that information security policies and procedures are developed and maintained.
- Define the SISO's role in department policy for ensuring that the department has procedures for incident detection, response, and reporting.
- Define the SISO's role in department policy for oversight of security for information systems that are operated by contractors on the department's behalf.

In its comments on a draft of this report, DOD stated that it did not concur with the first recommendation and partially concurred with the other two.

Our draft report included five additional recommendations to DOD: that the department define the SISO's role in department policy (1) for ensuring that subordinate security plans are documented for the department's information systems; (2) for ensuring that security controls are tested periodically; (3) for ensuring that the department has a process for planning, implementing, evaluating, and documenting remedial actions; (4) for ensuring that plans and procedures are in place to ensure recovery and continued operations of the department's information systems in the event of a disruption; and (5) in the periodic authorization of the department's information systems. DOD did not concur with four of these draft recommendations and partially concurred with one of them. DOD stated that the SISO organization maintains a knowledge service that provides component organizations with DOD-specific assignment values for contingency planning security controls, implementation guidance, and assessment procedures, and that the department's risk management framework policy defines the SISO's role in security planning, security control testing, remedial actions, and system authorization activities. We reviewed DOD's cybersecurity instruction and risk management framework policy and confirmed that the department's statements are accurate. Therefore, we have made appropriate changes in the report to reflect this information, including withdrawing these five recommendations from the final report.

DOD did not concur with our recommendation that the department define the SISO's role in department policy for ensuring that information security policies and procedures are developed and maintained. The department's response stated that, according to the DOD cybersecurity instruction, the SISO is responsible for directing and coordinating the DOD cybersecurity program and carrying out the CIO's responsibilities in accordance with FISMA 2014; accordingly, the SISO is responsible for developing and maintaining information security policies as stated in FISMA 2014. The department also noted that it had provided us with an organization chart showing that the DOD SISO organization included a cybersecurity policy division. However, we still believe that the SISO's role with respect to information security policies and procedures is not sufficiently defined.
This is because neither the cybersecurity instruction nor any other policy document provided to us described any specific responsibilities of the SISO in ensuring that information security policies and procedures are developed and maintained, nor did they describe the responsibilities of the cybersecurity policy division. The SISO is the official with responsibility for directing and coordinating the department's cybersecurity program. Therefore, it is important that the SISO's role in ensuring that information security policies and procedures are developed and maintained be clearly defined in DOD policy. We therefore believe that our recommendation is warranted.

DOD partially concurred with our recommendation that it define the SISO's role in department policy for ensuring that the department has procedures for incident detection, response, and reporting. The department stated that responsibility for managing the incident handling program has been assigned to Cyber Command within U.S. Strategic Command by the Secretary of Defense, and that the department's incident handling program is documented in Chairman of the Joint Chiefs of Staff Manual 6510.01B, "Cyber Incident Handling Program." The department also noted that the SISO organization plans to publish a new cyber incident handling manual to replace the existing Chairman of the Joint Chiefs of Staff manual. It will be important for the new manual to clearly define the role of the SISO in the incident handling process. We therefore continue to believe that our recommendation is warranted.

DOD partially concurred with our recommendation that it define the SISO's role in department policy for oversight of contractor system security. The department stated that the SISO organization has developed and maintains policies providing direction to DOD components on oversight of contractor system security, including policies on defense industrial base cyber security/information assurance activities and on the security of unclassified DOD information on non-DOD information systems. DOD also stated that the SISO will review the CIO and component SISO responsibilities in the regularly scheduled updates to these policies. The department further stated that its national industrial security program operating manual describes how the Director of the Defense Security Service monitors and oversees the information security practices of contractors and vendors processing classified DOD information, and that Defense Federal Acquisition Regulation Supplement Subpart 204.73 requires contractors to implement security requirements. However, none of these documents (the policies on defense industrial base cyber security/information assurance activities and on the security of unclassified DOD information on non-DOD information systems, the national industrial security program operating manual, and the Defense Federal Acquisition Regulation Supplement) specifies any roles or responsibilities for the DOD SISO in the area of contractor system security. While it may be appropriate to review the responsibilities of the DOD CIO and component SISOs, because the SISO is the official with responsibility for directing and coordinating the department's cybersecurity program, it is important that the responsibilities of the SISO in overseeing the security of contractor systems be clearly defined in DOD policy. We therefore believe that our recommendation is warranted. DOD's comments are reprinted in appendix IV.
To ensure that the role of the CISO is defined in department policy in accordance with FISMA 2014, we recommend that the Secretary of Energy take the following six actions:

- Define the CISO's role in department policy for ensuring that subordinate security plans are documented for the department's information systems.
- Define the CISO's role in department policy for ensuring that all users receive information security awareness training.
- Define the CISO's role in department policy for ensuring that the department has a process for planning, implementing, evaluating, and documenting remedial actions.
- Define the CISO's role in department policy for ensuring that plans and procedures are in place to ensure recovery and continued operations of the department's information systems in the event of a disruption.
- Define the CISO's role in department policy for oversight of security for information systems that are operated by contractors on the department's behalf.
- Define the CISO's role in department policy in the periodic authorization of the department's information systems.

In its comments on a draft of this report, DOE concurred in principle with our recommendations and stated that it is meeting implementation requirements as stated in FISMA 2014 through delegation memoranda and other supporting directives in a manner that supports the department's diverse missions while focusing on ensuring an enterprise-wide approach to cybersecurity. The department also agreed that further codification of the role of the CISO is appropriate within department policies. DOE stated that it is undertaking a review of its cybersecurity program order and will consider GAO's recommendations during that process.

To ensure that the role of the CISO is defined in department policy in accordance with FISMA 2014, we recommend that the Secretary of Health and Human Services take the following action:

- Define the CISO's role in department policy for ensuring that plans and procedures are in place to ensure recovery and continued operations of the department's information systems in the event of a disruption.

In its comments on a draft of this report, HHS concurred with our recommendation and stated that the updates to policy are to be made in conjunction with anticipated revisions of NIST SP 800-53, revision 5. The department's comments are reprinted in appendix VI.

To ensure that the role of the CISO is defined in department policy in accordance with FISMA 2014, we recommend that the Secretary of the Interior take the following four actions:

- Define the CISO's role in department policy for ensuring that subordinate security plans are documented for the department's information systems.
- Define the CISO's role in department policy for ensuring that plans and procedures are in place to ensure recovery and continued operations of the department's information systems in the event of a disruption.
- Define the CISO's role in department policy for oversight of security for information systems that are operated by contractors on the department's behalf.
- Define the CISO's role in department policy in the periodic authorization of the department's information systems.

In its comments on a draft of this report, the Department of the Interior concurred with our four recommendations and stated that it is currently updating its policies to ensure that the recommendations are implemented. The department's comments are reprinted in appendix VIII.
To ensure that the role of the CISO is defined in department policy in accordance with FISMA 2014, we recommend that the Attorney General take the following two actions:

- Define the CISO's role in department policy for ensuring that information security policies and procedures are developed and maintained.
- Define the CISO's role in department policy for ensuring that plans and procedures are in place to ensure recovery and continued operations of the department's information systems in the event of a disruption.

In its comments on a draft of this report, DOJ concurred with our recommendations and stated that the department has clarified the CISO's responsibilities in a revised policy, which is expected to be released in August 2016. The department's comments are reprinted in appendix IX.

To ensure that the role of the CISO is defined in department policy in accordance with FISMA 2014, we recommend that the Secretary of State take the following action:

- Define the CISO's role in department policy for ensuring that the department has procedures for incident detection, response, and reporting.

In its comments on a draft of this report, the Department of State stated that it concurred with our finding and plans to correct policy guidance to reflect that the Security Infrastructure/Cybersecurity/Monitoring and Incident Response Division within the Bureau of Diplomatic Security is the entity responsible for incident response. Further, it stated that the bureaus of Information Resource Management and Diplomatic Security are continuing to work to further coordinate communications for incident response. The department's comments are reprinted in appendix X.

To ensure that the role of the CISO is defined in department policy in accordance with FISMA 2014, we recommend that the Secretary of Transportation take the following two actions:

- Define the CISO's role in department policy for ensuring that subordinate security plans are documented for the department's information systems.
- Define the CISO's role in department policy for ensuring that security controls are tested periodically.

In comments on a draft of this report provided via e-mail on July 22, 2016, by an Audit Relations Analyst in DOT's Audit Relations and Program Improvement office, the department stated that it concurred with the findings and recommendations in our report.

To ensure that the role of the CISO is defined in department policy in accordance with FISMA 2014, we recommend that the Secretary of the Treasury take the following seven actions:

- Define the CISO's role in department policy for ensuring that subordinate security plans are documented for the department's information systems.
- Define the CISO's role in department policy for ensuring that all users receive information security awareness training.
- Define the CISO's role in department policy for ensuring that security controls are tested periodically.
- Define the CISO's role in department policy for ensuring that plans and procedures are in place to ensure recovery and continued operations of the department's information systems in the event of a disruption.
- Define the CISO's role in department policy for ensuring that personnel with significant security responsibilities receive appropriate training.
- Define the CISO's role in department policy for oversight of security for information systems that are operated by contractors on the department's behalf.
- Define the CISO's role in department policy in the periodic authorization of the department's information systems.
In comments on a draft of this report provided via e-mail on August 3, 2016, a representative from Treasury's Office of the Associate CIO stated that Treasury concurred with our recommendations. The department also provided technical comments, which we have incorporated into the final report as appropriate.

To ensure that the role of the senior agency information security officer (SAISO) is defined in agency policy in accordance with FISMA 2014, we recommend that the Administrator of the Environmental Protection Agency take the following three actions:

- Define the SAISO's role in agency policy for ensuring that subordinate security plans are documented for the agency's information systems.
- Define the SAISO's role in agency policy for ensuring that plans and procedures are in place to ensure recovery and continued operations of the agency's information systems in the event of a disruption.
- Define the SAISO's role in agency policy in the periodic authorization of the agency's information systems.

In its comments on a draft of this report, the Environmental Protection Agency agreed with our recommendations and stated that the agency expected to implement them by July 29, 2016. The agency's comments are reprinted in appendix XI.

To ensure that the role of the SAISO is defined in agency policy in accordance with FISMA 2014, we recommend that the Administrator of the National Aeronautics and Space Administration take the following action:

- Define the SAISO's role in agency policy for oversight of security for information systems that are operated by contractors on the agency's behalf.

In its comments on a draft of this report, NASA concurred with our recommendation and stated that the agency expects to implement it by December 9, 2016. NASA's comments are reprinted in appendix XII.

To ensure that the role of the CISO is defined in agency policy in accordance with FISMA 2014, we recommend that the Administrator of the Small Business Administration take the following action:

- Define the CISO's role in agency policy for ensuring that personnel with significant security responsibilities receive appropriate training.

In comments on a draft of this report provided via e-mail on July 22, 2016, a program manager in SBA's Office of Congressional and Legislative Affairs stated that the agency agreed with our recommendation and had no comments on the report.

To ensure that the role of the CISO is defined in agency policy in accordance with FISMA 2014, we recommend that the Administrator of the U.S. Agency for International Development take the following action:

- Define the CISO's role in agency policy for oversight of security for information systems that are operated by contractors on the agency's behalf.

In its comments on a draft of this report, USAID agreed with our recommendation and stated that the Office of the Administrator, in coordination with the Office of the Chief Information Officer, will update operational policy to define the CISO's role in oversight of contractor system security. The agency's comments are reprinted in appendix XIII.

In addition to the individual named above, Nick Marinos (assistant director), William Cook (analyst in charge), Quintin Dorsey, Wayne Emilien, Paris Hawkins, Wil Holloway, Alan MacMullin, Lee McCracken, David Plocher, Kelly Rubin, Edward Varty, Brian Vasquez, and Adam Vodraska made significant contributions to this report.
Federal agencies face an ever-increasing array of cyber threats to their information systems and information. To address these threats, FISMA 2014 requires agencies to designate a CISO—a key position in agency efforts to manage information security risks.

GAO was asked to review current CISO authorities. This report identifies (1) the key responsibilities of federal CISOs established by federal law and guidance and the extent to which federal agencies have defined the role of the CISO in accordance with law and guidance and (2) key challenges of federal CISOs in fulfilling their responsibilities. GAO reviewed agency security policies, administered a survey to 24 CISOs, interviewed current CISOs, and spoke with officials from OMB.

Under the Federal Information Security Modernization Act of 2014 (FISMA 2014), the agency chief information security officer (CISO) has the responsibility to ensure that the agency is meeting the requirements of the law, including developing, documenting, and implementing the agency-wide information security program. However, 13 of the 24 agencies GAO reviewed had not fully defined the role of their CISO in accordance with these requirements. For example, these agencies did not always identify a role for the CISO in ensuring that security controls are periodically tested; that procedures are in place for detecting, reporting, and responding to security incidents; or that contingency plans and procedures for agency information systems are in place. Thus, CISOs' ability to effectively oversee these agencies' information security activities can be limited.

The 24 CISOs GAO surveyed identified challenges that limited their authority to carry out their responsibilities to oversee information security activities. These challenges can affect agencies' ability to effectively manage information security risk. The factors that CISOs reported as being the most challenging to their authority included coordinating with component organizations and other offices, obtaining reliable and timely information from other entities within the agency, and elevating security concerns to agency leadership. The 24 CISOs also reported that other factors posed challenges to their ability to carry out their responsibilities effectively, including difficulties related to having sufficient staff; recruiting, hiring, and retaining security personnel; ensuring that security personnel have appropriate expertise and skills; and a lack of sufficient financial resources. Several government-wide activities are under way to address many of these challenges.

However, while the Office of Management and Budget (OMB) has a statutory responsibility under FISMA 2014 to provide guidance on information security in federal agencies, it has not issued such guidance addressing how agencies should ensure that officials carry out their responsibilities and that personnel are held accountable for complying with the agency-wide information security program. As a result, agencies lack clarity on how to ensure that their CISOs have adequate authority to effectively carry out their duties in the face of numerous challenges.

GAO is making 33 recommendations to 13 agencies to fully define the role of their CISOs in accordance with FISMA 2014. Twelve of the 13 agencies concurred with the recommendations addressed to them. One agency partially concurred or did not concur with the recommendations directed to it. GAO continues to believe that these recommendations are valid and should be implemented as discussed in this report. GAO also recommends that OMB issue guidance clarifying CISOs' roles in light of identified challenges. OMB partially concurred with the recommendation.
GAO maintains that action is needed as discussed further in the report.
HUD is the principal government agency responsible for programs dealing with housing, community development, and fair housing opportunities. HUD's missions include making housing affordable through FHA's mortgage insurance for multifamily housing and the provision of rental assistance for about 4.5 million lower-income residents, helping revitalize over 4,000 localities through community development programs, and encouraging homeownership by providing mortgage insurance. HUD is one of the nation's largest financial institutions, responsible for managing more than $426 billion in mortgage insurance and $497 billion in guarantees of mortgage-backed securities, as of September 30, 1996. The agency's budget authority for fiscal year 1998 is about $24 billion.

HUD's major program areas are Housing, which includes FHA insurance and project-based rental assistance programs; Community Planning and Development (CPD), which includes programs for Community Development Block Grants, empowerment zones/enterprise communities, and assistance for the homeless; Public and Indian Housing (PIH), which provides funds to help operate and modernize public housing and administers tenant-based rental assistance programs; and Fair Housing and Equal Opportunity (FHEO), which is responsible for investigating complaints and ensuring compliance with fair housing laws.

HUD has been the subject of sustained criticism for weaknesses in its management and oversight abilities, which have made it vulnerable to fraud, waste, abuse, and mismanagement. In 1994, we designated HUD as a high-risk area because of four long-standing Department-wide management deficiencies: weak internal controls, inadequate information and financial management systems, an ineffective organizational structure, and an insufficient mix of staff with the proper skills. In February 1997, we reported that HUD had formulated approaches and initiated actions to address these deficiencies but that its efforts were far from reaching fruition.

HUD began a number of reform and downsizing efforts prior to the 2020 plan. In February 1993, then-Secretary Cisneros initiated a "reinvention" process in which task forces were established to review and refocus HUD's mission and identify improvements in the delivery of program services. HUD also took measures in response to the National Performance Review's September 1993 report, which recommended that HUD eliminate its regional offices, realign and consolidate its field office structure, and reduce its field workforce by 1,500 by the close of fiscal year 1999. Following a July 1994 report by the National Academy of Public Administration that criticized HUD's performance and capabilities, Secretary Cisneros issued a reinvention proposal in December 1994 that called for major reforms, including a consolidation and streamlining of HUD's programs coupled with a reduction in staff to about 7,500 by the year 2000.

Secretary Cuomo initiated the 2020 planning process in early 1997 to address, among other things, HUD's needs for downsizing and correcting management deficiencies. The process included, for each major program area, (1) management reform teams that outlined each area's business and organizational structure, proposed functional changes, identified resource requirements, and allocated staff based on downsizing targets; (2) "change agent" teams that recommended consolidations and other process changes while meeting downsizing targets; and (3) review of these teams' reports by the Secretary and principal staff.
Members of the management reform and change agent teams were drawn from all levels of the agency. The plan has continued to evolve since June 1997, as implementation teams proceed with their work.

HUD's principal documents supporting the 2020 plan are management reform and change agent reports covering each of the agency's major program areas and functions. Prepared in the spring of 1997, these reports identify a number of potential efficiencies from consolidating and centralizing processes. Beyond allowing the agency to operate with a reduced workforce, other efficiencies include reducing the processing time for single-family housing insurance endorsements and multifamily housing development applications and reducing paperwork requirements for grant programs. The potential efficiencies are generally not based on detailed empirical analyses or studies, but rather on a variety of factors, including some workload data, limited results of a pilot project, identified best practices in HUD field offices, benchmarks from other organizations, and managers' and staff's experience and judgment.

In addition to increased efficiency, HUD expects the planned consolidation of functions and other process changes to result in increased effectiveness. For example, fewer public housing authorities and FHA multifamily projects may become "troubled" because staff can better focus on monitoring and improving the performance of the authorities and projects that are potentially troubled. The following sections discuss, for each of HUD's major program areas—Housing, Community Planning and Development, Public and Indian Housing, and Fair Housing and Equal Opportunity—the specific process changes proposed in the 2020 plan, the potential efficiencies and other benefits expected from the changes, and the studies or other information HUD provided as support for the changes.

HUD's 2020 plan calls for significant organizational and process changes in three primary functions of FHA—single-family housing activities, multifamily housing activities, and the FHA Comptroller's activities. As discussed below, the nature and detail of the studies and analyses supporting the process changes vary among the offices.

Process changes proposed for single-family housing include

- consolidating functions, such as insurance endorsements, that were previously carried out in 81 field offices into four homeownership centers;
- privatizing or contracting out most property disposition activities (HUD has to dispose of FHA-insured single-family properties that it owns as a result of lenders' foreclosures on defaulted mortgages); and
- eliminating most loan-servicing functions by selling the inventory of HUD-held mortgages.

HUD expects the reforms to permit a significant reduction in staffing requirements, reduce insurance endorsement processing time to as little as 1 day (compared with an average of about 2 weeks), improve underwriting and loss mitigation, and increase loans to targeted populations through outreach. HUD also expects the reforms to address problems such as poor control and monitoring of HUD-owned properties and inconsistent delivery of quality services. According to the Deputy Assistant Secretary for Single Family Housing, an in-house team of senior managers developed the homeownership center concept based upon the regional office structure of the Federal National Mortgage Association (Fannie Mae).
Fannie Mae serves the entire United States through offices in Atlanta, Georgia; Chicago, Illinois; Dallas, Texas; Pasadena, California; and Philadelphia, Pennsylvania. Certain functions performed by FHA generally parallel some of those performed by other organizations in the single-family mortgage industry, such as Fannie Mae. In 1994, as a pilot project, FHA began consolidating its single-family loan-processing operations that were performed in 17 of its field offices into the Denver Homeownership Center. According to HUD, the pilot showed that consolidating work at one site and increasing the use of technology could reduce insurance endorsement processing time from 2 weeks to as little as 1 day. In addition, according to the change agent report, the functions in the Denver Homeownership Center were carried out with half the staff who were responsible for the functions in the 17 field offices.

Process changes in FHA's multifamily housing activities include

- consolidating the asset development and management functions into 18 hubs supported by staff in 33 program centers;
- implementing a fast-track loan development process, which allows field offices to waive certain loan-processing requirements and tailor processing options to local needs and requires lenders to order and pay for the appraisals and inspections; and
- consolidating financial and physical assessments of properties, enforcement, and rental assistance functions—along with similar functions in other program areas—into three nationwide centers. (The three are the Assessment Center, the Enforcement Center, and the Section 8 Financial Management Center.)

Efficiencies projected from the changes, according to HUD, include (1) reducing the processing time for housing development applications from 360 days to 35 days and (2) using nonfederal experience as a model, reducing individual asset managers' average workloads from 55 projects to 35 (primarily because some functions, such as inspections and enforcement actions, will be handled in part by the enforcement and assessment centers). In addition, HUD expects the changes to address problems such as inconsistency in processing loan development applications, in terms of both time and procedures; a failure to hold mortgagees accountable, which puts HUD at greater risk; asset managers overburdened with unrelated responsibilities; the lack of an efficient system to identify, assess, and respond to troubled properties; and an inefficient and burdensome administration system for Section 8 rental assistance.

Multifamily housing officials provided some empirical data for the projected efficiencies. For example, support for the reduction in asset managers' workload included some data on workloads in nonfederal organizations that perform similar functions and HUD's own workload analysis, which is based on its current inventory of properties. The nonfederal workload ratios varied from 18 to 37 projects per project manager. Multifamily housing officials allocated staffing to the field offices (hubs and centers) based, in part, upon the following ratios: 35 insured projects with subsidies per staff person, 55 insured projects without subsidies per staff person, and 16 projects per staff person for preventing projects from becoming troubled.

A HUD survey of multifamily housing field offices showed reductions in processing time and costs using the fast-track process. Anecdotal responses from 14 offices included comments such as, "The old way took 60 to 90 days, some time longer. Processing at any one stage typically takes 30 to 40 days often much shorter;" "FAST-TRACK cut staff time from 120 hours per case to 40 hours per case;" and "Estimated savings $17,000 to $20,000 per case in contracting costs." Other factors that influenced the restructuring of multifamily housing offices and functions were the experiences of cross-functional teams (staffed from different offices to assist in the handling of workload problems) and field office staff's experiences.
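To make the staffing ratios above concrete, the following minimal sketch, written in Python, estimates a field office's staffing needs from its project inventory using the three ratios cited by multifamily housing officials. The inventory counts and the simple additive rule are hypothetical illustrations, not HUD's actual allocation method, which also weighed factors such as supervisor/staff ratios and the mix of troubled projects.

    import math

    # Workload ratios cited by multifamily housing officials
    # (projects per staff person).
    SUBSIDIZED_PER_STAFF = 35     # insured projects with subsidies
    UNSUBSIDIZED_PER_STAFF = 55   # insured projects without subsidies
    PREVENTION_PER_STAFF = 16     # projects monitored to keep them from
                                  # becoming troubled

    def staff_needed(subsidized, unsubsidized, prevention):
        """Estimate staff for one hub or program center from its inventory."""
        return (math.ceil(subsidized / SUBSIDIZED_PER_STAFF)
                + math.ceil(unsubsidized / UNSUBSIDIZED_PER_STAFF)
                + math.ceil(prevention / PREVENTION_PER_STAFF))

    # Hypothetical hub inventory: 700 subsidized projects, 550 unsubsidized
    # projects, and 160 projects in the troubled-prevention caseload.
    print(staff_needed(700, 550, 160))  # 20 + 10 + 10 = 40 staff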
In accordance with the 2020 plan, the FHA Comptroller has redesigned the Title I debt collection process and consolidated operations from three centers into one center (Albany, New York). In addition, the Comptroller plans to transfer routine debt collection to the Treasury Department or, if this does not prove to be feasible, to a private contractor. The process changes are being made to address two major problems: (1) the recovery processes were cumbersome and poorly integrated with other processes, such as insurance premium collection from lenders and claims examination, and (2) the resources invested were not justified by the level of assets recovered. The FHA Comptroller believes that the changes will result in increased debt collection with significantly fewer staff. The changes and benefits identified are based upon a business process redesign effort, including a workforce study, that was completed in January 1997. The process redesign showed that over a 10-year period, debt collection could increase 23 percent using fewer than half the existing number of staff. The process redesign team included a staff-level team; a management and stakeholder steering committee; and a contractor that provided consulting services.

Prior to the 2020 plan, CPD consolidated the process of grantee planning and reporting for four formula grant programs and initiated a new automated system for the process. Additional changes proposed by the 2020 plan include using advanced mapping software to aid community planning, converting competitive grants providing assistance for the homeless to formula grants, and aligning resource needs and responsibilities within a new Economic Development and Empowerment Service. The reforms are meant to address problems such as fragmented approaches to solving community concerns, limited resources for managing the over 1,300 competitive grants CPD approves in a year, and limited staffing for local monitoring of programs. From the reforms, CPD expects to (1) continue to reduce paperwork requirements; (2) improve the monitoring and review of grantees by targeting its resources to high-risk projects; and (3) reduce its workload for processing, awarding, and monitoring grant applications and grantees' activities.

CPD did not provide empirical or analytical studies supporting the efficiencies expected from the reforms. CPD officials said, however, that their operations demonstrate the viability of the process changes because many of the changes are already in place and personnel reductions had occurred prior to the 2020 plan. However, the conversion of the competitive grants to formula grants requires legislation, and if this does not occur, some monitoring activities may have to be contracted out.
Process changes in PIH include

- consolidating some of the functions previously performed in 52 public housing field offices into 27 hubs and 16 program centers;
- centralizing and consolidating enforcement, real estate assessment, and Section 8 payment functions into three nationwide centers along with other program areas;
- centralizing the management of competitive grants and public housing operating and capital funds into one PIH Grants center;
- centralizing applications for PIH demolition/disposition, designated housing plans, and homeownership plans into one Special Applications center;
- centralizing functions to improve the performance of troubled public housing authorities into two Troubled Agency Recovery centers; and
- deregulating (reducing monitoring and reporting requirements for) small and high-performing public housing authorities.

HUD envisions that the consolidation of the field offices will even out the public housing authority workload across offices, while the specialization of functions will result in less time and fewer staff needed to carry out the functions. The reforms are meant to address problems such as a lack of monitoring and coordination of PIH programs, staffing imbalances among PIH field offices, and difficulty in identifying and resolving problems with housing authorities early, given the intensive field resources needed to deal with troubled authorities. PIH did not provide empirical data or analyses showing how the changes will produce the expected efficiencies. As discussed further in this report, PIH used workload and staffing data to redistribute the workload across its field offices. Other support for the changes, according to PIH officials, is based on managers' and staff's past experiences.

Process changes in FHEO include

- consolidating its existing field structure of 48 offices into 10 hubs, 9 project centers, and 23 program offices;
- consolidating, within both its headquarters and field offices, program compliance monitoring and enforcement functions; and
- cross-training field staff.

HUD intends the changes to result in more flexibility to shift resources to meet priorities or handle workload demands; improved communication and cooperation among FHEO staff; an organizational structure that will be clearer to the public; and better integration of fair housing into HUD's other programs. The changes address problems such as fragmentation of responsibility and accountability in areas such as policy development, planning, and program evaluation; duplication of field oversight functions; and a split in field management between enforcement and program compliance functions, resulting in a "two FHEO" phenomenon. FHEO did not perform analytical studies to support the changes. Rather, the reforms and benefits identified were based on FHEO's self-analysis, brainstorming sessions, the findings of a change agent team, a review of workload data, and discussions with employees and customers.

According to the Deputy Secretary, the process changes proposed by the 2020 plan, along with partnerships with states and local entities and the use of contractors, will allow the agency to operate with 7,500 staff—a staffing target level established prior to the plan. Proposed staffing levels for each program area, as outlined in the management reform team and change agent team reports, are generally not based upon systematic workload analyses to determine needs.
While the teams were instructed by the Deputy Secretary to determine staffing requirements on the basis of workload, they were also instructed to work within targeted staffing levels and HUD's staffing constraints. The teams relied on a variety of factors, including workload data, to show whether they could carry out their responsibilities within assigned targeted staffing levels.

The 2020 plan proposes a staffing target of 2,900 for the Office of Housing, a reduction of about 44 percent from fiscal year 1996 staffing of 5,157. The 2,900 figure includes some positions that will be transferred to the Department-wide Assessment, Enforcement, and Section 8 Financial Management centers; the exact numbers are still evolving as implementation plans are developed for the three centers. The following sections discuss some of the factors considered in assessing the Housing Office's staffing needs.

FHA's proposal to carry out single-family housing activities with the reduced staffing level of 764 (as of January 1998) stems primarily from the elimination of most loan servicing and property disposition activities. According to the Deputy Assistant Secretary for Single Family Housing, the proposed staffing level is based on past experience, input from the change agent team and the managers of the 2020 reorganization project, and staffing levels at the Denver Homeownership Center pilot.

Staffing for the Title I Asset Recovery Center, part of the FHA Comptroller's office, was based in part on a workload analysis performed as part of the business process reengineering project. The workload analysis showed a need for a staffing level of 62. This number was reduced to 50, according to FHA officials, after (1) discussions with Department of the Treasury officials who, based on their experience with debt collection activities, believed the operations could be performed more efficiently and (2) higher-level reviews, which concluded that further reductions were needed.

When assessing multifamily housing staffing needs, FHA considered factors such as job functions, types of housing projects (subsidized or unsubsidized, troubled or nontroubled), supervisor/staff ratios recommended by the National Performance Review, and nonfederal workloads for asset managers. As part of its assessment, FHA assumed that it will reduce troubled projects to 10 percent of the inventory (from an estimated 20 percent currently) by the year 2000.

The 2020 plan proposes a staffing target of 770 for Community Planning and Development, a reduction of 8.8 percent from fiscal year 1996 staffing of 844. However, the CPD management reform plan states that an additional 200 personnel may be needed to fully implement its grants management system and undertake adequately staffed on-site monitoring of high-risk projects. This staffing need is based, according to a CPD official, on staffing and workload data from 1992 and 1996. According to the official, the analysis used a formula that takes into consideration the number of grants, the dollar amount of grants, and staffing levels and compared workloads for the 2 years. CPD was unable to provide documentation of the detailed analysis.

For Public and Indian Housing, the 2020 plan proposes a staffing target of 1,165, a reduction of 14 percent from fiscal year 1996 staffing of 1,355. After receiving its staffing target, PIH first identified the needs of the processing and operations centers.
It then allocated the remaining staff to field office sites using a formula that incorporated the number of public housing authorities with 250 or more low-income housing units and/or 500 or more Section 8 rental assistance units within each office's jurisdiction.

The 2020 plan proposes a staffing target of 591 for Fair Housing and Equal Opportunity, a reduction of about 11 percent from fiscal year 1996 staffing of 663. Of the 591 staff, 475 will be in field offices. In 1996, FHEO reviewed field office workload data and estimated that it needed from about 150 to about 250 more staff than the 474 then on board. However, officials told us that the Office's legislatively established missions can be accomplished with the allotted personnel level.

In its latest semiannual report, HUD's Inspector General raised concerns about the 2020 plan, including the agency's capacity to implement the reforms. The report noted that the downsizing target of 7,500 was adopted without first performing a detailed analysis of HUD's mission and projected workload under its proposed reforms. The report also noted that although HUD is downsizing, implementation plans are not final, and the proposed legislation to streamline and consolidate programs has not been enacted.

In commenting on a draft of this report, HUD's Acting Deputy Secretary stated that the Department plans to achieve its downsizing goal of 7,500 full-time employees by 2002 in two phases. During the first phase, HUD has reduced staff to approximately 9,000 employees, who are being deployed to enhance the delivery of HUD's programs and services. According to the Acting Deputy Secretary, HUD now plans to continue downsizing to 7,500 by 2002—the second phase—only if (1) the Congress enacts legislation to consolidate HUD's program structure and (2) there has been a substantial reduction in the number of troubled multifamily assisted properties and troubled public housing authorities.

On August 10, 1997, HUD and the American Federation of Government Employees National Council of HUD Locals 222 signed an implementation agreement to carry out the 2020 plan. The agreement, among other things, stated that buyouts, attrition, and aggressive outplacement services would be used in lieu of reductions in force through the year 2002. The agreement identified two types of positions that would be filled to implement the reforms: substantially similar positions (those that entail similar duties, critical elements, and qualification requirements and can be performed by the incumbent with little loss in productivity) and new positions.

The procedures outlined in the agreement to fill substantially similar positions are as follows:

1. Reassignments to similar positions will be in the local commuting area.
2. Positions not filled by reassignments will be filled by merit selection.
3. Any positions still vacant will be filled by management's directed reassignment of an employee. (Because of employees' concerns, HUD has decided not to use this procedure.)
4. Any positions still vacant will be filled by outside hires.

The procedures outlined in the agreement to fill new positions are as follows:

1. For HUD's new consolidated centers, positions will be filled using merit selection procedures. Except for positions that require special skills—for example, HUD attorneys and some Community Builders—merit staffing will be restricted to HUD employees.
2. Any positions still vacant will be filled by management's directed reassignments. (Because of employees' concerns, HUD has decided not to use this procedure.)
3. Any positions still vacant will be filled by outside hires.
(Because of employees’ concerns, HUD has decided not to use this procedure.) Any positions still vacant will be filled by outside hires. HUD initiated personnel actions to implement the 2020 reforms in September 1997. A buyout was held that closed September 30, 1997, in which 771 employees were approved to leave the agency. In October 1997, HUD mailed letters to each of its employees regarding their status under the reforms. HUD sent letters to 3,024 employees notifying them that their jobs were unaffected by the reforms. HUD sent letters to 3,184 employees notifying them that they would be voluntarily reassigned to substantially similar positions within the same geographical area. HUD sent letters to approximately 3,000 employees notifying them that they had not been placed in a position in HUD’s new organization. The letters also stated that they would remain in their current position if they did not obtain a position through merit staffing, or voluntary reassignment, or a career outside of HUD. The letter stated that HUD would not implement a reduction in force until 2002 if one was necessary. On October 16, 1997, according to HUD, it announced 1,676 merit staffing vacancies. The announcements closed November 3, 1997. In November, an Office of Personnel Management team reviewed HUD’s merit staffing guidance for filling these vacancies and made several suggestions for revising the language in the guidance. Also, in November, HUD announced a second buyout that employees had to take advantage of by December 23, 1997. An additional 230 employees were approved to leave the agency under the buyout. In January 1998, HUD announced additional voluntary reassignments for positions that remained unfilled. Any positions still vacant after the voluntary reassignments will be advertised for outside hires. The HUD 2020 Management Reform Plan is the latest in a series of recent proposals to overhaul a department that has been long-criticized for its management weaknesses—including those that contributed to our designation of HUD as a high-risk area. The plan is directed, in part, towards correcting the management deficiencies that we and others, including the Inspector General and the National Academy of Public Administration, have identified. The plan also incorporates steps for simultaneously reducing the agency’s workforce. The 2020 plan is still evolving. Because the reforms are not yet complete and some of the plan’s approaches are untested, the extent to which its proposed reforms will result in the plan’s intended benefits is unknown. In addition, because the downsizing target of 7,500 staff is not based upon a systematic workload analysis to determine needs, it is uncertain whether HUD will have the capacity to carry out its responsibilities once the reforms are in place. Furthermore, the plan references legislative proposals, some of which, if not enacted, could affect workloads and staffing needs. Moreover, the process changes and downsizing suggest a greater reliance on contractors to help carry out HUD’s mission. These uncertainties heighten the need for HUD, as it moves forward with implementing the 2020 plan’s reforms, to carefully monitor its performance, assess the impact of the reforms, and amend the plan if necessary—including its staffing targets. Consulting with the Congress, its customers, and other stakeholders through a mechanism such as the Government Performance and Results Act could enhance the success of these efforts. HUD provided comments on a draft of this report (see app. 
I). HUD said that the report did not consider the agency’s need for management reform and whether the plan focuses on the right areas. HUD also said that (1) due to its focus on the role of empirical analysis, the draft report did not adequately acknowledge other methods used to develop specific management reforms, (2) the draft report did not reflect that HUD undertook substantial workload analyses to plan for reaching the goal of 7,500 employees, and (3) the draft report failed to discuss any of the benefits likely to emerge from the plan’s systemic changes. In its comments, HUD also included information on the 2020 plan’s implementation status and how certain of its specific reforms are expected to address problems identified by its Inspector General, GAO, and others. Our draft report did not specifically assess HUD’s need for management reform and whether the plan focuses on the right areas because they were outside the scope of our objectives. However, the report contains background information on the agency’s history of management problems and its reform and downsizing efforts prior to the 2020 plan. We agree that there was a need for HUD to take action and that some actions included in the 2020 plan may help to correct deficiencies that we and others have identified. The 2020 plan seeks to solve many of the critical problems facing the Department. HUD’s recognition that it needs to establish Department-wide capacities for real estate assessment and enforcement activities; improve internal controls; and improve systems and staffing for monitoring funds and multifamily project and public housing authority activities is consistent with the long-standing concerns that we and others have had. In this regard, our report was not intended to fault HUD’s attempts to correct these deficiencies, and we have made changes where appropriate to reflect a proper tone. Regarding HUD’s comment about a focus on empirical analysis, two of our three objectives concerned the studies and analyses underlying (1) the efficiencies derived from centralizing and consolidating certain programs and activities and (2) the Department’s ability to carry out its responsibilities with the plan’s target staffing level of 7,500. By their nature, these questions encompass the role of empirical analysis. The draft report did acknowledge the role played by other factors—including the change agent and management reform teams, the experience of HUD managers and staff, the practices of other organizations, and the experience of the Denver Homeownership Center pilot project—in setting out the efficiencies HUD expects from centralizing and consolidating certain activities. In its comments, HUD said that, in addition to the factors cited in our draft report, it consulted with recognized management experts prior to the June 1997 release of the 2020 plan; consulted with affected constituent groups and the Congress since the plan’s release; and incorporated the Inspector General’s suggestions into its implementation plans. We agree that such steps may be useful in building support for HUD’s reforms. However, as noted, our objectives were to provide information on HUD’s analytical support for the efficiencies it expects from the reforms—that is, the extent of data supporting the anticipated quantitative and qualitative benefits stated in the 2020 plan. 
HUD said that it undertook substantial workload analyses to plan for reaching the goal of 7,500 employees and that the workload analyses—along with the reengineering of numerous processes—formed the foundation for staffing size and allocation decisions. As we noted in our draft report, HUD’s management reform and change agent teams relied on a variety of factors, including workload data, to show whether each program area could carry out its responsibilities within assigned targeted staffing levels. However, we draw a distinction between (1) analysis that is directed at determining how many staff are needed to carry out a given responsibility or function and (2) the use of historical workload data to apportion, or allocate, a predetermined target number of staff among different locations or functions. While HUD clearly used the latter approach, at least within some program areas, it provided us with no evidence during our review or in its comments that it used the former. Rather, as our report states, the management reform and change agent teams were instructed by the Deputy Secretary to work within targeted staffing levels; the predetermined target level for the entire Department was 7,500, a number established prior to the 2020 planning process. As is also noted in our report, HUD’s Inspector General reported in December 1997 that the downsizing target of 7,500 was adopted without first performing a detailed analysis of HUD’s mission and projected workload under its proposed reforms. We have revised the language in our report where appropriate to make this distinction clear. We also added information that HUD provided in its comments concerning future downsizing to the 7,500 level from the current level of about 9,000. Concerning HUD’s comment that the draft report did not acknowledge potential benefits from the 2020 reform plan, the report noted that the plan is directed in part towards correcting management deficiencies that we and others have identified. Furthermore, the report noted that, in addition to increased efficiency, HUD expects the planned consolidation of functions and other process changes to result in increased effectiveness, such as fewer troubled public housing agencies and troubled FHA multifamily projects. For the reasons stated in the report, we continue to believe that the extent to which these benefits will be realized is as yet uncertain. HUD implicitly acknowledges this uncertainty in its comments by conditioning its further downsizing in part on a “substantial” reduction in troubled public housing agencies and multifamily projects. To identify HUD’s analyses supporting (1) the prospective efficiencies from centralizing and consolidating major programs and activities and (2) the agency’s ability to carry out its responsibilities with 7,500 employees, we reviewed the management reform and change agent reports for each of HUD’s major program areas. We also interviewed officials in each program area who had participated in, or were familiar with, the process of developing the 2020 plan. We asked officials in each program area to provide any empirical studies or analyses underlying the proposed reforms that did not appear in the management reform or change agent reports. In addition, we spoke with officials in HUD’s Office of the Assistant Secretary for Administration and obtained the Inspector General’s report on the 2020 planning process.
To identify how HUD plans to manage the personnel changes that will result from the reforms and downsizing, we interviewed officials responsible for the changes and obtained copies of union agreements and other relevant documents. We performed our work from September 1997 through February 1998 in accordance with generally accepted government auditing standards. We are sending copies of this report to appropriate congressional committees; the Secretary of Housing and Urban Development; and the Director, Office of Management and Budget. We will make copies available to others upon request. If you or your staff have any questions, please call me at (202) 512-7631. Major contributors to this report are listed in appendix II. Results Act: Observations on the Department of Housing and Urban Development’s Draft Strategic Plan (GAO/RCED-97-224R, Aug. 8, 1997). High-Risk Series: Department of Housing and Urban Development (GAO/HR-97-12, Feb. 1997).
Pursuant to a congressional request, GAO reviewed aspects of the management reform proposals outlined in the Department of Housing and Urban Development's (HUD) 2020 Management Reform Plan, focusing on: (1) studies and analyses that HUD performed to determine the efficiencies derived from the centralization and consolidation of the Federal Housing Administration (FHA) and other major programs and activities; (2) studies and workload analyses that were conducted to show that HUD would be able to carry out its responsibilities with 7,500 employees; and (3) HUD's plan to manage the personnel changes that will result from its reforms and downsizing. GAO noted that: (1) reports covering each of HUD's major program areas and functions, prepared by teams of HUD employees in the spring of 1997, are the principal documents supporting the 2020 plan; (2) the reports identify a number of prospective efficiencies from consolidating and centralizing certain processes; (3) in addition to allowing the agency to operate with a reduced workforce, HUD intends the changes to reduce the time or paperwork required for various processes; (4) the efficiencies cited are generally not based upon detailed empirical analyses or studies, but rather on a variety of information, including some workload data, limited results from a pilot project, identified best practices in HUD field offices, benchmarks from other organizations, and managers' and staff's experiences and judgment; (5) the plan is directed in part towards correcting the management deficiencies that have been identified; (6) because the reforms are not yet complete and some of the plan's approaches are untested, the extent to which they will result in the intended benefits is unknown; (7) according to HUD's Deputy Secretary, the process changes proposed by the 2020 plan, along with greater reliance on state and local entities and the use of contractors, will allow the agency to operate with 7,500 staff--a staffing target level established prior to the plan; (8) however, proposed staffing levels for each program area are generally not based upon systematic workload analyses to determine needs; (9) while the reform teams were instructed by the Deputy Secretary to determine staffing requirements based upon workload, they were also instructed to work within targeted staffing levels and the Department's staffing constraints; (10) the reform teams relied on a variety of factors, including some workload data, to show whether responsibilities could be carried out within targeted staffing levels; (11) because the downsizing target of 7,500 staff is not based upon a systematic assessment of needs and because proposed legislation could affect those needs, it is uncertain that HUD will have the capacity to carry out its responsibilities once the reforms are in place; (12) an August 1997 agreement between HUD and the American Federation of Government Employees National Council of HUD Locals 222 established the framework for managing personnel changes to implement the 2020 plan; (13) this agreement includes buyouts, reassignments, and an outplacement program for HUD employees and provides that a reduction in force may be used if necessary, but not before 2002; and (14) this agreement also provides for hiring new employees for some positions.
Internet banking is one form of on-line banking; direct-dial PC banking is another. Before Internet banking, customers using direct-dial PC banking needed to use specialized computer software provided and supported by their depository institution. More recently, these direct-dial connections are being replaced by Internet connections over which customers can use their computers and browser software to connect to their depository institution’s Web site. In general, regulators distinguish three types of Internet banking Web sites: purely informational sites, which have information about the depository institution and its products and services but no interactive capability; information-exchange sites, which provide information and allow customers to send information to the depository institution or make inquiries about their accounts; and fully transactional sites, which offer the previously described capabilities as well as some additional services, such as real-time account queries, transfers of funds among accounts, bill payments, or other banking services. Internet banking services are offered by a rapidly growing number of depository institutions. According to recent data, at least 3,610 federally insured depository institutions—about 17 percent of all U.S. banks, savings associations, and credit unions—offered some form of Internet banking service as of February 1999. About 20 percent of these depository institutions offered fully transactional Web sites. Information available from the banking regulators and industry studies suggests that Internet banking is accelerating. According to FDIC and NCUA statistics, in the 11 months ending February 1999, the number of banks, thrifts, and credit unions with transactional sites almost tripled. According to projections reported by the Department of Commerce, the number of customers who went on-line to perform banking transactions increased by 22 percent, from 4.6 million to 5.6 million, in the 6 months ending April 1998. Five federal regulators—FDIC, FRS, NCUA, OCC, and OTS—supervise and examine all federally insured depository institutions. FDIC, a government corporation, is the primary federal regulator of state-chartered banks that are not members of FRS. FRS, an independent agency, shares responsibility with state banking regulators for supervising and examining state-chartered banks that are members of FRS. In addition, FRS supervises bank holding companies and their nonbank subsidiaries. Banks under FRS’ jurisdiction are supervised by 12 regional Reserve Banks, which conduct examinations under authority delegated from the Board of Governors in Washington. NCUA is an independent body responsible for examining and supervising federally insured credit unions and works with state regulators to monitor the safety and soundness of state-chartered credit unions. OCC, a bureau of the Department of the Treasury, supervises all national banks. OTS, which is also a bureau of the Department of the Treasury, serves as the primary regulator for thrifts and thrift holding companies. The regulators oversee a mix of large, medium, and small depository institutions, as shown in table 1. Banking regulators also work together through FFIEC, an interagency forum Congress created in 1979 to promote consistency in the examination and supervision of depository institutions. In 1996, FFIEC updated its “Information Systems Handbook,” which provides regulators with general guidance on information systems and technology examinations.
To help ensure the safety and soundness of federally insured banks, thrifts, and credit unions, banking regulators conduct various types of monitoring activities. They include the following: Off-site monitoring, which generally consists of reviews and analyses of depository institution-submitted data, including call reports, and discussions with bank management, is carried out to monitor compliance with requirements or enforcement actions; formulate supervisory strategies, especially plans for on-site examinations; and identify trends, areas of concern, and accounting questions. On-site safety-and-soundness examinations are conducted to assess the safety and soundness of a depository institution’s practices and operations. Specific objectives of these on-site examinations that are common to all the banking regulators include (1) determining the institution’s condition and the risks associated with its current and planned activities; (2) evaluating the institution’s overall integrity and the effectiveness of its risk management by testing the institution’s practices; and (3) determining the institution’s compliance with laws, regulations, and rulings. Information systems examinations are conducted to identify and correct significant information- and technology-related risk exposures that threaten the depository institution. These examinations focus on various components of an institution’s information system, such as the capabilities of its information technology management; the adequacy of its systems development and programming; and the quality, reliability, availability, and integrity of its information technology operations. Finally, special technical examinations of banking services by third parties are conducted to ensure that banking operations performed by third-party firms are consistent with the safety and soundness of the depository institutions using the services. These examinations, which often include a review of the management systems, operations, and financial condition of the service providers, can provide regulators with greater assurances of the reliability of services than can be obtained during normal safety and soundness examinations of a depository institution. The banking regulators also conduct reviews of on-line banking systems for compliance with consumer protection laws and regulations. These include examinations of an institution’s compliance with its obligation to provide required notices and disclosures for Internet banking products and services. To address our four objectives, we interviewed officials and reviewed available documents from the five banking regulators. This included obtaining information on Internet banking risks and each regulator’s strategy for overseeing Internet banking activities, the methods used to identify depository institutions that offer Internet banking, the existence of safety and soundness and information systems examination procedures for reviewing Internet banking, and the extent of examinations of third-party firms. We did not independently verify the accuracy of data that banking regulators provided. We also interviewed representatives from selected depository institutions and third-party firms to obtain their views on the scope and frequency of examinations by bank regulators and their assessment of risks posed by Internet banking systems.
In addition, we developed a data collection instrument to document our review of 81 safety and soundness and information systems examinations that included on-line banking, and we also used a structured questionnaire to interview 43 selected examiners who had conducted these on-line banking examinations. (See app. I for a more detailed description of our scope and methodology.) We did our work from April 1998 to May 1999 in Washington, D.C.; Los Angeles, CA; San Francisco, CA; Atlanta, GA; Kansas City, KS; and New York, NY, in accordance with generally accepted government auditing standards. We requested comments on a draft of this report from the five banking regulators and FFIEC, and these comments are discussed near the end of this letter and are reprinted in appendixes III through VIII. Internet banking services heighten various types of risks that are of concern to banking regulators, and the regulators have advised institutions to mitigate these risks through the implementation of risk management systems that emphasize, among other things, (1) active board of directors’ oversight, (2) effective internal controls, and (3) comprehensive internal audits. At the time of our review, too few examinations that included a review of Internet banking had been conducted to identify the extent of Internet banking-related problems industrywide. However, our review of 81 such examinations revealed that some depository institutions had not always adhered to risk mitigation guidance provided by the regulators. Few examinations had been conducted because, according to the regulators, Internet banking was a relatively new activity, and examination procedures were still being developed. Other reasons reported by regulators were that the number of examiners with expertise in information systems was limited and that some examiners who might otherwise have examined on-line banking during our study period were diverted by higher priority efforts to address the Year 2000 computer problem. As more examinations are completed, sharing of information among the regulators could help them better understand the extent of risks posed by Internet banking, develop risk characteristics allowing them to target institutions requiring further attention, and help make decisions on how best to allocate information technology expertise among competing priorities. Internet banking heightens various types of traditional banking risks that are of concern to banking regulators. These risks, which are discussed in regulatory guidance provided to depository institutions, include the following: Security risk is the risk of potential unauthorized access to a depository institution’s networks, systems, and databases that could compromise internal systems and customer data and result in financial losses. The use of an electronic channel, such as the Internet, to deliver products and services introduces unique risks for a depository institution due to the speed at which systems operate and the broad access in terms of geography, users, applications, databases, and peripheral systems. Transactional risk is the risk of financial losses arising from problems with service or product delivery. Transactional risk often results from deficiencies in computer system design, implementation, or ongoing maintenance. Strategic risk is the risk to earnings or capital arising from adverse business decisions or adverse implementation of those decisions.
Depository institutions face strategic risk whenever they introduce a new product or service, such as Internet banking. Reputation risk is the risk of significant negative public opinion that results in a critical loss of funding or customers. This risk can also expose the depository institution to costly litigation. Failure of Internet banking products to perform as promised, such as a communication failure that prevents customers from accessing their accounts, could expose a depository institution to reputation risk. Lastly, compliance risk is the risk arising from violations of, or nonconformance with, laws, rules, regulations, required practices, or ethical standards. This risk may arise if a depository institution fails to comply with regulatory guidance or an enforcement action. Banking regulators have provided depository institutions with advisory guidance on how to mitigate risks posed by Internet banking, including risks related to services provided by third-party firms. In their guidance, regulators describe how depository institutions in general should plan for, manage, and monitor risks associated with the use of technology. Most regulators provided such guidance in advisory letters to all covered depository institutions. FRS provided its guidance in a “sound practices paper” released at an FRS information security conference in September 1997. The guidance was not tailored to fit individual institutions. (See app. II for descriptions of the guidance provided by each regulator.) As discussed in this advisory guidance, risk management systems include the following critical components. Active board and senior management oversight: Boards of directors have ultimate responsibility for on-line banking systems, including Internet banking systems, offered by their depository institutions. The guidance points out that the Internet facilitates broad access to confidential or proprietary information, and deficiencies in planning and deployment can significantly increase the risk posed to a depository institution and decrease its ability to respond satisfactorily to problems that arise. For this reason, directors, senior managers, and line officers are to be fully informed of the significant investments, opportunities, and risks involved in deploying such technology. Boards of directors should approve the overall business and technology strategies, and senior management should ensure that adequate risk management systems are in place. Effective internal controls: Internal controls are the means by which the board of directors, management, and other personnel obtain reasonable assurance that an institution’s assets are safeguarded and that its systems and operations are reliable and efficient. Regulators’ guidance describes a variety of internal controls to help mitigate risks in such areas as systems security, management of third-party firms, and various operating policies and procedures that should be considered to keep pace with new technological developments. Adequate internal audits: Regulators’ guidance points out that an objective review of on-line banking should identify and quantify risk and detect possible weaknesses in a depository institution’s risk management system as it pertains to on-line banking. When coupled with a strong risk management program, a comprehensive, ongoing audit program allows the institution to protect its interests as well as those of its customers and other participants.
While examiners found that some depository institutions were not taking all of the prescribed precautions to mitigate risks, too few examinations with documented on-line banking assessments were available at the time of our review to identify the extent of any industrywide Internet banking-related problems. According to the regulators, few examinations had been conducted because Internet banking is a relatively new activity and regulators have had to develop and implement new policies and procedures and related training programs to assess this activity. In addition, regulatory examinations required to address the higher priority Year 2000 computer problem were contemporaneous with our review, and some regulators reported that limited information systems resources prevented them from conducting both Year 2000 and on-line banking examinations. Between March 1998 and August 1998, we asked each regulator to provide us with information on safety and soundness and information systems examinations in which (1) examiners applied their agency’s on-line banking examination procedures written for both direct-dial and Internet banking systems or (2) the examination’s scope included on-line banking. It was difficult for most regulators to provide such information because, with the exception of FDIC, information was not maintained centrally to identify examinations that included on-line banking assessments. We reviewed 81 examinations that regulators were able to provide. The 81 examinations included 58 small-, 18 medium-, and 5 large-sized depository institutions. The Internet banking activities examined by the regulators included informational sites, information-exchange sites, and transactional sites. In the examinations we reviewed, examiners noted that the on-line banking risk mitigation systems had various types of weaknesses. None of the examined depository institutions, including those whose risk management systems evidenced weaknesses, were reported to have experienced financial losses or security breaches due to Internet banking activities. However, in the 81 depository institution examinations we reviewed, regulators found that 36 (44 percent) had not completely implemented the on-line banking risk mitigation steps outlined by the regulator. As summarized in table 2, in 20 of the 81 examinations (25 percent), strategic planning deficiencies were discovered. For example, the regulators found that some institutions had not prepared strategic plans or had not obtained board of directors’ approval before initiating on-line banking. In 26 of the examinations (32 percent), the regulators found that the institution did not have policies and procedures in place to guide its on-line banking operations. In 29 of the examinations (36 percent), the regulators found that the institution lacked adequate audit coverage of its on-line operations. Fifteen examinations (18 percent) disclosed that the institution had not taken steps to evaluate its third-party firm or lacked a written contract with the firm. Examiners whom we interviewed expressed concerns about deficiencies similar to those revealed in the examinations we reviewed. For example, examiners were concerned that some smaller institutions were implementing Internet banking systems before they had established operating policies and procedures and that bank management had to be reminded that operating policies and procedures were not optional.
Because the examinations we reviewed did not represent a statistically valid sample, we are unable to project the number of weaknesses beyond the institutions reviewed. However, the extent of problems identified at smaller institutions is consistent with views expressed by some banking industry officials that smaller institutions have the potential to encounter Internet banking-related problems. These officials generally believed that smaller institutions may have insufficient in-house expertise to operate an Internet banking system or may lack the ability to adequately evaluate the Internet banking services offered by third-party firms to ensure that such systems operate as intended. In particular, NCUA officials observed that smaller institutions might move too quickly into Internet banking because of the relatively low costs of providing such services through third-party firms and the desire to remain competitive. Banking regulators have told us that depository institutions’ increasing use of information technology—such as that employed in Internet banking—and the growth forecast for Internet banking present them with human capital management challenges. The adequacy of regulatory efforts to ensure safe and sound operations of complex transactional Internet banking systems will depend increasingly upon the availability of examiners with appropriate expertise or training in information technology management. During our review, banking regulators expressed concern about their ability to address technological changes in the banking industry with their existing resources. Information about depository institutions’ plans to provide Internet banking services could help ensure that regulators are aware of growth and technological trends in Internet banking. This information could be instrumental in enabling regulators to provide individual depository institutions with more timely and specific risk-management guidance and advice before such institutions enter into contracts with third-party firms or independently develop their own Internet banking services. Awareness of an institution’s Internet banking plans could also provide regulators with useful information to plan the scope and timing of future examinations as well as to identify the need for examiners with the appropriate information technology expertise. OTS recently established a requirement that it receive advance notice of an institution’s plans to establish a transactional Web site. OTS and FDIC were the only regulators that captured Internet banking information gathered during examinations, including information about institutions’ plans to offer Internet banking, in a centralized database that could be used in planning examinations and monitoring Internet banking activities. Other methods used by regulators to identify depository institutions that are already offering Internet banking do not give the regulators the opportunity to evaluate the effectiveness of an institution’s Internet risk mitigation plans or to provide institutions with more timely and specific risk management guidance and advice prior to implementation. OTS regulations, effective January 1999, require thrifts to provide a written notice to OTS before establishing a transactional Web site. The regulations state that the notice must describe the transactional Web site; indicate the date the site will become operational; and list a contact familiar with the deployment, operation, and security of the site.
According to OTS officials, the one-time notification requirement will enable the agency to better monitor technological innovations and thus assess emerging security and compliance risks. OTS officials said they believed that this monitoring would also enable the agency to more proactively provide guidance to thrifts as they plan for or begin to conduct Internet operations. At the time of our review, OTS was beginning to develop procedures for providing such guidance. If, after receiving the notice, OTS informs the thrift of any concerns, the thrift must follow any procedures that OTS imposes. If the thrift does not receive any comments from OTS, it is free to go on-line 30 days from the filing date of its notice with OTS. Before adopting the final rule, OTS recognized that this notice requirement would impose some burden on thrifts. However, it determined that the one-time expenditure by a thrift of an estimated 2 hours to report its plans represented a minimal burden. OTS officials told us that, before January 1999, the effective date of the reporting requirement, OTS identified thrifts’ Internet banking activities primarily during examinations, although some of its regional offices used other means to identify Web sites. For example, the western region periodically had surveyed thrifts, and the Atlanta region used the Internet to identify thrifts’ Web sites. In August 1998, OTS asked for public comment on its advance notice proposal. The agency received nine comments in response—six from thrifts, two from trade associations, and one from a public interest organization. Seven commenters supported the proposal’s overall flexible regulatory approach. Two commenters argued for even greater flexibility and opposed the proposed notification requirement. Four commenters also argued that the notice requirement would place thrifts at a competitive disadvantage because other banking regulators did not impose a similar requirement. OTS’ response was that it did not anticipate that the notification requirement would place thrifts at a significant competitive disadvantage because, once a thrift had addressed any follow-up questions from OTS’ regional office or the 30-day period had expired, the thrift would be free to operate the transactional Web site. Finally, one commenter questioned whether requiring regulatory notice 30 days prior to installing a transactional site would mitigate the risks mentioned by OTS. The commenter noted that developing a system requires substantial advance planning, possibly across multiple departments, and perhaps a contract with an outside third-party firm. Thus, at the time of notice, according to the commenter, the work essentially would be completed, and the financial costs of development already would have been absorbed by the institution. The commenter pointed out that, for this reason, an advance notice after the financial risk had been assumed would not substantially protect the institution. OTS’ response was that it encourages thrifts concerned with such expenditures of resources to consult their regional office in the early stages of development, even before filing a notice. Currently, FDIC and OTS are the only regulators that maintain a centralized database of Internet banking information gathered during banking examinations. In the case of FDIC, if an examiner identifies an institution that plans to offer Internet banking, this information is to be entered into the centralized system along with other on-line banking data collected.
In addition to data on institutions offering or planning to offer Internet banking, this database includes information on third-party firms supplying Internet banking services. According to FDIC officials, information captured in the centralized system facilitates the creation of uniform records of all examined institutions with on-line banking and avoids capturing redundant information across FDIC’s eight regions. They said that the system also provides an improved means—compared with separate regional systems—for headquarters’ staff and examiners to understand how electronic banking is changing and to more effectively plan the scope, timing, and staffing of future examinations. As of April 1, 1999, the FDIC centralized system included information from 391 on-line banking examinations. OTS began collecting information centrally in November 1998. OTS officials told us that their centralized database includes on-line banking information from all examined thrifts. In addition, the database includes the Web site addresses of over 400 thrifts that reported this information on their quarterly filings as well as information gathered as part of OTS’ advance notification requirement. Regulators use a variety of other methods to identify depository institutions that are already offering Internet banking services. All of the regulators said that they gathered information on institutions’ Internet banking services during pre-examination planning activities. The regulators also said that they periodically searched the Internet for Internet banking Web sites. In March 1998, NCUA began requiring credit unions to report their electronic mail addresses and the type of Web site offered on their periodic financial and statistical reports. In addition, at the close of our review, FRS said it was beginning to centrally collect examination and survey information on the types of Internet banking services being offered by its regulated entities (e.g., account balance inquiries, bill payment, and loan applications) as well as the names of third-party firms and software vendors. OCC plans to centrally collect similar information on institutions that are already providing Internet banking services. However, such “after-the-fact” methods do not give the regulators the opportunity to provide individual institutions with more timely and specific risk mitigation guidance and advice before they go on-line, nor do they give regulators the opportunity to evaluate an institution’s risk mitigation plans before its Internet banking services are operational. With the exception of NCUA, the regulators were developing, testing, or implementing on-line banking examination procedures, including procedures for examining Internet banking. NCUA said that it had not established procedures for Internet banking examinations or conducted Internet banking examinations because of the need to conduct Year 2000 reviews. In addition, we found that regulators’ examination programs used differing methods in conducting and staffing Internet banking examinations. For example, because Internet banking is a new and evolving activity, FDIC and OTS required their examiners to thoroughly examine an institution’s Internet banking activities during the first examination after those activities were implemented, while FRS and OCC did not. We also found variations in the level of expertise and training required of examiners who reviewed Internet banking systems.
The regulators have shared information on issues of common concern in the past but have not routinely shared information on Internet banking risks and examination results. As each regulator gains experience in applying its examination methods and procedures, it would be useful for the regulators to share their expertise to help determine which methods and procedures are the most efficient and effective. Each of the regulators had implemented similar examination policies that reflected the regulators’ overall risk-based approach to supervision. These policies required examiners to determine how various existing or emerging issues facing an institution or the banking industry affected the nature and extent of risks at particular institutions. Based on a risk evaluation, examiners are expected to develop supervisory plans and actions that would direct their resources to the issues presenting the greatest risks, especially those that pose material, actual, or potential risks to the banking system. While the banking regulators’ examination policies were in place, their procedures for examining on-line banking activities were in differing stages of development. Generally, FDIC, FRS, OCC, and OTS had already implemented or were testing examination procedures for conducting on-line banking examinations. FDIC and OTS had both issued final examination procedures and were using the procedures to conduct examinations that included Internet banking activities. FDIC was the first to implement an on-line banking examination program in 1997 and had identified more examinations for our review than any other banking regulator. In commenting on a draft of this report, FDIC said that it had also developed three technical work programs that it is field-testing and has shared with the other regulators. In addition, FDIC said that it had increased the number of information systems examiners. OTS was the next regulator to issue final examination procedures. FRS and OCC were still developing their on-line banking examination programs and were field testing their examination procedures at the close of our review. At the time of our review, NCUA had not established procedures for Internet banking examinations or conducted such examinations. The primary reasons for this, according to NCUA officials, were that the agency did not have the necessary expertise to develop Internet banking procedures and that its examination resources were dedicated to examinations geared to averting Year 2000 computer problems. According to NCUA, as work related to the Year 2000 computer problem diminishes, the agency is beginning to focus attention on Internet banking activities. NCUA first began to consider the need for Internet banking examinations in 1997, when it informally distributed a white paper on “cyber credit union services.” This paper was distributed to NCUA examiners who had attended a specific training course and was also provided to each regional director, who had the option of making the paper more widely available to regional staff. NCUA officials told us the agency now expects to develop new Internet examination procedures that will be closely aligned with FFIEC’s guidance on supervisory oversight of information systems, but no time frames have been established for developing or implementing these procedures. In 1998, NCUA filled three new information systems officer positions.
While these individuals have been primarily devoted to the Year 2000 project, agency officials told us that they will begin to develop Internet banking examination procedures and train agency examiners. While the on-line banking examination policies of FDIC, FRS, OCC, and OTS were similar, their approaches to examining an institution’s on-line banking activity varied. For example, because Internet banking is a new banking activity that can potentially introduce new risks to an institution, FDIC and OTS expect their examiners to thoroughly examine an institution’s Internet banking activities during the first examination after those activities are implemented. In contrast, FRS and OCC do not require that an institution’s new Internet banking activity be thoroughly examined. Instead, these regulators permit safety and soundness or information systems examiners to exercise discretion in determining the relative risk and the need for and scope of their examinations of new banking activities, including the establishment of Internet banking services. In this regard, examiners may decide not to devote further resources to examining Internet banking if they determine after an initial assessment that Internet banking is a small segment of an institution’s overall business, posing little risk to the safety and soundness of the institution. We also found differences in the type of examiners used to perform on-line banking examinations. Two regulators, FDIC and FRS, designed their examination procedures mainly to assess the safety and soundness aspects of Internet banking, such as the appropriateness of an institution’s strategic planning, internal controls, and operating policies and procedures. These regulators said that, due to the orientation of the examination procedures, safety and soundness examiners generally conducted examinations that included a review of Internet banking. If, in the judgment of the safety and soundness examiner, a more sophisticated assessment of an institution’s Internet banking activities were needed, more technically proficient information system specialists were to be called in to perform a separate assessment. In contrast, OCC said that information system specialists conducted most of its Internet banking examinations, using procedures that covered more technical aspects of an institution’s Internet banking activities, such as policies addressing passwords, firewalls, encryption, and physical security. OCC requires that most Internet banking examinations be conducted by information system specialists because it believes that the technology-related aspects of Internet banking require examiners with expertise in information systems. OTS also requires the use of information systems examiners for examinations of complex or large institutions. Small or less complex institutions are to be examined by safety and soundness examiners. Regulators also differed in the degree to which their examiners were trained in on-line banking systems. FDIC, FRS, and OTS initiated training programs for their safety and soundness examiners on electronic-banking issues. Topics in the training programs included electronic banking trends and developments, risks and vulnerabilities, and regulatory concerns. At the close of our review, FDIC said that it had trained nearly all of its safety and soundness examiners, and OTS said that it expected to complete training for its safety and soundness examiners by the end of 1999.
FRS officials also said that they expected to complete an initial training program for safety and soundness examiners by the end of 1999. These officials added that additional training would likely be required as Internet banking activities evolve and a greater understanding of the risks is developed. FDIC also had developed a training program that provided more in-depth information systems training to a group of information systems examiners and certain safety and soundness examiners. After the training, these examiners were expected to provide services ranging from verbal consultation for other safety and soundness examiners conducting examinations of an institution’s Internet banking activities to independently performed information system reviews of complex on-line banking systems. OCC planned no on-line banking training of its safety and soundness examiners because on-line banking examinations were performed by information system specialists. Rather than establishing an in-house training program for these specialists, OCC said that it relied solely on external training opportunities, such as seminars and conferences hosted by FFIEC and the Bank Administration Institute. The differing methods and approaches utilized by the regulators were too new for their overall effectiveness to be evaluated. Over time, sharing of information among the regulators on the success of these varying methods and approaches could help them assess the strengths and weaknesses of their individual programs. Joint regulatory examinations of the operations of third-party firms providing depository institutions’ Internet banking support services might increase the economy and efficiency of federal oversight of Internet banking activities. This would be particularly true if regulators could share technical expertise in developing and conducting examinations. In late 1998, the five regulators initiated a joint research project to study Internet banking support services provided by third-party firms. However, the extent to which this interagency group will be able to commit the necessary resources to this effort is unclear. Also, NCUA’s authority to conduct examinations of third-party firms is set to expire on December 31, 2001, and the lack of such authority in the future could limit the effectiveness of oversight of firms providing services to credit unions. According to NCUA, third-party firms providing credit union services are not likely to be included in any joint regulatory examinations because these firms typically provide services only to credit unions, and other regulators thus have little incentive to select these firms for a joint review. Joint interagency examinations of traditional third-party data-processing firms, such as check-processing centers, have tended to focus on large multiregional data-processing providers serving banks and thrifts and supervised by more than one supervisory agency. Regulators determined that it was more effective and efficient to conduct one interagency information systems examination instead of several separate examinations by each regulator. The regulators said that these examinations, for the most part, are conducted by examiners with expertise in information systems. In conducting these examinations, examiners and specialists from the participating regulators are to examine the policies, procedures, and practices of the third-party firm and make suggestions to the firm for improvements, if necessary.
According to one regulator, two of these examinations have also included a partial review of two firms’ Internet banking operations. In late 1998, the banking regulatory agencies that comprise FFIEC initiated a special research project to study third-party firms that provide Internet banking software or services to banks and thrifts. The objectives of the project are to develop an understanding of the products and services offered by such third-party firms, identify risks and supervisory issues, and develop recommendations regarding supervisory oversight. The regulators said that the outputs from the project have not been determined but that they could include background materials to aid bank examiners, internal policy papers, supervisory guidance for institutions, or recommendations for development of examination programs or procedures. They added that the scope of the project and the timetable for its completion are contingent upon available resources, which have been significantly curtailed due to the agencies’ Year 2000 supervision program. As of March 1999, agency staff were gathering information on third-party firms that provided Internet banking services and preparing invitations to selected firms to discuss their services. At this initial stage of the project, regulators said they were not examining the firms but instead obtaining background information. While NCUA has recently begun to participate in the joint agency study of third-party firms, it had not participated in any joint reviews of third-party Internet banking firms or independently conducted any reviews of third-party firms serving credit unions. About 13 firms provide the bulk of these services to credit unions. One of these firms provides services to about 51 percent of the credit unions offering Internet banking. NCUA officials cited the lack of technical expertise as a key reason for their inactivity. Further, NCUA officials said that, on the basis of discussions at a January 1999 FFIEC planning meeting, it appeared unlikely that other regulators would participate with NCUA in joint reviews of third-party firms servicing credit unions. The NCUA officials explained that regulators typically provide staff and resources to a particular joint review when there is a regulatory overlap involving firms that provide services to both banks and thrifts. In the case of third-party firms servicing credit unions, other types of depository institutions have received few, if any, services from these firms. Since 1962, FDIC, FRS, and OCC have had the authority through the Bank Service Company Act to examine the performance of certain services provided by third-party firms that affect the safety and soundness of bank operations. In deliberations prior to enacting the Bank Service Company Act, Congress made it clear that banks could not avoid examinations of banking functions by outsourcing the functions to third-party firms. The legislative history shows that Congress intended that banking regulators be able to examine all bank records and exercise proper supervision over all banking activities, whether performed by bank employees on a bank’s premises or by anyone else on or off those premises. Regulators generally believe that this authority is important because it allows them to take a broader approach to examining the services of banks or thrifts and their providers.
These examinations are not intended to replace a depository institution’s oversight and monitoring of its third-party firms, which remain the responsibility of the depository institution. Instead of examining particular services that a third-party firm provides to a single bank or thrift, regulators can assess the entire broad range of services a third-party firm provides to the banking industry. Most regulators believe that, in addition to being a more direct approach, such examinations may also be more efficient and effective. Over time, the authority to examine third-party firms has become even more important, as depository institutions have contracted out an increasing proportion of their operations. FRS officials noted, however, that such examinations (1) extend bank supervision outside the banking industry, (2) may unnecessarily consume scarce government resources unless effectively risk focused, and (3) may create a moral hazard by undermining the incentive for banks and thrifts to manage their service provider relationships effectively. In March 1998, NCUA and OTS were given authority to examine certain third-party firms through the Examination Parity and Year 2000 Readiness for Financial Institutions Act (the Parity Act). Specifically, the Parity Act gave NCUA and OTS independent authority to examine services provided by service providers to credit unions and thrifts by amending the Federal Credit Union Act and the Homeowners’ Loan Act, respectively. These provisions primarily focus on ongoing computer services and turnkey operations in which transactions are transmitted at the end of the day to a central location. NCUA and OTS are thus authorized to examine data processing, information system management, and the maintenance of computer systems that are used to track everything from day-to-day deposit and loan activity to portfolio management at a depository institution. While NCUA and OTS have the same authority under the Parity Act, the act specifically sunsets NCUA’s authority on December 31, 2001. According to NCUA officials and a review of the legislative history surrounding this action, NCUA’s authority was sunset because the Parity Act focused primarily on Year 2000 computer problems, which for the most part were expected to be resolved by the year 2000. In addition, at the time the Parity Act legislation was being considered, one credit union trade association strenuously objected to strengthening NCUA’s examination authority. As a result, a compromise was reached under which NCUA’s authority would be sunset. Unless Congress amends the sunset provision, NCUA will not have the third-party oversight authority already provided to all other banking regulators. This is of particular concern because NCUA officials said that most credit unions offering Internet banking services lack in-house expertise and rely partly or entirely on third-party firms to provide such services. In its comments on a draft of this report, NCUA stated that the agency plans to ask Congress to amend the Parity Act to provide permanent supervisory authority over service providers. Internet banking is a relatively new and rapidly growing activity that presents various types of risks that are of concern to banking regulators. At the time of our review, too few examinations of Internet banking had been conducted to identify the extent of potential Internet banking-related problems industrywide.
Nonetheless, the examinations we reviewed revealed that some depository institutions had not taken all the necessary precautions to mitigate on-line banking risks. As banking regulators conduct more Internet banking examinations, they could usefully pool and share their findings to establish the extent of such problems industrywide. Sharing information on such findings could provide regulators with information to better understand the risks posed by Internet banking, allow regulators to better monitor industry trends, make more informed decisions on the scope and timing of examinations, and allocate limited resources among competing priorities.

At a time when Internet banking appears to be accelerating rapidly, banking regulators either have or plan to utilize a variety of means to identify depository institutions that are already offering Internet banking services. However, OTS and FDIC were the only regulators with procedures to gather centralized information on depository institutions’ plans to offer Internet banking. OTS required that it receive advance notification of a depository institution’s intentions, and FDIC required its examiners to collect information on an institution’s Internet banking plans for inclusion in a centralized database. Such early identification procedures could enable regulators to provide more timely and specific risk management guidance and advice to depository institutions, and the procedures could also provide the regulators useful information to assess the scope and timing of future examinations and determine the need for examiners with information technology expertise. Given concerns that some institutions, particularly smaller ones, might move too quickly into Internet banking because of a desire to remain competitive, regulatory procedures that provide advance notification could be an effective means for regulators to proactively oversee this new and evolving banking activity.

With the exception of NCUA, the banking regulators were developing, testing, or implementing new on-line banking examination procedures and had conducted at least some examinations of institutions’ Internet banking services. However, regulators’ examination programs used differing methods in conducting and staffing Internet banking examinations. In addition, differences existed in the degree to which examiners received training on how to examine such activities. As each regulator gains experience in the application of its examination procedures, it could be useful for the regulators to share their findings and approaches to help determine which methods yield the most effective and efficient results. In addition, NCUA, which has reported resource constraints due to the Year 2000 computer problem, has an obligation to help ensure the safety and soundness of credit unions’ Internet banking operations and needs a reasonable strategy to do so once work on the Year 2000 computer problem diminishes.

The banking regulators’ joint study of third-party firms providing Internet banking services is a good first step toward providing efficient and effective oversight, because it has the potential to lead to single coordinated examinations. However, it is too early to tell whether the study will result in a proposal to jointly examine third-party firms. Also, NCUA’s authority to examine firms providing Internet banking services expires on December 31, 2001. If this authority is not extended, NCUA will not have the third-party oversight authority provided to other federal banking regulators.
Given the expected growth of Internet banking and its attendant risks, the lack of such authority in the future could limit NCUA’s effectiveness in ensuring the safety and soundness of credit unions’ Internet banking activities.

Congress may wish to consider whether NCUA’s current authority to examine the performance of services provided to credit unions by third-party firms is needed to ensure the safety and soundness of credit unions and, thus, should be extended beyond December 31, 2001.

To help regulators better understand the extent of risks posed by Internet banking and to more effectively evaluate examination methods and procedures, we recommend that, as more experience is gained in conducting examinations of Internet banking services, the heads of the banking regulatory agencies share information on the problems depository institutions have had in operating Internet banking activities as well as on which Internet banking examination methods and procedures they find to be most efficient and effective.

We also recommend that the Comptroller of the Currency and the Chairmen of the Board of Governors of the Federal Reserve System and the National Credit Union Administration establish procedures to obtain centralized information on institutions’ plans to offer Internet banking. They should use this information to (1) enhance monitoring of technological trends and innovations and thus their ability to assess emerging security and compliance issues; (2) provide more timely and specific risk management guidance to individual depository institutions, as necessary; and (3) augment the information used to plan the scope and timing of future examinations as well as to plan for the availability of examiners with appropriate information systems expertise.

To help ensure that reviews of the adequacy of Internet banking services provided by third-party firms are conducted in a cost-efficient manner, we recommend that, on the basis of the results of its research project, the Chairman of FFIEC through the FFIEC Task Force on Supervision develop plans and a timetable for the regulators’ oversight of third-party firms.

To help ensure the safety and soundness of Internet banking at credit unions, we recommend that, as work related to the Year 2000 computer problem diminishes, the Chairman of NCUA expeditiously develop Internet banking examination procedures and begin to examine Internet banking-related activities offered by credit unions.

FDIC, FRS, NCUA, OCC, OTS, and FFIEC provided written comments on a draft of this report, and their comments are reprinted in appendixes III through VIII. We also received written or oral technical comments and suggestions from these agencies that we have incorporated where appropriate. In general, the five regulators and FFIEC concurred with the majority of the report’s findings, conclusions, and recommendations. Three specific comments are discussed more fully below, and other more technical comments are discussed in the appendixes.

In response to our recommendation that it gather more timely information on institutions’ plans to implement Internet banking, FRS commented that it has enhanced its monitoring and information gathering efforts through routine supervisory contacts, on-site examinations, and informal surveys. The agency also said that it was developing more powerful automation tools to aid more generally in examination planning, review, and reporting.
However, FRS did not believe it had seen sufficient evidence on the need for a formal advance notification procedure or preimplementation regulatory reviews for Internet banking, which it said our report appeared to favor. We did not intend to prescribe the specific method(s) for gathering information on depository institutions’ plans to offer Internet banking and have made some changes to clarify this point in our report. The report describes two different methods employed by FDIC and OTS that provide them with useful information on depository institutions’ plans to offer Internet banking. We continue to believe that implementation of one of these methods or an alternative method for obtaining centralized information on depository institutions’ plans is necessary for regulators to (1) enhance monitoring of Internet banking technological trends and innovations and thus their ability to assess emerging security and compliance issues; (2) provide timely and specific risk management guidance to individual depository institutions, as necessary; and (3) augment the information used to plan the scope and timing of future examinations as well as to plan for the availability of examiners with appropriate information systems expertise.

FDIC and OTS also disagreed with an inference in the report that smaller institutions were more likely to encounter Internet banking-related problems. FDIC commented that it had observed numerous examples of small banks successfully employing sophisticated technology and believed that it is up to bank management, regardless of the size of the bank, to properly manage any new technology. OTS similarly commented that it did not believe that it is inherently more difficult for smaller banks to properly manage on-line and Internet banking activities and believed that such technology should not be exclusively the province of large institutions. We did not intend to broadly characterize small banks as being technologically deficient and agree that a bank’s success in managing new technology depends on the strength of its management. Our review of 81 on-line banking examinations showed that examiners found that some small- and medium-sized depository institutions were not taking all of the prescribed precautions to mitigate Internet banking risks. However, the report specifically notes that too few examinations had been conducted to identify the extent of any industrywide Internet banking-related problems.

Finally, FRS concurred with the need for the regulators to develop supervisory plans with respect to outsourcing of Internet banking operations by depository institutions. However, it commented that it was not clear whether we were recommending a change in the current policies and practices regarding interagency examinations of service providers or some other form of regulatory oversight. Further, FRS stated that the report provided no evidence of problems at Internet vendor firms that would indicate the need to expand the regulators’ responsibility to oversee directly all providers of Internet banking products and services, and it suggested that the report emphasize that banks, and not bank supervisors, bear the responsibility for monitoring and overseeing their service providers.
We are encouraged by the banking regulatory agencies’ efforts to conduct a joint research project designed to develop a greater understanding of the oversight issues associated with assessments of Internet banking products and services offered to banks and thrifts by third-party firms. We believe that joint regulatory examinations of the operations of third-party firms providing depository institutions’ Internet banking support services could increase the economy and efficiency of federal oversight of Internet banking activities. In this regard, our recommendation is intended to ensure that an interagency strategy, instead of individual agency strategies, is developed to examine those third-party firms. We also agree with FRS that banks, and not banking supervisors, are responsible for overseeing their service providers and have added language to the report to emphasize the responsibilities of the depository institutions. However, that does not negate the need for bank regulatory agencies to exercise proper supervision over Internet banking activities, whether performed by bank employees on the bank’s premises or by a third-party firm off the bank’s premises.

As arranged with your office, unless you announce the contents of this report earlier, we plan no further distribution until 30 days after the date of this letter. At that time, we will provide copies of this report to Representative John J. LaFalce, Ranking Minority Member of the House Committee on Banking and Financial Services; the Honorable John D. Hawke, Jr., Comptroller of the Currency; the Honorable Alan Greenspan, Chairman, Board of Governors of the Federal Reserve System; the Honorable Donna A. Tanoue, Chairman, Federal Deposit Insurance Corporation; the Honorable Norman E. D’Amours, Chairman, National Credit Union Administration; the Honorable Ellen S. Seidman, Director, Office of Thrift Supervision; the Honorable Laurence H. Meyer, Chairman, Federal Financial Institutions Examination Council; and other interested parties. We will also make copies available to others on request.

This report was prepared under the direction of Richard J. Hillman, Associate Director, Financial Institutions and Markets Issues, who may be reached on (202) 512-8678 if you or your office has any questions. Key contributors to this assignment are listed in appendix IX.

Our objectives were to (1) describe risks posed by Internet banking and any identified industrywide Internet banking-related problems, (2) assess the methods used by regulators to track depository institutions’ plans to provide Internet banking services, (3) determine how regulators examined Internet banking activities, and (4) determine the extent to which regulators examined firms providing Internet banking support services to depository institutions.

To identify the risks posed by Internet banking, we interviewed officials from the Federal Deposit Insurance Corporation (FDIC), Federal Reserve System (FRS), Office of the Comptroller of the Currency (OCC), Office of Thrift Supervision (OTS), and National Credit Union Administration (NCUA). We also obtained and reviewed agency documents, including advisory guidance provided to the industry and examiners on risks posed by Internet banking. We also interviewed 8 representatives from selected small-, medium-, and large-sized depository institutions and 11 representatives from related third-party firms to obtain their views on the scope and frequency of examinations and their assessment of risks posed by Internet banking.
We selected these depository institutions based on their size and also on the probability that they would offer Internet banking. We identified the third-party firms from the examinations of Internet banking that we reviewed.

To determine the methods regulators used to identify depository institutions’ plans to offer Internet banking services and to track growth and technological trends in Internet banking, we reviewed the five agencies’ off-site monitoring procedures and interviewed their officials about the requirements each places on the institutions to provide Internet banking information. We also discussed with FDIC officials both their database on banks and thrifts with transactional Web sites and their Electronic Banking Data Entry System. In addition, we reviewed OTS’ recently established requirement on advance notice of a thrift’s plans to implement a transactional Web site.

To understand the regulators’ safety and soundness and information systems on-line banking examination programs, which included Internet banking, we reviewed the on-line banking examination policies and procedures from each agency. In addition, we contacted the banking regulators to obtain their safety and soundness and information systems examination reports and workpapers pertaining to on-line banking. Since not all regulators track examinations of on-line banking operations, we could not ascertain how many on-line banking examinations had been conducted. FDIC was the only regulator that was able to tell us the number of on-line banking examinations it completed during the period of our review. At the time of our review, FRS did not centrally track the on-line banking examinations conducted by the various Federal Reserve districts. As such, FRS officials directed us to the Reserve Banks, which maintain examination workpapers and are responsible for scheduling and conducting examinations. We discussed with the San Francisco District Bank staff their on-line banking procedures and related examiner training and obtained copies of examination workpapers. We then contacted the New York District Bank, which was field testing the on-line banking procedures. To review additional examinations, we contacted the Atlanta and Kansas City District Banks. OCC was not able to provide the number of on-line banking examinations conducted by its district offices. To obtain this information, we obtained OCC’s listing of national banks with electronic activities and compared the names of the banks on this listing to a list of information system examinations conducted by OCC examiners during our review period. For those banks that appeared on both lists, we then requested a Profile Extract Report for each bank to determine the scope of examination activities. This method resulted in our identifying eight examinations with a scope that included Internet banking. Initially, OTS was also not able to tell us with certainty the number of on-line banking safety and soundness and information systems examinations conducted by its regional offices. To obtain this information, OTS contacted each regional office because each maintains its own records and determines its own examination schedule. We were able to identify 81 on-line banking safety and soundness and information systems examinations conducted during the period June 1997 to August 1998. These examinations consisted of 62 FDIC examinations, 6 FRS examinations, 8 OCC examinations, and 5 OTS examinations.
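Expressed as code, the list-matching step described above amounts to a simple set intersection. The following minimal sketch in Python is illustrative only; the bank names are invented placeholders rather than actual OCC listings, and the Profile Extract Report step is represented only by a comment:

# Illustrative sketch of the list-matching step: find banks that appear
# both on a listing of institutions with electronic activities and on a
# listing of completed information systems examinations. The bank names
# are hypothetical placeholders, not OCC data.

electronic_activity_banks = {
    "First Example National Bank",
    "Second Example Bank, N.A.",
    "Third Example Bank",
}
information_system_exams = {
    "Second Example Bank, N.A.",
    "Third Example Bank",
    "Fourth Example Bank",
}

# Banks appearing on both lists become candidates whose examination scope
# is then checked individually (in our review, via a Profile Extract
# Report for each matched bank).
candidates = sorted(electronic_activity_banks & information_system_exams)
for bank in candidates:
    print(bank)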
We reviewed available on-line banking examinations using a data collection instrument that allowed us to collect information on the extent and scope of Internet banking examinations and any exceptions noted in the workpapers. We then compiled this information in a database, determined the nature of the exceptions, and grouped them by type. Because the examination sample size was small, it was not possible to determine the adequacy of examination procedures, nor could we make any statistical generalizations regarding the safety and security of on-line banking operations.

To determine the extent to which regulators examined third-party firms that provided Internet banking services to depository institutions, we interviewed regulatory officials and examiners involved with the examinations we reviewed, as well as representatives of 11 selected third-party firms. In particular, we gathered information on the authority regulators have to examine these third-party firms and the nature and extent of joint interagency examinations of traditional third-party data processing firms. With the assistance of our Office of the General Counsel, we researched the Bank Service Company Act and the Examination Parity and Year 2000 Readiness for Financial Institutions Act to determine the regulators’ authority to examine and regulate third-party firms that provide Internet banking services.

Our early work on this assignment focused on PC banking, which included both direct-dial computer banking systems and Internet computer banking systems. As our work progressed, it became evident that institutions were moving from proprietary direct-dial to Internet banking and that many institutions initiating on-line banking were offering access via the Internet. We did our work from April 1998 to May 1999 in Washington, D.C.; San Francisco, CA; Los Angeles, CA; Atlanta, GA; Kansas City, KS; and New York, NY, in accordance with generally accepted government auditing standards.

Banking regulators have issued guidance to depository institutions on on-line banking. The guidance advises depository institutions that, before implementing on-line banking, including Internet banking, management should exercise due diligence and develop comprehensive plans to identify, assess, and mitigate potential risks and establish prudent controls. Most regulators have also issued policies and procedures to examiners. Table II.1 lists the guidance and policies and procedures published by the regulators.

The following are GAO’s comments on the Federal Deposit Insurance Corporation’s letter dated June 1, 1999.

1. FDIC said that it understood the scope of our review to include both PC direct-dial and Internet banking. It suggested that the evolution of the report’s scope be explained in more detail in the background section. We further discuss in appendix I why this report focused on Internet banking instead of reporting on PC banking, which also includes direct dial-up computer banking systems.

2. FDIC stated that it has taken several additional steps to address the challenges facing Internet banking supervision, including developing new procedures, increasing the number of information systems examiners, and expanding agency training. A reference to these efforts, which occurred after the completion of our fieldwork, has been added to this report.

3.
FDIC requested that the report attribute to the specific regulator the statement that examinations of third-party service providers may be unnecessary and may create “moral hazard.” FDIC said that it did not agree with the statement because it raised questions about the need for examinations of third-party providers. While we believe that regulatory oversight of banking activities outsourced to third-party firms is essential, we also believe the referred-to statement reflects a useful observation—that depository institutions still have the basic responsibility to oversee their third-party firms. In the report, we have attributed the statement to FRS officials.

The following are GAO’s comments on the Board of Governors of the Federal Reserve System’s letter dated June 11, 1999.

1. FRS agreed with our recommendation on sharing of experience and expertise and added that FFIEC member agencies have traditionally developed coordinated procedures and guidance in the information technology area. While our recommendation did not specifically address the mechanism to be used to share experience and expertise, we agree with FRS’ suggestion that having FFIEC member agencies develop coordinated examination procedures and guidance would be one way to do this. Such interagency coordination could not only develop a more effective and efficient oversight program but also provide common guidance to the industry.

The following are GAO’s comments on NCUA’s letter dated June 3, 1999.

1. NCUA commented that the draft of this report did not recognize the agency’s on-line banking training in 1997 and 1999. The draft report did mention NCUA’s 1997 training. We have added language to this report to recognize NCUA’s planned training in 1999.

2. NCUA commented that the draft of this report did not recognize its development of a draft Electronic Financial Services Questionnaire. We did not specifically mention the questionnaire because it was included in the white paper on “cyber credit union services” that was mentioned in the draft report.

3. NCUA commented that the draft of this report did not recognize its creation of three information systems officer positions. We have added a discussion of these positions to this report.

4. While stating that the agency did not have formalized examination procedures specifically tailored to Internet banking, NCUA commented that the report should recognize that examiners did review Internet banking processes when they became aware of a credit union’s Internet banking program. In the report we state that each of the regulators had policies requiring examiners to determine how various existing or emerging issues facing an institution or the banking industry affected the nature and extent of risks at particular institutions. Since NCUA lacked Internet examination policies and procedures and its examiners lacked training in Internet risks and mitigation controls, we do not believe that NCUA’s approach adequately addresses the Internet banking risks facing credit unions.

5. NCUA commented that the draft of this report should be expanded to recognize its work with state regulators. We have made this change.

6. NCUA commented that the report seems to imply that guidance initiated to date by regulators is missing the mark. We did not intend to imply this. To the contrary, as NCUA said, regulatory guidance to the entire industry on risks posed by Internet banking is a necessary first step.
However, as noted in a later section of the report, we encourage regulators to take the next step, which is to work with individual institutions that examiners find are not sufficiently prepared to mitigate risks posed by Internet banking.

The following are GAO’s comments on the Office of the Comptroller of the Currency’s letter dated June 3, 1999.

1. While stating that the agency did not collect information centrally for banks planning to offer Internet banking or require advance notification, OCC commented that it does conduct a quarterly review of a bank’s risk profile, which would include significant changes in bank products or services. According to OCC’s guidance to examiners, examiners are to assess the overall condition and risk profile of the bank, but they need not answer or complete optional steps. Assessing changes in technology, such as Internet banking, is an optional step in the guidance. OCC’s efforts to use other methods to collect information on a bank’s Internet banking plans will enhance information gathered during its quarterly reviews and achieve the intent of our recommendation.

The following are GAO’s comments on OTS’ letter dated June 3, 1999.

1. OTS commented that the draft of this report did not include information on its Web site reporting requirement and the agency’s national database. We added language to this report discussing both points.

2. OTS commented that the draft of this report did not discuss compliance examinations that are conducted to assess an institution’s compliance with consumer protection laws and regulations. We have added to this report a discussion of compliance examinations.

3. OTS referred to a section of the report that discusses after-the-fact methods used by other regulators to obtain information that OTS gathers through its advance notice requirement. OTS commented that it was proactively supervising thrifts as evidenced by its thrift notice requirement. We agree and believe that the report clearly reflects that.

4. OTS commented that the draft of this report suggested that the agency only examined Internet banking activities through its safety and soundness examination program. We added language to this report discussing compliance examinations. We also have added language to clarify that we are referring to safety and soundness and information systems examinations.

In addition to those named above, Abiud Amaro, Bruce Engle, Robert Pollard, Nolani Traylor, and Karen Tremba made key contributions to this report.
Pursuant to a congressional request, GAO reviewed federal oversight of depository institutions' Internet banking activities, focusing on: (1) the risks posed by Internet banking and the extent of any industrywide Internet banking-related problems; (2) the methods used by regulators to track depository institutions' plans to provide Internet banking services; (3) how regulators examined Internet banking activities; and (4) the extent to which regulators examined firms providing Internet banking support services to depository institutions. GAO noted that: (1) Internet banking heightens various types of traditional banking risks of concern to regulators, including strategic, compliance, security, reputation, and transactional risks; (2) as provided in regulatory guidance to banks, savings and loan associations, and credit unions, these risks should be managed through implementation of risk management systems that emphasize active board and senior management oversight, effective internal controls, and comprehensive and ongoing internal audit programs; (3) examinations of Internet banking that GAO reviewed found that some depository institutions were not taking all the necessary precautions to mitigate Internet banking risks; (4) while deficiencies were found, none of these examinations reported any financial losses or security breaches; (5) during GAO's review, too few examinations had been completed to identify the extent of any industrywide Internet banking-related problems; (6) regulators use a variety of methods to identify depository institutions that are already offering Internet banking services; however, only two regulators had systematically obtained centralized information on depository institutions' plans to provide such services and had a database of this information at the time of GAO's review; (7) the Office of Thrift Supervision recently established a requirement that depository institutions: (a) notify it in advance of plans to establish a transactional Web site; and (b) report their Web site address in quarterly Thrift Financial Report filings; (8) the Federal Deposit Insurance Corporation developed a centralized database that contains information on a depository institution's plans to provide Internet banking services; (9) most regulators were developing, testing, or implementing new on-line banking examination procedures, which included procedures for examinations of Internet banking, and most had conducted at least some examinations of depository institutions' Internet banking operations; (10) the Federal Reserve System (FRS) and the Office of the Comptroller of the Currency do not require that an institution's new Internet banking activity be thoroughly examined; (11) the National Credit Union Administration (NCUA) was the only regulator that had not developed requirements and procedures for Internet banking examinations; and (12) each regulator has the authority to examine depository institutions' banking services provided by a third party, and, to avoid duplication of effort, regulators often cooperate in examining third-party firms.
IRS’ telephone assistors are located at 25 call sites around the country. In the 1999 filing season, IRS made major changes to its telephone customer service program. For example, IRS extended its hours of service to 24 hours a day, 7 days a week. IRS officials said they believed around-the-clock assistance would improve the level of service by distributing demand more evenly and support IRS’ efforts to provide world-class service by making assistance available anytime. Also in 1999, IRS began managing its telephone operations centrally at the Customer Service Operations Center in Atlanta by using new call-routing technology. IRS’ call router was designed to improve the overall level of service, as well as lessen disparities in the level of service across sites, by sending each call to the first available assistor nationwide who had the necessary skills to answer the taxpayer’s question. As part of this centralized management, IRS developed its first national call schedule that projected the volume of calls, for each half-hour, at each of IRS’ 25 call sites, and the staff resources necessary to handle that volume.

As in previous years, in the 2000 filing season, IRS had three toll-free telephone numbers taxpayers could call with questions about tax law, taxpayer accounts, and refunds. The three primary measures IRS used to evaluate its telephone performance were level of service, tax law accuracy, and account accuracy. IRS measures its level of service by determining the rate at which taxpayers who call IRS actually get through and receive assistance. Level of service is calculated by dividing the number of calls answered by the total call attempts. Calls answered is defined as calls that received service, either from assistors or telephone interactive applications. Total call attempts includes repeat calls and is the sum of calls answered, calls abandoned by the caller before receiving assistance, and calls that received a busy signal.

IRS’ tax law accuracy and account accuracy rates are based on a sample of nationwide calls that quality assurance staff listen in on and score for accuracy. Using IRS’ Centralized Quality Review System, staff in Philadelphia listen to sample calls from beginning to end and determine whether the assistors provide accurate answers, follow procedural guidance to ensure a complete response, and are courteous to the taxpayers. If the assistors fail to adhere to any part of the guidance, or are not courteous to the taxpayers, the calls are counted as inaccurate. IRS began centrally monitoring calls to measure tax law accuracy in fiscal year 1999 and account accuracy in fiscal year 2000.

To address our objectives, we examined documents and interviewed IRS officials.
Specifically:

- To assess IRS’ performance in the three main telephone assistance toll-free numbers, we compared its 2000 filing season level of service, tax law accuracy, and account accuracy with its performance in the 1998 and 1999 filing seasons and its performance targets, and we discussed with IRS officials how its performance compared with world-class customer service.

- To identify the key factors and describe how they affected performance in the 1999 and 2000 filing seasons, we interviewed IRS officials, including executives, division chiefs, and first-line supervisors in Customer Service Field Operations and at call sites, and analyzed documents, including various reports that described and analyzed the factors that affected IRS’ performance.

- To assess IRS’ process for analyzing its performance in the 1999 and 2000 filing seasons in order to make improvements, we interviewed IRS officials, including National Office and Customer Service Field Operations officials responsible for collecting and analyzing data on IRS performance, and analyzed documents, including various reports related to the process, such as the 1999 National Office business review and statistical analyses of 2000 filing season performance.

- To determine the basis for restricting supervisors from using productivity data to evaluate or discuss telephone assistor performance, we interviewed IRS officials, including officials in the Organizational Performance Division and Customer Service Field Operations, and analyzed documents related to the restriction, including the Internal Revenue Manual and materials used to train supervisors on the use of statistics.

We performed our work at IRS’ National Office in Washington, D.C.; the Office of the Chief, Customer Service Field Operations, and the Customer Service Operations Center in Atlanta; and the telephone assistance call sites in Atlanta, Dallas, and Kansas City, KS. We chose these three sites in order to include sites of various sizes, hours of operation, and types of work. We did not independently assess the accuracy of IRS’ performance data; however, we verified that IRS had procedures in place intended to ensure data reliability. We did our work from January 2000 through February 2001 in accordance with generally accepted government auditing standards.

We obtained written comments on a draft of this report from the Commissioner of Internal Revenue in a letter dated April 2, 2001. The comments are discussed at the end of this report and reprinted in appendix I.

IRS telephone assistance showed mixed results in the 2000 filing season. Performance improved somewhat in the 2000 filing season as compared with 1999 but, according to IRS officials, fell short of IRS’ long-term goal to provide world-class customer service. While IRS had not established specific measures and goals for world-class service, it was considering adopting some of those used by leading telephone customer service organizations. In the 2000 filing season, IRS answered 36.1 million of the 61 million calls taxpayers made, resulting in a 59-percent level of service—better than the 50 percent IRS achieved in the 1999 filing season and its target of 58 percent, but short of the 69 percent IRS achieved in the 1998 filing season. IRS provided accurate responses in 73 percent of the tax law calls it answered—unchanged from 1999 and lower than its 2000 target of 80 percent. Account accuracy in the 2000 filing season was slightly lower than IRS’ target of 63 percent.
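Because level of service is a simple ratio, the published figure can be checked directly. A minimal sketch in Python, using the 2000 filing season totals cited above:

# Level of service = calls answered / total call attempts, where total
# attempts include repeat calls, abandoned calls, and busy signals.
# Figures are the 2000 filing season totals cited in the text, in millions.
calls_answered = 36.1
total_call_attempts = 61.0

level_of_service = calls_answered / total_call_attempts
print(f"Level of service: {level_of_service:.0%}")  # prints about 59 percent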
Table 1 shows IRS’ performance during the 1998-2000 filing seasons.

IRS officials in National Office and Customer Service Field Operations recognized that telephone performance in the 2000 filing season fell short of IRS’ long-term goal of providing world-class customer service, that is, assistance comparable to that provided by leading public and private telephone customer service organizations. IRS has not defined world-class service in terms of specific measures and goals. However, IRS officials have acknowledged the need to change their performance measures to be more consistent with leading telephone customer service organizations. IRS’ level of service measures the percentage of call attempts that receive assistance, with no consideration of how long callers wait for it. Some leading organizations measure service level as the percentage of calls answered within a specified period of time, such as answering 90 percent of calls within 30 seconds. IRS was considering adopting a similar measure and goal. However, IRS’ performance in fiscal year 2000 fell substantially short of this level, with only 31 percent of calls being answered within 30 seconds.

A number of interrelated factors influenced IRS’ telephone assistance performance in the 2000 filing season. According to IRS, some of the key factors were the demand for assistance, staffing levels, assistor productivity, assistor skills, and IRS’ guidance for assistors. Additionally, many of the factors were interrelated—changes in one factor could cause changes in others.

According to an analysis by Customer Service Field Operations officials, IRS was able to answer a greater percentage of calls in the 2000 filing season compared with 1999 because demand for service substantially decreased. IRS measured demand in two ways: total call attempts and unique telephone number attempts. Total call attempts includes repeat calls and is the sum of calls answered, calls abandoned by the caller before receiving assistance, and calls that received a busy signal. The unique telephone number measure is designed to count the number of taxpayers who called, rather than the number of calls. It measures the number of calls from identifiable telephone numbers and counts all call attempts from each telephone number as one call until it reaches IRS and is served, or until a 1-week window expires. Total call attempts decreased from 83.5 million in 1999 to 62.8 million, a 25-percent decrease, while unique number attempts decreased from 33.2 million to 25.9 million, a 22-percent decrease.

According to IRS, demand declined partly because IRS issued 1.8 million fewer notices to taxpayers asking them to call IRS about such issues as math errors IRS detected while processing returns. Also, fewer taxpayers called about the status of their refunds because IRS processed returns more quickly. Additionally, the timing of notices IRS sends taxpayers influences demand for assistance. For example, as we previously reported, in the 2000 filing season, because of contract delays, a contractor mailed the bulk of over 1 million notices to taxpayers over a 2-week period, rather than over a 7-week period as intended. When taxpayers called about the notices, IRS was unprepared to answer the unexpected increase in the number of telephone calls, which caused level of service to decline during this period.
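The unique telephone number measure described above amounts to a simple windowing rule: repeated attempts from one number collapse into a single counted call until the caller is served or a 1-week window expires. The sketch below is illustrative only; the call records are invented, and IRS’ production system may define the window and served status differently:

# Count unique calls: attempts from the same number are one call until
# the caller is served or a one-week window expires. Records are
# hypothetical, not IRS data.
from datetime import datetime, timedelta

WINDOW = timedelta(weeks=1)

# (phone_number, attempt_time, served), in chronological order
attempts = [
    ("555-0101", datetime(2000, 2, 1, 9, 0), False),   # busy signal
    ("555-0101", datetime(2000, 2, 1, 9, 5), True),    # gets through
    ("555-0199", datetime(2000, 2, 1, 10, 0), False),  # abandons the call
    ("555-0101", datetime(2000, 2, 10, 9, 0), False),  # starts a new episode
]

open_episodes = {}  # phone number -> start time of the current episode
unique_calls = 0
for number, when, served in attempts:
    start = open_episodes.get(number)
    if start is None or when - start > WINDOW:
        unique_calls += 1             # first attempt of a new episode
        open_episodes[number] = when
    if served:
        open_episodes.pop(number, None)  # being served closes the episode

print(unique_calls)  # 3 unique calls from 4 attempts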
According to IRS officials, a factor that may have prevented the level of service from being higher in the 2000 filing season was IRS’ decision to reduce the staff dedicated to telephone assistance as compared with 1999. Specifically, in the 2000 filing season, IRS dedicated 4,912 staff years to telephone assistance as compared to 5,339 staff years in 1999, an 8-percent decline. According to IRS officials, IRS dedicated fewer resources to telephone assistance to increase staffing in other programs, including the telephone collection system, adjustments, and service center compliance. IRS managers were concerned that in 1999, when IRS redirected resources from these other programs to telephone assistance, the backlog in these programs increased to unacceptable levels, causing uneven service and a decline in collection revenues.

Assistor productivity is another factor that affects the level of service taxpayers receive from IRS. According to IRS officials, the level of service would have been higher had assistor productivity not declined in the 2000 filing season. This decline was in addition to a productivity decline that occurred in the 1999 filing season. According to analysts and officials in Customer Service Field Operations, a key indicator of productivity is the average time for an assistor to handle a call. Handle time is the total of the time an assistor spends talking to the taxpayer, the time the taxpayer is on hold, and the time the assistor spends in “wrap status,” which is the time between hanging up at the end of a call and indicating readiness to receive another call. An IRS analysis showed that the average handle time increased from 318.5 seconds in the 1999 filing season to 371.5 seconds in the 2000 filing season, or about a 17-percent decline in productivity. (At those handle times, an assistor could complete roughly 11.3 calls per hour in 1999 but only about 9.7 in 2000.) According to a Treasury Inspector General for Tax Administration report, an increase in the number of calls an assistor handles has a profound effect on level of service. For example, if assistors had handled one more call per hour, IRS would have answered more than 8.5 million additional calls during the first 6 months of fiscal year 1999.

While IRS had not determined all the causes of the decline in productivity since 1998, according to a July 2000 IRS study, approximately 58 percent of the productivity decline from 1999 to 2000 was due to assistors’ receiving a greater percentage of calls that took longer to handle. For example, screening calls, in which the assistor talked with the taxpayer for only a short time to determine the taxpayer’s question and where the call should be routed, decreased from 35 percent of the calls assistors handled in 1999 to 21 percent in 2000. The study concluded that assistors likely handled fewer of these calls because IRS changed its telephone message to discourage callers from posing as rotary dialers without a touch-tone telephone, a tactic that allowed them to bypass the menu system and go directly to an assistor. The study did not identify the causes of the remaining 42 percent of the productivity decline in 2000.

According to IRS officials, four policy changes that lowered productivity in the 1999 filing season continued to adversely affect productivity in the 2000 filing season.
Specifically, in 1999, IRS

- discontinued automatically routing another call to an assistor immediately upon completion of a call;

- increased restrictions on using productivity data when evaluating assistors’ performance;

- disproportionately diverted staff from the peak demand shifts to shifts when fewer taxpayers call when it implemented its 24-hour-a-day, 7-day-a-week assistance; and

- discontinued measuring productivity of individual call sites.

First, as part of its November 1998 agreement with the National Treasury Employees Union, IRS discontinued using a call management tool—“auto-available”—that automatically routed another telephone call to an assistor as soon as a call was completed. Instead, assistors were placed in “wrap status” after each call and were unavailable until they pressed a keyboard button that made them available. Wrap status was designed to allow assistors time to document the results of a call or to allow them to take a momentary break after a stressful call. According to IRS officials, allowing assistors to determine when they were ready to take another call added time to each call, causing other callers to wait longer for service. With longer wait times, many taxpayers hung up before reaching an assistor, thereby reducing level of service. According to IRS statistics, for its tax law, account, and refund assistance lines, the average wrap times increased 94, 204, and 176 percent, respectively, from 1998 to 1999.

Second, 1999 was the first filing season with increased restrictions on supervisors using productivity data to evaluate or discuss assistors’ performance. Some IRS studies of the 1999 filing season concluded that the restrictions negatively affected productivity. For example, one IRS study found that many site managers were concerned about their inability to properly manage assistors’ use of wrap time without using productivity data. Five of the seven supervisors we spoke to about the 2000 filing season said they were dissatisfied with the restrictions. They said assistors know supervisors are restricted from using productivity data to evaluate employees’ performance and that supervisors do not have adequate time to devote to monitoring and physical observation. Therefore, they said, assistors are free to spend more time than necessary in wrap status. Our conversations with IRS officials, including supervisors at call sites and officials in the Organizational Performance Division, and review of related documents indicated officials were uncertain about the basis for the restriction, and some thought that it was mandated by the IRS Restructuring and Reform Act. We discuss this issue near the end of this report.

Third, increasing the hours of telephone assistance to 24 hours a day, 7 days a week for the 1999 filing season may have decreased overall productivity because IRS disproportionately shifted staffing away from the hours when most taxpayers call. According to an IRS review, the diversion of staff away from hours when most taxpayers called resulted in a lower level of service because taxpayers waited longer for assistance, more taxpayers hung up while waiting, and demand increased because taxpayers redialed more. Limited data from a week in the 2000 filing season indicated that IRS continued to overstaff the night shift when compared to the other shifts.
For example, for the week of April 2, 2000, through April 8, 2000, assistors working the night shift spent, on average, 44 percent of their time waiting to receive a call, whereas assistors working the day and evening shifts spent 15 percent of their time waiting to receive a call. An IRS Customer Service Field Operations official responsible for scheduling staff said assistors spent more time waiting for calls at night because, when compared with the demand for assistance, IRS scheduled disproportionately more assistors during the night shift than other shifts. Assistors working nights generally had fewer skills, which required a disproportionate level of staffing to ensure that all needed skills were available. According to the official, IRS’ attempts to attract more skilled assistors to work off-peak hours were unsuccessful. To counter the negative effects of staffing the extended hours, for fiscal year 2000, IRS limited its staffing of tax law assistance to 16 hours a day, 6 days a week after the filing deadline, when fewer taxpayers call with tax law questions.

Fourth, beginning in 1999, IRS no longer had a performance measure that held sites accountable for productivity. Instead of measuring level of service as it had in the past, IRS measured a site’s performance on the number of assistors assigned to answer telephone calls each half-hour as compared to the number of assistors specified in the site’s half-hour work schedule. IRS made this change, in part, because the sites were no longer responsible for predicting and meeting demand. According to an IRS assessment of the 1999 filing season, replacing the site level of service measure with the measure of assistor presence diminished the focus on productivity and the extent to which sites sought opportunities to improve productivity. IRS Customer Service Field Operations officials added that, despite the decline in productivity, taxpayers might have received better service overall if assistors took the time needed to fully resolve each taxpayer’s call, rather than being concerned about the number of calls answered. However, IRS had not determined if the decline in productivity had improved the quality of service.

According to IRS officials, including the Commissioner, Customer Service Field Operations officials, and supervisors at call sites, the accuracy rates IRS achieved in the 2000 filing season continued to be adversely affected by assistor skill gaps—the difference between the skills assistors had and the skills needed by IRS. Skill gaps were caused, in part, when IRS implemented its new call router in 1999. With the call router, individual assistors were required to answer calls on a broader range of topics, often without adequate training or experience. Before the 1999 filing season, each call site decided how it would group topics for routing and assistor specialization. According to a cognizant official, the number of topic groups at sites ranged from 40 to 125, which typically allowed assistors to specialize in only one or two topics. Because the new call router could not handle differences in topic groups among call sites, nor efficiently route calls to that many groups, the topic groups had to be standardized and were reduced to 31. This increased the number of topics in each group, which typically required an assistor to answer calls on five or more tax law topics, creating a skill gap.
IRS officials recognized that assistors had struggled with the amount of information they were required to know in 1999, so for the 2000 filing season IRS increased the number of topic groups to 46, which decreased the number of topics in each group. However, according to IRS officials, the loss of specialization continued to affect accuracy in the 2000 filing season. IRS officials said they were aware of how skill gaps had negatively affected the accuracy of the assistance taxpayers received in 1999 and, in August 1999, IRS began to revise its training materials to better prepare assistors to answer questions in their assigned topic groups. However, according to IRS officials, much of the new training material was not developed in time for the 2000 filing season. Furthermore, a cognizant IRS official said the first attempt to revise the training did not separate each topic into a self-contained course. For the 2001 filing season, IRS revised its training material so that each course contained only one topic, enabling IRS to provide assistors with just-in-time training on the specific topics they were assigned to work.

IRS officials said organizational changes are needed to further reduce the number of topics assistors are expected to know. In a May 2000 memo, the Commissioner cited low accuracy scores and employee survey comments as evidence that IRS was expecting its assistors and managers to have knowledge in areas that are far too broad and that IRS was “attempting the impossible” by trying to fill skill gaps solely with training. IRS officials said IRS’ reorganization would allow specialization by taxpayer group, but that even greater levels of specialization were needed. Accordingly, as part of its restructuring efforts, in June 2000, IRS began long-term planning efforts to create greater specialization at both the call site and assistor levels.

The quality of the guidance assistors used also affected whether they provided accurate assistance. IRS officials at National Office and call sites said the guidance assistors used in the 2000 filing season to respond to account questions was confusing and difficult to use, causing assistors to make mistakes, thereby lowering the accuracy rate. IRS officials said that over the years, the Internal Revenue Manual—the assistors’ guide for account questions—had grown from a collection of handbooks to a large, unwieldy document with duplicative and erroneous information. According to IRS officials, errors in the Manual had long been a problem for which sites had developed local “workaround” procedures. IRS established a task force to correct these problems, and issued a new draft version at the end of the 1999 filing season. While the draft Manual was smaller and contained less duplicative and erroneous information, it was missing some needed information and cross-references. However, IRS did not realize the extent of the problems with the Manual until October 1999, when it began holding assistors accountable for strictly adhering to the Manual as part of its central monitoring of account accuracy. As a result, the draft was recalled, and the task force continued to make corrections to the Manual throughout the filing season. The task force issued two new versions in February 2000 and May 2000. According to IRS officials, the frequent changes in the Manual made it difficult for assistors to know which version to use, sometimes leading to inaccurate answers.
According to IRS officials responsible for Manual revision, as of October 1, 2000, the task force had corrected problems with the Manual and related training material in time for the 2001 filing season. Additionally, IRS officials said they implemented a new guide in October 2000 to make it easier for assistors to follow the proper steps and provide accurate assistance to taxpayers with account questions.

Determining how each factor affects level of service and accuracy is made even more difficult because many of the factors are interrelated; changes in one can affect another. For example, the demand for assistance, or the number of call attempts, is influenced by the level of productivity. Fewer incoming calls make it easier for a given number of assistors to answer a greater percentage of incoming calls. Answering a greater percentage of incoming calls—a higher productivity level—reduces the number of repeat calls, which reduces the number of calls overall. Similarly, the quality of guidance assistors use affects not only accuracy, but also demand. While step-by-step guidance on how to respond to questions would likely improve accuracy levels and service for some taxpayers, it could also cause assistors to take more time answering the call, lower productivity, and increase the number of taxpayers who are unable to get through, causing them to redial, and thereby increase demand.

IRS’ analysis of its telephone assistance performance in the 1999 and 2000 filing seasons was incomplete. Although IRS collected various data and conducted several analyses of performance, the approach either did not assess or assessed incompletely some of the key management decisions and other factors that affected performance. As a consequence, IRS management had less information than it could have on which to make decisions intended to improve future performance.

IRS undertook many efforts in 1999 and 2000 intended to identify factors that affected performance. For example, IRS

- conducted a best practices productivity study in 1999 to identify best practices among IRS call sites and why productivity varied among them;

- reviewed its implementation of 24-hour-a-day, 7-day-a-week assistance to determine its effects on such things as costs and quality of assistance;

- conducted local and centralized monitoring of telephone calls to determine what errors assistors made and why;

- conducted a study in 2000 to determine why productivity had declined;

- established a filing season critique program in 2000 to solicit information from field staff about their problems and successes during the filing season; and

- conducted a 1999 fiscal year business review that addressed many of the factors that affected telephone performance.

In some of its efforts, IRS began analyzing the data made available through management information systems at its Customer Service Operations Center, which opened in December 1998. For example, as a part of the 2000 productivity study noted above, IRS used statistical analysis to assess how productivity was affected by such factors as the complexity of calls handled and assistor experience and education. In a similar analysis, IRS assessed how call demand was affected by such factors as returns filed, notices issued, refunds issued, refund cycle times, and electronic filing return rates.
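The feedback between the share of attempts answered and total demand, described above, can be illustrated with a rough model in which each unanswered attempt is redialed with some fixed probability, so attempts per caller follow a geometric retry process. The redial probability below is an assumption chosen purely for illustration, not an IRS estimate; the unique-caller figure is the 2000 filing season count cited earlier:

# Rough model of redial feedback: each unanswered attempt is redialed
# with probability REDIAL_PROB, so expected attempts per caller equal
# 1 / (1 - (1 - answer_rate) * REDIAL_PROB). Illustrative only.

REDIAL_PROB = 0.8        # assumed for illustration, not an IRS estimate
UNIQUE_CALLERS = 25.9e6  # unique telephone number attempts, 2000 season

def total_attempts(answer_rate):
    return UNIQUE_CALLERS / (1 - (1 - answer_rate) * REDIAL_PROB)

for answer_rate in (0.50, 0.59, 0.70):
    millions = total_attempts(answer_rate) / 1e6
    print(f"answer rate {answer_rate:.0%}: about {millions:.1f} million attempts")

Under these assumptions, raising the share of attempts answered from 50 to 70 percent would cut total attempts from about 43 million to about 34 million, even though the number of taxpayers trying to call is unchanged.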
Although IRS now has better quantitative data to assess its performance and make decisions about ways to improve performance, IRS officials said much work still needs to be done to understand the factors that affect performance. Other leading telephone customer service organizations we studied see the importance of continuous evaluation and of incorporating evaluation results to make improvements. As we said in a recent report on management reform, “an organization cannot improve performance and customer satisfaction if it does not know what it does that causes current levels of performance and customer satisfaction.” IRS’ efforts to evaluate the factors affecting telephone assistance were incomplete and failed to provide IRS management with some significant information that could have been used to improve performance. For example, while IRS did several studies of productivity, the studies relied on handle time as the measure of productivity. Other segments of assistors’ time that would affect overall productivity, including time spent waiting to receive a call, time spent away from the telephone (in meetings, breaks, and training), and time assistors were not assigned to answer calls, were not studied. In another example, the most extensive single review of the factors that affected performance—the 1999 National Office business review—did not assess how extending the hours of service to 24 hours, 7 days a week affected level of service. Earlier, we described how IRS’ disproportionate move of assistors to the night shift created differentials between shifts in the time spent waiting for a call. Furthermore, while the National Office review examined the effects of demand on service, it did not examine why demand increased in 1999. Also, IRS did not evaluate the effectiveness of its management decision not to automatically route calls to assistors as soon as they completed a call, or the several other policy changes noted above, even though they were intended to significantly improve overall performance. The gaps in IRS’ information about the factors affecting past performance impaired IRS’ efforts to improve performance. An important example is the decline in productivity, as measured by handle time. As discussed earlier, some IRS officials believe that taxpayers may have received better service overall if assistors took the time needed to fully resolve taxpayers’ calls. However, IRS had not determined whether overall service improved as a result of increased handle time. Also discussed earlier was the quality of guidance provided to assistors. IRS did not realize the extent of the problems in the Internal Revenue Manual until October 1999, too late to fix them for the 2000 filing season, and the flawed guidance sometimes led to inaccurate answers for taxpayers. IRS’ “balanced measures” performance management system, not the IRS Restructuring and Reform Act of 1998, was the basis for IRS restricting the use of productivity data to evaluate employee performance. The Act, and subsequent regulation, prohibited supervisors from using records of tax enforcement results, or other quantity measures, to impose production quotas on, or evaluate, employees who make judgments about the enforcement of tax laws. When designing and implementing the balanced measures system, IRS management decided to prohibit telephone assistance supervisors from using productivity data when evaluating all assistors, even those who do not make tax enforcement judgments.
The prohibition was intended to promote a more balanced focus by assistors on efficiency, quality, and service. According to Organizational Performance Division officials, the balanced measures system does not prohibit supervisors from using productivity data to monitor employee performance. However, it requires supervisors to “get behind the numbers” and base discussions and evaluations of employee performance solely on the direct review of employees’ work. Officials said IRS’ design of the balanced measures system was heavily influenced by IRS’ environment in 1997 and 1998, during which IRS was under intense pressure from Congress, the administration, and stakeholders to improve service to taxpayers. The National Performance Review Customer Service Task Force and the National Commission on Restructuring the IRS had found that IRS’ overall environment and performance measurement focused on productivity to the detriment of service to taxpayers, leading employees to strive to meet short-term performance and efficiency goals rather than maintain a balanced focus on efficiency, quality, and taxpayer service. IRS officials said the overemphasis on level of service and other productivity measures had resulted in employees perceiving that productivity was more important than quality, so assistors hurried through telephone calls and served taxpayers poorly, rather than taking the time necessary to give the taxpayer full, quality service. Also, officials said supervisors tended to consider measures as ends in themselves, rather than determining the causes behind employee performance and taking action to improve performance. IRS must significantly improve telephone assistance if it is to meet its long-term goal of providing world-class customer service to the tens of millions of taxpayers who call. While IRS has undertaken efforts to analyze its performance and identify ways to improve, these efforts have been incomplete. IRS’ analyses did not cover all of the key management decisions and other key factors that affect telephone performance. Designing and conducting a comprehensive analysis of the key management decisions and other key factors that affect telephone performance in each filing season will be a difficult task because the factors that affect performance are multiple and interrelated. However, without a more comprehensive analysis of the factors that affect performance, IRS management lacks the information it needs to make decisions to improve performance. We recommend that the IRS Commissioner ensure, as part of IRS’ analysis of telephone assistance performance each filing season, that IRS take into account all key management decisions and other key factors that can affect performance, such as implementing 24-hour, 7-day assistance and the decline in assistor productivity, to determine their impact on the quality of service and to make improvements. The Commissioner of Internal Revenue provided written comments on a draft of this report in an April 2, 2001, letter, which is reprinted in appendix I. The Commissioner agreed with our assessment of IRS’ telephone performance during the 2000 filing season and with our recommendation. The Commissioner stated that the assessment of key management decisions and direction should be fully integrated into both the planning process and performance review. He recognized that IRS needed to improve its performance analysis to take into account all key management decisions and other factors that can affect performance.
He stated that this would be done as a part of IRS’ annual filing season evaluation. The Commissioner again expressed concern with our comparison of IRS' performance in the 2000 filing season with its performance in the 1998 filing season, commenting that “comparisons to 1998 are not valid due to the changes made to accommodate our technological advance to a national networked system.” As stated in our evaluation of the Commissioner’s comments on our earlier report, we believe it is appropriate to compare IRS’ performance before and after such operational changes. The changes made after 1998 were intended to improve IRS’ telephone service. The only way to tell if service improved is to compare performance levels before and after the changes. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the date of this letter. At that time, we will send copies to Representative William J. Coyne, Ranking Minority Member of the Subcommittee; Representative William Thomas, Chairman, and Representative Charles B. Rangel, Ranking Minority Member, Committee on Ways and Means; the Honorable Paul H. O’Neill, Secretary of the Treasury; the Honorable Charles O. Rossotti, Commissioner of Internal Revenue; and the Honorable Mitchell E. Daniels, Jr., Director, Office of Management and Budget. We will also make copies available to others upon request. If you have any questions or would like additional information, please call James R. White at (202) 512-9110 or Carl Harris at (404) 679-1900. Key contributors to this report are Ronald W. Jones, Julie Schneiberg, and Sally Gilley.
The Internal Revenue Service (IRS) must significantly improve telephone assistance if it is to meet its long-term goal of providing world-class customer service to the tens of millions of taxpayers who call. Although IRS has tried to analyze its performance and identify ways to improve, these efforts have been incomplete. IRS' analyses did not cover all of the key management decisions and other key factors that affect telephone performance. Designing and conducting a comprehensive analysis of the key management decisions and other key factors that affect telephone performance in each filing season will be difficult because the factors that affect performance are multiple and interrelated. However, without a more comprehensive analysis of the factors that affect performance, IRS lacks the information it needs to make decisions to improve performance.
The Bayh-Dole Act of 1980 (Pub. L. No. 96-517, Dec. 12, 1980) has fostered linkages between universities and businesses by giving universities, other nonprofit organizations, and small businesses the option to retain title to the inventions they make in the course of federally funded research. Before 1980, federal agencies generally retained title rights to any inventions made in the course of the research they funded. Funding recipients seeking to commercialize such inventions often faced long delays and uncertainty when they asked the funding agencies to waive their rights. Since 1980, universities have upgraded and expanded their technology licensing efforts, particularly in such fields as biomedicine and computer technology. Federal agencies and industry also substantially increased their funding of university research—federal funding grew from $8 billion (in 2001 dollars) in fiscal year 1980 to $19.2 billion in fiscal year 2001, and industry funding grew from $461 million (in 2001 dollars) to $2.2 billion during this period. For the Association of University Technology Managers’ survey for fiscal year 2001, U.S. university respondents reported that they (1) executed 3,282 technology licenses and options, (2) received $852 million in gross license income, and (3) held equity in 348, or 70 percent, of the 494 start-up companies that were formed around university-licensed technology. (See app. II for information from our survey about universities’ licensing activities with start-up companies.) OMB Circular A-110 establishes uniform requirements for the administration of federal grants and cooperative agreements with institutions of higher education, hospitals, and other nonprofit organizations. For example, the circular requires that funding recipients submit performance reports to the funding agency at least annually, with a final technical report normally due within 90 days after the grant’s termination or expiration. However, the circular provides flexibility by allowing the agencies to specify the content of these reports or to waive the final technical report. The National Science and Technology Council, established by Executive Order 12881 in November 1993, coordinates the development of governmentwide science and technology policies. For example, the Council’s Subcommittee on Research Business Models is examining the effects of the changing nature of scientific research on business models for conducting federally funded research. In addition, 7 federal agencies, 84 research universities, and 6 other research institutions participate in the Federal Demonstration Partnership (FDP), which seeks to streamline the administrative processes for implementing OMB Circular A-110. In July 1995, the Department of Health and Human Services, which includes NIH, promulgated regulations on Objectivity in Research and NSF revised its Investigator Financial Disclosure Policy to establish consistent requirements for universities and most other grantees to identify and manage financial conflicts of interest. Specifically, the NIH and NSF standards require that funding recipients implement policies for (1) scientists to disclose any “significant financial interests” to an official designated by the institution and (2) institutions to determine whether a real or apparent conflict exists and, if so, take appropriate actions to manage, mitigate, or eliminate the identified conflict. 
Under these regulations, a conflict of interest exists when the institution’s designated official determines that a significant financial interest could directly and significantly affect the research design, conduct, or reporting. The financial benefit may result, for example, from an investigator owning stock in a company providing the research funding, or from an investigator having ownership interest in a company that may profit from a university invention. Conflicting interests are not necessarily unacceptable, and many can be managed through disclosure and oversight. The NIH regulation exceeds the scope of NSF’s policy in some areas. For example, it requires that universities and other funding recipients report every identified possible conflict of interest, while NSF requires that institutions report only those conflicts that have not been resolved. (See app. III for a more detailed comparison of the NIH and NSF requirements.) Federal agencies rely primarily on the university scientists who receive research grants to make their research results available to the public. Each agency encourages grantees to publish research results in the scientific literature, a practice that is steeped in academic tradition. Agriculture, Defense, Energy, EPA, and NASA also disseminate the results of the research they fund by posting scientists’ final technical reports on their Web sites, and Education is considering whether to post research results. While NIH, NSF, and Education do not post research results on their Web sites, they post certain grant information, including abstracts submitted at the time of the award. The eight agencies we examined rely on university scientists to disseminate the results of the research they fund, and their policies explicitly encourage principal investigators and universities to disseminate those results through presentations at scientific conferences and publishing in scientific journals. (See table 1.) Similarly, FDP’s model terms and conditions for research grants state, “The recipient is expected to publish or otherwise make publicly available the results of the work conducted under the award.” Publishing federally funded research results also is vital to university scientists because research publications are key to obtaining future grant awards, gaining professional recognition, and achieving tenure. Agencies also indirectly encourage the dissemination of research results through their grant award practices. Officials at each agency said that peer review panels consider the publication record of the applicant (usually the principal investigator) in assessing the grant proposal. NSF, for example, requires that principal investigators requesting grant renewals include a list of publications generated with NSF’s prior support. Agriculture officials told us that they are less likely to recommend renewal applications for continued funding if the funded project’s results have not been published. Publications indicate to the agencies that the principal investigator has made progress in his/her research and that the results are available to other scientists in the field. However, a research project may not generate publishable results because leading scientific journals require that manuscripts be reviewed by other experts in the field to validate the research findings prior to publication. 
The scientific journal may reject a manuscript because, for example, the reviewers conclude that the work adds little value to the field of study, the results are inadequately supported, or the research failed. All but five of the university respondents reported that they have a policy or standard operating procedure that addresses whether sponsors are allowed to delay the publication of research results under certain circumstances, such as reviewing a manuscript for possible proprietary information or for intellectual property. Three universities—the California Institute of Technology, Howard University, and Iowa State University—reported that they do not permit any publication delays, while 160 universities allow a sponsor to review a manuscript prior to publication—typically from 30 to 90 days. However, 10 universities allow a longer period of up to either 120 days or 180 days, and 1 university allows up to 365 days for the sponsor to review a manuscript for proprietary information. Generally, research sponsors appear to adhere to the universities’ time frames for reviewing manuscripts. Administrators reported the following:
- Fourteen universities were aware of one or more cases during the past 3 years of a sponsor delaying the publication of unclassified and nonsensitive research beyond the university’s time limits.
- Three universities were aware of one or more cases during the past 3 years of a federal sponsor delaying the publication of research involving sensitive, but not classified, information beyond the university’s time limits.
- Thirteen universities were aware of one or more cases during the past 3 years of a federal sponsor blocking, or attempting to block, publication of research involving sensitive, but not classified, information.
However, several university administrators noted during the pretest of our survey instrument that publication delays can occur without the university’s knowledge if the sponsor and the research team reach an accommodation without notifying university administrators. As shown in table 2, Agriculture, Defense, Energy, EPA, and NASA use their Web sites to post research results, in some form, for grants that they issue. For example, EPA posts summaries of annual and final technical reports on its National Center for Environmental Research Web site. These summaries include research accomplishments or findings, the reporting date, EPA agreement number, title, investigators, institution, research category, project period, objective of research, progress summary, conclusions (if applicable), publications/presentations, future activities, supplemental keywords, and other relevant Web sites. EPA’s Web site also allows users to search for publications associated with a particular grant. NASA primarily posts abstracts of final technical reports on its Web site, although NASA plans to post mainly full technical reports in 2004. While Education, NIH, and NSF do not post research results on their Web sites, they post a project abstract written at the time of award stating how the research will be conducted and what researchers hope to accomplish. In November 2002, the Education Sciences Reform Act of 2002 (Pub. L. No. 107-279) established the Institute of Education Sciences and directed it to widely disseminate the findings and results of scientifically valid research in education.
An Education official told us that after the members of the Institute’s National Board of Educational Sciences have been appointed and confirmed, Education will consider how best to fulfill this requirement, particularly for the results of Institute-funded research that have not been peer reviewed. The official noted that the Institute’s National Center for Education Evaluation currently disseminates the results of research performed under contract either through research publications or through its Web site after the results have been peer reviewed. In addition to using their own Web sites, several agencies participate in collaborative Web-based efforts to share information, including research results. For example, Energy’s Office of Scientific and Technical Information maintains Federal R&D Project Summaries, a Web-based portal to summary and award information for Energy, NIH, and NSF research grants. The office also maintains GrayLIT Network, a portal to full-text reports located on the Energy, Defense, EPA, and NASA information systems. In addition, an interagency working group from 11 major science agencies recently initiated the science.gov Web site, which provides a gateway to federal research and development results and other scientific information. Officials at the eight agencies identified both advantages and disadvantages to posting all funded research results on agencies’ Web sites. Most of the agency officials told us that posting technical reports on agencies’ Web sites is an effective way to share information among scientists in the field of research, as well as with the public. In explaining why they have chosen not to post all results on their Web site, NIH and NSF officials cited concerns that grant results posted prior to peer review and publication may be incomplete or incorrect and could mislead other researchers or the public. According to NIH officials, the risk associated with posting results that have not been scrutinized and validated by peer review is simply too great in the biomedical field. In addition, NSF officials were concerned that a scientific journal would reject a manuscript because it views reports posted on the Web as publications. Some agency officials also expressed concern that a final technical report might be posted before the university files a patent application for an invention, thereby preventing the university from obtaining a patent. Among the 171 university respondents to our survey, 91 universities (53 percent) supported posting the grantee’s final technical reports on the agency’s Web site, and 31 universities (18 percent) opposed posting the final technical report, while 49 universities (29 percent) either were uncertain or did not respond. Primary advantages that universities cited for posting final technical reports on an agency’s Web site include facilitating the access of other scientists to research results, facilitating collaboration among scientists, providing prompt dissemination of research results, and providing a public record if the results of a research project are not published.
Primary disadvantages that universities cited for posting final technical reports are the potential for (1) an invention to be prematurely disclosed, (2) a scientific journal to reject a manuscript because it views posted reports as publications, (3) proprietary information to be disclosed, (4) research results to be prematurely disclosed, (5) incomplete or misleading report results to be prematurely disseminated, (6) an investigator to be harassed by opponents of the research, and (7) universities to incur added administrative costs in complying with agency requirements. NIH and NSF, the two largest federal supporters of university research, are the only federal agencies we examined that have adopted standards intended to protect against financial conflicts of interest among university grantees. The other six agencies do not require universities and other grantees to identify and manage possible financial conflicts of interest involving their research. According to officials from these agencies, it is the universities’ responsibility to protect against conflicts of interest in university research. While 87 percent of our survey respondents reported that all of their federally funded research is covered by financial conflict of interest policies that are consistent with either NIH’s or NSF’s standards, 17 universities—including 5 universities in the University of California system—reported that they do not extend either the NIH or the NSF financial conflict of interest requirements to cover research grants funded by other federal agencies. While both NIH and NSF promulgated regulations in 1995 that require universities to implement financial conflict of interest policies, the other six federal agencies do not require that their grantees have similar standards. According to Agriculture and Energy officials, universities should take responsibility for developing and implementing policies for identifying and managing financial conflicts of interest involving their scientists. Defense and NASA officials told us that they have not experienced enough problems to justify adopting financial conflict of interest standards for universities and other grantees. These officials added that the potential for financial conflicts of interest in the scientific fields that they fund is generally lower than in the biomedical field. However, NSF supports research in many of the same fields as these agencies. All of the 171 university respondents to our survey reported that they had one or more policies for addressing possible financial conflicts of interest by research investigators. Of the respondents, 148 universities (87 percent) reported having financial conflict of interest policies consistent with either NIH’s or NSF’s regulations that apply to all federally funded research. More specifically, 135 universities (79 percent) reported that they have a single conflict of interest policy that applies to all of their research.
These universities’ policies are consistent with one of the 10 guidelines that the Association of American Universities’ Task Force on Research Accountability proposed for managing individual conflicts of interest: “Treat research consistently, regardless of funding source—all research projects at an institution, whether federally funded, funded by a non- federal entity, or funded by the institution itself, should be managed by the same conflict of interest process and treated the same.” In contrast, 17 universities reported that some of the federally funded research they perform is not covered by financial conflict of interest policies that are consistent with either NIH’s or NSF’s regulations. For example, 5 universities in the University of California system reported that their financial conflict of interest policies apply to research funded by NIH or NSF, but not to research funded by other federal agencies. The Massachusetts Institute of Technology and Yale University reported that they have specific policies that cover research funded by NIH and NSF, while their institutional policies cover all other funded research. Six other universities did not provide a response. Overall, 124 universities strongly supported, and 25 universities somewhat supported, creating a single financial conflict of interest policy for all federally funded research. Among the other respondents, 19 universities either did not have a strong opinion or did not respond to the question, while only 3 universities either strongly or somewhat opposed a single financial conflict of interest policy for all federally funded research. The university respondents did not agree, however, on which agency’s standards should serve as the basis for a single federal policy: among the 133 universities that expressed an opinion, 72 preferred the NIH regulation, 56 preferred the NSF regulation, while 5 stated that either would be acceptable. To implement their financial conflict of interest policies, 140 of the 171 universities (82 percent) reported that they require scientists to indicate whether or not a conflict may exist when a grant proposal is submitted; 108 universities (63 percent) require scientists to annually submit financial disclosure forms to appropriate institution officials; and 139 universities (81 percent) require scientists to update financial disclosure forms during the year if new possible financial conflicts of interest are identified. A policy that incorporates all three of these requirements is consistent with the Association of American Universities’ Task Force on Research Accountability guideline: “Disclose financial information to the institution—individuals engaged in research should disclose on an annual basis all financial interests related to university research, and provide updated information when new financial circumstances may pose a conflict of interest and when grant applications are submitted.” All but 6 of the 171 universities reported that they require at least one of these three types of financial disclosure. In addition, 56 universities reported that their policy requires that the federal funding agency be notified whenever a financial conflict of interest is identified. In comparison, 62 universities reported that their policies require that only certain federal funding agencies be notified, 49 universities do not have a policy for notifying federal funding agencies about identified financial conflicts of interest, and 4 universities did not respond about their notification policies. 
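As a quick arithmetic check, the survey counts reported above are internally consistent. The short Python sketch below (the category labels are our shorthand, not GAO's) verifies that the notification-policy counts sum to the 171 respondents and that the disclosure-requirement percentages follow from the counts.

```python
# Consistency check on the survey counts reported above: the four
# notification-policy categories should sum to the 171 respondents.
# Category labels are our shorthand, not GAO's.
notification_policy = {
    "notify the funding agency of any identified conflict": 56,
    "notify only certain federal funding agencies": 62,
    "no notification policy": 49,
    "no response": 4,
}
assert sum(notification_policy.values()) == 171

# The disclosure-requirement percentages also follow from the counts.
for count in (140, 108, 139):
    print(f"{count}/171 = {count / 171:.0%}")   # 82%, 63%, 81%
```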
Our survey results indicate that several universities have tightened their policies for financial conflicts of interest in recent years to comply with the NIH and NSF requirements. Specifically, all of our 171 respondents reported that they have financial conflict of interest policies, while a survey reported in the November 2000 issue of the New England Journal of Medicine found that 15 of the 250 institutional respondents (6 percent) did not have a policy on conflicts of interest. In response to the November 2000 survey, NIH reviewed the financial conflict of interest policies of a representative sample of more than 100 universities and other institutions. NIH found that, generally, the institutions had developed policies that reflected a serious desire to inform and assist their investigators in complying with NIH’s regulation. However, NIH found several specific areas of noncompliance and identified four major areas of concern that the institutions’ financial conflict of interest policies need to address: (1) many policies are not separated from other institutional policies through a distinct part, appendix, or document; (2) investigators face an increased burden because many policies do not provide electronic links to supporting information; (3) many policies are confusing because their applicability and terminology are unclear; and (4) many policies include numerous examples of vague language or statements. Upon review of our university survey results, officials at Agriculture, Energy, EPA, and NASA told us that OMB should take the lead in developing a uniform, governmentwide requirement for addressing possible financial conflicts of interest that is consistent with NIH’s and NSF’s standards. NIH and NSF officials also supported developing a uniform requirement that is consistent with their standards. Defense officials said they were ready to work with other federal agencies on governmentwide regulations, if regulations are warranted. OMB and OSTP officials believe that the National Science and Technology Council, which OSTP coordinates, is in the best position to develop a uniform financial conflict of interest standard for federally funded research. A fundamental principle of scientific research is that wide dissemination of research results is vital for validating these results and advancing the field of science. Posting final research reports, or similar information, on federal agencies’ Web sites can advance scientific research by providing other scientists with timely access to research results and facilitating collaboration. Posting this information also provides access to members of the public interested in the research and a public record if the results of agency-funded research are not published, thus maximizing the benefit of the federal investment. For these reasons, five federal agencies, including Energy and NASA, already routinely disseminate research results through their Web sites. While posting research results might create concerns in some fields, such as biomedical research, these concerns are less applicable for Education, which, like Energy and NASA, has a specific statutory requirement to widely disseminate research results.
The growing relationship between universities and businesses since passage of the Bayh-Dole Act has led to an increase in possible financial conflicts of interest, as businesses have increased their funding of university research and some universities have collected more than $10 million in royalties in a given year for technologies they have developed. In response to the NIH and NSF requirements, all of the universities we surveyed have implemented policies for identifying and managing possible conflicts of interest. However, some universities have not extended their policies to cover research funded by other agencies, which also provide substantial amounts of research funding, and OMB Circular A-110 does not address financial conflicts of interest. Unless all federal agencies require that universities have appropriate conflict of interest policies, the government cannot ensure that safeguards are in place to protect the integrity of scientific research and the public’s investment. To better ensure that the findings and results of scientifically valid research in education are widely disseminated, we recommend that the Secretary of Education direct the new Institute of Education Sciences to post the final technical reports of the research it funds on its Web site. To safeguard against bias in the design, conduct, or reporting of federally funded research, we recommend that the National Science and Technology Council coordinate the development of uniform federal requirements for universities and other funding recipients to identify and resolve financial conflicts of interest. The NIH and NSF standards provide a useful starting point for this requirement. We provided Education, OSTP, Agriculture, Defense, Energy, EPA, NASA, NIH, and NSF with a draft of this report for their review and comment. Education agreed with our recommendation to post the results of the research it has funded on its Web site, stating that the department is currently exploring how best to implement the Education Sciences Reform Act’s provisions while not discouraging grantees from having their work published in scientific journals. (See appendix IV for Education’s written comments.) We met with OSTP officials, including the Associate Director for Science, who agreed with the thrust of our recommendation that the National Science and Technology Council coordinate the development of uniform federal requirements to identify and resolve financial conflicts of interest. However, the OSTP officials noted that recent experience in developing a common rule for research misconduct has demonstrated that the process for reaching consensus among federal agencies can be difficult and prolonged. We continue to believe that federal agencies should develop a single, uniform requirement for financial conflicts of interest. Through their experiences in implementing standards since 1995, NIH and NSF can provide important insights into the benefits and costs of alternative approaches in areas where their requirements differ. The Deputy Administrator for Extramural Programs within Agriculture’s Cooperative State Research, Education, and Extension Service stated, in oral comments, that the Service agreed with our recommendation and will, where appropriate, implement financial conflict of interest standards similar to those of NIH and NSF. Defense, Energy, EPA, NASA, NIH, and NSF agreed with the factual presentation of the report. (See app. V for NASA’s written comments, and app. VI for NIH’s written comments.)
Several agencies also provided specific comments to improve the report’s technical accuracy, which we incorporated as appropriate. To assess the actions that federal agencies have taken to ensure the public’s access to authoritative and unbiased scientific research at universities, we examined the policies and procedures of the eight federal agencies that primarily fund university research—Agriculture, Defense, Education, Energy, EPA, NASA, NIH, and NSF. Specifically, we performed the following audit steps:
- To assess agencies’ actions to ensure that the results of the university research they fund are made available to the public, we reviewed each agency’s policies and procedures for disseminating research results and interviewed agency officials. We also accessed the final technical reports for several university grant projects from the Web sites of the five agencies that post research results.
- To assess agencies’ actions to ensure that universities implement policies for identifying and managing possible financial conflicts of interest, we examined whether each agency has regulations or policies requiring that universities identify and manage possible financial conflicts of interest. We also interviewed cognizant officials about their procedures for ensuring that universities are implementing financial conflict of interest policies. We did not examine the extent to which agencies have taken additional actions to protect against financial conflicts of interest for research involving human subjects, a topic examined in a November 2001 GAO report.
- To assess agencies’ actions to implement the Shelby Amendment, we examined the 1999 legislation; OMB’s revisions to Circular A-110; and the actions each agency has taken to implement the circular’s revisions. We also discussed these actions with cognizant agency officials, asked them whether they had received any FOIA requests that cited the Shelby Amendment, and, if so, asked them to provide information about each such request. We then reviewed the agency’s disposition of these FOIA requests.
In addition to our review of federal agencies’ actions, we conducted a Web-based survey of the 200 universities and colleges that received the most federal research funding in fiscal year 2000. The survey contained 42 questions that asked about (1) their policies and procedures for ensuring that federally funded research results are made available to the public, (2) their views of the advantages and disadvantages of posting a grant’s final technical report to the agency’s Web site, (3) their conflict of interest and financial disclosure policies, (4) any FOIA requests federal agencies had received that asked for access to research data, and (5) data on their research funding and technology transfer activities. We pretested the content and format of the questionnaire with research office administrators at the Georgia Institute of Technology, Emory University, Washington University, the University of Missouri, the University of Colorado, the Colorado School of Mines, George Washington University, and the University of Maryland. During the pretest, we asked the administrators to determine whether the survey questions were clear, the terms used were precise, and the questions were unbiased. We also assessed the usability of the Web-based format. We made changes to the content and format of the final questionnaire based on pretest results. We received responses from 171 of the 200 universities surveyed, for a response rate of 86 percent.
Respondents included 44 of the 50 universities that received the most federal funding in fiscal year 2000. We performed analyses to identify inconsistencies in the data and resolved them. The universities’ aggregated responses are available at http://www.gao.gov/special.pubs/gao-04-223sp. We conducted our review from August 2002 through September 2003 in accordance with generally accepted government auditing standards. We are sending copies of this report to the appropriate House and Senate Committees, the Director of OSTP, the Secretary of Agriculture, the Secretary of Defense, the Secretary of Education, the Secretary of Energy, the Administrator of EPA, the Administrator of NASA, the Director of NIH, the Director of NSF, and the Director of OMB. We also will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about the report, please contact me at (202) 512-3841. Key contributors to this report were Richard Cheston, Vondalee Hunt, Ulana Bihun, Donald Pless, and Lynn Musser. The Omnibus Consolidated and Emergency Supplemental Appropriations Act, 1999 (Pub. L. No. 105-277) required the Director of the Office of Management and Budget (OMB) to amend Circular A-110 by incorporating a provision known as the Shelby Amendment. Among other things, the Shelby Amendment requires that (1) federal awarding agencies ensure that all data produced under an award will be made available to the public through the procedures established under the Freedom of Information Act (FOIA) and (2) if the agency obtaining the data does so solely at the request of a private party, the agency may charge a reasonable user fee equaling the incremental cost of obtaining the data. The Shelby Amendment grew out of a controversy that arose over the Environmental Protection Agency’s (EPA) proposal to tighten Clean Air Act standards for small airborne particulates in 1997. EPA’s proposed rule cited the published results of a 30-year epidemiological study funded by the National Institutes of Health (NIH) and conducted by Harvard University. Various industry groups that opposed EPA’s proposed regulation asked to review the original data of the study. However, Harvard denied the requests, citing both confidentiality agreements with human subjects and the volume of data accumulated. On November 6, 1999, OMB published revisions to Circular A-110 in the Federal Register in response to the Shelby Amendment. Under the revision, a subject institution must provide the research data to the funding agency in response to a FOIA request if a federal agency has used the published research findings in developing an agency action that has the force and effect of law. In March 2000, 15 federal agencies published an interim final rule in the Federal Register that codified the OMB Circular A-110 revision. These agencies included Agriculture, Defense, Energy, Education, EPA, the National Aeronautics and Space Administration (NASA), and NIH. National Science Foundation (NSF) officials told us that NSF incorporated the revision by reference to OMB Circular A-110 in its grant agreements. Only NIH and EPA have received FOIA requests citing the Shelby Amendment. In reviewing the requests, both agencies determined that the requests did not meet the OMB Circular A-110 criteria. (See table 3.)
Of the 40 requests received by NIH, 20 requested copies of either funded grant applications or contract records, not research data; 9 requested data generated from grants funded prior to the effective date of the NIH regulation implementing the Shelby Amendment; and 4 were withdrawn. NIH officials told us that NIH determined that the Shelby Amendment did not apply to the remaining seven requests; however, information on the basis for this decision was unavailable because NIH had destroyed the FOIA files 2 years after its final response, in accordance with the National Archives and Records Administration’s (NARA) records retention schedule. EPA denied both requests it received because the requested data were generated by projects funded prior to the effective date of its regulation implementing the revision to OMB Circular A-110. More recently, OMB published a proposed bulletin and guidelines to ensure that agencies conduct peer reviews of the most important scientific and technical information relevant to regulatory policies that they disseminate to the public, and that the peer reviews are reliable, independent, and transparent. The guidance would supplement the requirements that many agencies have for peer review of “significant regulatory information,” which is scientific or technical information that qualifies as “influential” under OMB’s information quality guidelines and is relevant to regulatory policies. Specifically, the proposed guidelines state that, to the extent permitted by law, an agency shall have an appropriate and scientifically rigorous peer review conducted on all significant regulatory information that the agency intends to disseminate. In addition, the proposed guidelines state that, to the extent permitted by law, an agency shall have formal, independent, external peer review conducted for so-called “especially significant regulatory information,” which would apply to significant regulatory information if (1) the agency intends to disseminate the information in support of a major regulatory action, (2) the dissemination of the information could otherwise have a clear and substantial impact on important public policies or important private sector decisions with a possible impact of more than $100 million in any year, or (3) the Administrator of the Office of Information and Regulatory Affairs determines that the information is of significant interagency interest or is relevant to an administration policy priority. Among the 171 respondents to our survey, 155 universities reported that they, or their affiliates, have the option to accept equity as a means of payment for licensed technology. As shown in figure 1, since the enactment of the Bayh-Dole Act in December 1980, these universities have increasingly accepted equity in start-up companies in lieu of license fees and royalties. Prior to the act, only 10 universities accepted equity in start-up companies. As of March 2003, 123 universities reported that they held equity in at least one start-up company, and 44 of these universities reported that they held equity in at least 10 start-up companies. The Massachusetts Institute of Technology held equity in 116 start-up companies at that time. Furthermore, 93 universities reported that they held, on average, less than 10 percent of the start-up companies’ equity, and 31 universities reported that they held, on average, 10 percent or more of the start-up companies’ equity.
While 16 universities limit equity ownership to at most 10 percent, 116 universities reported that their institutional policy does not restrict the percentage of equity ownership they can hold in a start-up company. On July 11, 1995, the Department of Health and Human Services, which includes NIH, promulgated regulations, and NSF revised its Investigator Financial Disclosure Policy, to establish consistent requirements for universities and other grantees, with certain exceptions, to identify and manage real or apparent financial conflicts of interest. The stated purpose of these requirements is to ensure a reasonable expectation that the design, conduct, and reporting of research will be unbiased by any conflicting financial interest of the investigator. The effective date of these standards was October 1, 1995. Both NIH and NSF define a “significant financial interest” as anything of monetary value with the following exceptions:
- salaries, royalties, and remuneration from the applicant institution;
- any ownership interest in the institution, if the institution is an applicant under the Small Business Innovation Research program;
- income from seminars, lectures, teaching engagements, and service on advisory committees or review panels;
- an equity interest that—when aggregated for the investigator, spouse, and dependent children—does not exceed $10,000 and does not represent more than 5 percent ownership interest in a single entity; or
- salary, royalties, or other payments that—when aggregated for the investigator, spouse, and dependent children—do not exceed $10,000 over the next 12 months.
The NIH regulations (42 C.F.R. Part 50 and 45 C.F.R. Part 94) require that each institution, except Phase I applicants for the Small Business Innovation Research program, take the following actions:
- Maintain a written, enforced policy on conflict of interest that complies with the regulations, and inform investigators of the policy. The institution must take reasonable steps to ensure that subgrantees comply with its policy.
- Designate an institutional official who will review financial disclosure statements.
- Require that each investigator submit to the institutional official, by the time the application is submitted for funding, a listing of significant financial interests that would reasonably be affected by the research.
- Provide guidelines for designated officials to identify conflicts of interest and take necessary action to manage, reduce, or eliminate those conflicts. Under the regulations, a conflict of interest exists when the designated official reasonably determines that a significant financial interest could directly and significantly affect the design, conduct, or reporting of the funded research.
- Maintain records for 3 years after the date of the submission of the final report of expenditures.
- Establish adequate enforcement mechanisms and provide for appropriate sanctions.
- Certify in each application for funding that the institution has an administrative process to manage conflicts of interest and that, prior to any expenditure of funds, the institution will report the existence of a conflict and assure that it is being managed, reduced, or eliminated.
If an investigator fails to comply with the institution’s policies and has thereby biased the research, the institution must report the noncompliance immediately to NIH and inform NIH of the action that has been, or will be, taken.
If this failure occurs in a project whose purpose is to evaluate the safety or effectiveness of a drug, medical device, or treatment, the institution must require that it be disclosed in each public presentation of the results of the research. NSF’s policies were developed in close conjunction with the NIH regulations but differ in the following significant respects:
- NSF has no conflict of interest requirement governing subgrantees.
- NSF exempts all entities with fewer than 50 employees from its standard.
- NSF requires that records be retained for 3 years after the termination of the award instead of 3 years after the last financial statement has been submitted.
- NSF requires that the institution provide notification of a conflict of interest only if the institution is unable to resolve the conflict.
- NSF permits research to proceed, in spite of disclosed conflicts, if the review determines that restrictions would be ineffective or that the benefits of proceeding outweigh the consequences of any negative impact. NIH does not address this issue in its policy.
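The dollar and ownership thresholds in the NIH and NSF definitions above reduce to a simple screening rule. The sketch below is illustrative only: the function and parameter names are our own, and it captures just the equity and payment tests, not the full list of exceptions in either regulation.

```python
# Illustrative screen for the NIH/NSF "significant financial interest"
# thresholds described above. Function and parameter names are our own;
# this covers only the equity and payment tests, not every exception.

def equity_is_significant(aggregated_value, ownership_share):
    """Equity aggregated across investigator, spouse, and dependent
    children is exempt only if it is at most $10,000 AND at most a
    5 percent ownership interest in a single entity."""
    return aggregated_value > 10_000 or ownership_share > 0.05

def payments_are_significant(aggregated_next_12_months):
    """Salary, royalties, or other payments aggregated over the next
    12 months are exempt only up to $10,000."""
    return aggregated_next_12_months > 10_000

print(equity_is_significant(8_000, 0.07))    # True: exceeds 5 percent
print(equity_is_significant(9_500, 0.04))    # False: under both limits
print(payments_are_significant(12_500))      # True: exceeds $10,000
```

Note that the two equity conditions combine with "or" in the screen because an interest is exempt only when it stays under both limits at once.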
In fiscal year 2001, federal agencies provided $19 billion for university research, a vital part of the nation's research and development effort. GAO was asked to examine federal agencies' actions to ensure that (1) the results of the university research grants they fund are made available to the public and (2) universities receiving such grants implement policies for identifying and managing possible financial conflicts of interest. GAO reviewed the actions of eight federal agencies and conducted a Web-based survey of 200 leading research universities (refer to GAO-04-223SP). GAO also met with officials in the Office of Science and Technology Policy (OSTP) to discuss the National Science and Technology Council's role in coordinating federal science policy. Each of the eight federal agencies GAO examined relies on university scientists who receive federally funded research grants to make the results available to the public. Although university scientists customarily seek to publish their research results in peer-reviewed journals, agencies cannot require such publication as a condition for funding because it is impossible to ensure in advance that the results will be accepted for publication. Agencies do, however, explicitly encourage funding recipients to make results public. The Departments of Agriculture, Defense, and Energy; the Environmental Protection Agency (EPA); and the National Aeronautics and Space Administration (NASA) also disseminate the results of their funded research by posting them on their Web sites. Officials from these agencies said that posting the results is an effective way to share information among scientists, as well as with the public. In contrast, the National Institutes of Health (NIH) and the National Science Foundation (NSF) do not post research results on their Web sites. According to NIH officials, the risk associated with posting researchers' final reports before they have been validated by peer review is too great in the biomedical field. The Department of Education is considering how best to widely disseminate the results of research it funds. NIH and NSF are the only federal agencies that require universities to implement policies for identifying and managing possible financial conflicts of interest for the research they fund. The other six agencies do not have financial conflict of interest standards for university research grants. Of the 171 universities that responded to the GAO survey, 148 (87 percent) reported that all of their federally funded research is covered by financial conflict of interest policies that are consistent with either NIH's or NSF's standards. However, 17 universities reported that they do not extend either agency's requirements to cover research grants from other federal agencies. Unless federal agencies uniformly require that universities implement conflict of interest policies, the government cannot properly safeguard against financial conflicts of interest that might bias federally funded research.
Despite U.S. and Iraqi efforts to shift a greater share of the country’s defense to the Iraqi security forces, the security situation continues to deteriorate, impeding management of the more than $29 billion obligated for reconstruction and stabilization efforts. The desired end-state for U.S. stabilization operations in Iraq is a peaceful, united, stable, and secure Iraq, well integrated into the international community, and a full partner in the global war on terrorism. To achieve this end-state, the United States is, among other things, (1) training and equipping Iraqi security forces that will be capable of leading counterinsurgency operations, and (2) transferring security responsibilities to Iraqi forces and the Iraqi government as capabilities improve. In October 2003, the multinational force (MNF-I) outlined a multistep plan for transferring security missions to Iraqi security forces. The security transition plan had the objective of neutralizing Iraq’s insurgency while developing Iraqi forces capable of securing their country, allowing a gradual decrease in the number of coalition forces. From the fall of 2003 through April 2006, MNF-I revised its security transition plan several times because the Iraqi government and security forces proved incapable of assuming security responsibilities within the time frames envisioned by the plans. For example, in April 2004, Iraqi police and military units performed poorly during an escalation of insurgent attacks against the coalition. Many Iraqi security forces around the country collapsed, with some units abandoning their posts and responsibilities and in some cases assisting the insurgency. State and DOD have reported some progress in implementing the security transition plan. The State Department has reported that the number of army and police forces that have been trained and equipped increased from about 174,000 in July 2005 to about 323,000 in December 2006. DOD and State also have reported progress in transferring security responsibilities to Iraqi army units and provincial governments. The number of Iraqi army battalions in the lead for counterinsurgency operations increased from 21 in March 2005 to 89 in October 2006. In addition, 7 Iraqi army division headquarters and 30 brigade headquarters had assumed the lead by December 2006. Moreover, by mid-December 2006, three provincial governments—Muthanna, Dhi Qar, and Najaf—had taken over security responsibilities for their provinces. However, the reported progress in transferring security responsibilities to Iraq has not led to improved security conditions (see fig. 1). Since June 2003, overall security conditions in Iraq have deteriorated and grown more complex, as evidenced by the increased numbers of attacks and the Sunni-Shi’a sectarian strife that followed the February 2006 bombing of the Golden Mosque in Samarra. Enemy-initiated attacks against the coalition and its Iraqi partners continued to increase through October 2006 and remain high. The average number of attacks per day increased from about 70 in January 2006 to a record high of about 180 in October 2006. These attacks have increased around major religious and political events, including Ramadan and the elections. Coalition forces are still the primary target of attacks, but the number of attacks on Iraqi security forces and civilians also has increased since 2003.
In October 2006, the State Department reported that the recent increase in violence has hindered efforts to engage with Iraqi partners and illustrates the difficulty of making political and economic progress in the absence of adequate security conditions. Sectarian and militia influences in the Iraqi security forces contribute to the higher levels of violence. According to portions of the January 2007 National Intelligence Estimate on Iraq that were declassified, sectarian divisions have eroded the dependability of many Iraqi units, and a number of Iraqi units have refused to serve outside the areas where they were recruited. According to an August 2006 DOD report, sectarian lines among the Iraqi security forces are drawn geographically, with Sunni, Shi'a, or Kurdish soldiers serving primarily in units located in areas familiar to their group. Further, according to the report, commanders at the battalion level tend to command only soldiers of their own sectarian or regional background. Moreover, in November 2006, the State Department reported that corruption and infiltration by militias and others loyal to parties other than the Iraqi government have made the Iraqi security forces part of the problem in many areas instead of the solution. Because of the poor security conditions, the United States has not been able to draw down the number of U.S. forces in Iraq as early as planned. For example, after the increase in violence and the collapse of Iraqi security forces during the spring of 2004, DOD decided to maintain a force level of about 138,000 troops until at least the end of 2005, rather than reducing the number of troops to 105,000 by May 2004, as had been announced the prior fall. DOD also reversed a decision to significantly reduce the U.S. force level in the spring of 2006 because Iraqi and coalition forces could not contain the rapidly escalating violence of the summer of 2006. Our work has identified weaknesses in the $15.4 billion program to develop Iraqi security forces. Although unit-level transition readiness assessments (TRA) provide detailed information on Iraqi security force capabilities, the aggregate reports that DOD and State provide to Congress do not provide the information needed to determine the complete capabilities of the forces. Consequently, Congress will need additional information to assess the department's supplemental request for $3.8 billion to train and equip Iraqi security forces. GAO has made repeated attempts, without success, to obtain U.S. assessments of Iraqi forces. These data are essential for Congress to make an independent assessment of Iraqi forces' capabilities, needs, and results. Moreover, DOD may be unable to fully account for weapons received by the Iraqi security forces and has yet to clarify which accountability requirements it chose to apply to the program. MNF-I uses the TRA system to determine when units of the Iraqi security forces are capable of assuming the lead for counterinsurgency operations in specific geographic areas. The TRA is a joint assessment, prepared monthly by the unit's coalition commander and Iraqi commander. According to MNF-I guidance, the purpose of the TRA system is to provide commanders with a method to consistently evaluate units; it also helps to identify factors hindering unit progress, determine resource shortfalls, and make resource allocation decisions.
Iraqi army TRA reports contain capabilities ratings in the areas of personnel, command and control, equipment, sustainment/logistics, training, and leadership. Commanders use the TRA results and their professional judgment to determine a unit's overall readiness level. Each Iraqi army unit is assigned a readiness level of 1 through 4, with 1 being the highest level a unit can achieve. DOD and State reports provide some information on the development of Iraqi security forces, but they do not provide detailed information on the specific capabilities that affect the readiness levels of individual units. For example, DOD and State provide Congress with weekly and quarterly reports on the progress made in developing capable Iraqi security forces and transferring security responsibilities to the Iraqi army and the Iraqi government. This information is provided in two key areas: (1) the number of trained and equipped forces, and (2) the number of Iraqi army units and provincial governments that have assumed responsibility for security of specific geographic areas. The State Department reports that the number of trained and equipped Iraqi security forces increased from about 174,000 in July 2005 to about 323,000 in December 2006. However, these numbers do not provide a complete picture of the Iraqi security forces' capabilities, in part because they may overstate the number of forces on duty. For example, Ministry of Interior data include police who are absent without leave, but Ministry of Defense data exclude absent personnel. In addition, poor reporting by the Ministry of Interior makes it difficult to determine how many of the coalition-trained police the ministry still employs or what percentage of the 180,000 police believed to be on the payroll are coalition trained and equipped. Moreover, the numbers do not give detailed information on the status of equipment, personnel, training, or leadership. We previously reported that we were working with DOD to obtain the unit-level TRA reports because they would be useful in more fully informing Congress about the capabilities and needs of Iraq's security forces and in indicating how accurately DOD reports reflect the forces' capabilities. According to MNF-I's Deputy Chief of Staff for Strategic Effects, the best measure of the capabilities of Iraqi units and improvements in the security situation comes from commanders on the ground at the lowest level. Although unit-level TRA reports provide more detailed information on Iraqi security forces' capabilities, DOD had not provided GAO with these unit-level reports as of February 2007. DOD routinely provides GAO access to the readiness levels of U.S. forces. Additionally, DOD and MNF-I may be unable to fully account for weapons issued to the Iraqi security forces, and DOD has not yet clarified what accountability requirements apply to the program. According to our preliminary analysis, as of January 2007, DOD and MNF-I may not be able to account for the Iraqi security forces' receipt of about 90,000 rifles and 80,000 pistols that were reported as issued before early October 2005. Additionally, it is unclear at this time what accountability measures DOD has chosen to apply to the train-and-equip program for Iraq. As part of our ongoing work, we have asked DOD to clarify whether MNF-I and Multi-National Security Transition Command-Iraq (MNSTC-I) must follow accountability measures specified in DOD regulations, or whether DOD has established other accountability measures.
For example, DOD officials expressed differing opinions on whether the DOD regulation on the Small Arms Serialization Program, which requires the entry of small arms serial numbers into a DOD-maintained registry, applies to U.S.-funded equipment procured for Iraqi security forces. While it is unclear which regulations DOD has chosen to apply, beginning in 2004, MNF-I established requirements to control and account for equipment issued to the Iraqi security forces by issuing a series of orders that outline procedures for its subordinate commands. Although MNF-I took initial steps to establish property accountability procedures, according to MNF-I officials, limitations such as the initial lack of a fully operational equipment distribution network, staffing weaknesses, and the operational demands of equipping the Iraqi forces during war hindered its ability to fully execute critical tasks outlined in the property accountability orders. While DOD relies heavily on contractors for reconstruction projects and support to its forces in Iraq, it faces several management and oversight challenges. First, military commanders and senior DOD officials do not have visibility over contractors, which prevents DOD from knowing the extent to which it is relying on contractors for support in Iraq. Second, DOD lacks clear and comprehensive guidance and leadership for managing and overseeing contractors. Third, key contracting issues—including unclear requirements and failure to reach agreement on key terms and conditions in a timely manner—have prevented DOD from achieving successful acquisition outcomes. Fourth, DOD does not have a sufficient number of oversight personnel to ensure that the contracts that are in place are carried out efficiently and according to the contract requirements. Finally, military commanders and contract oversight personnel do not receive sufficient training to effectively manage contracts and contractors in Iraq. DOD continues to lack the capability to provide senior leaders and military commanders with information on the totality of contractor support to deployed forces. Without such visibility, senior leaders and military commanders cannot develop a complete picture of the extent to which they rely on contractors to support their operations. We first reported the need for better visibility in 2002 during a review of the costs associated with U.S. operations in the Balkans. At that time, we reported that DOD was unaware of (1) the number of contractors operating in the Balkans, (2) the tasks those contractors were contracted to do, and (3) the government's obligations to those contractors under the contracts. We noted a similar situation in 2003 in our report on DOD's use of contractors to support deployed forces in Southwest Asia and Kosovo. At that time, we reported that, although most contract oversight personnel had visibility over the individual contracts for which they were directly responsible, visibility of all contractor support at a specific location was practically nonexistent at the combatant commands, component commands, and deployed locations we visited. As a result, commanders at deployed locations had limited visibility and understanding of all contractor activity supporting their operations and frequently had no easy way to get answers to questions about contractor support.
This lack of visibility inhibited the ability of commanders to resolve issues associated with contractor support, such as force protection and the provision of support to contractor personnel. Most recently, in our December 2006 review of DOD's use of contractors in Iraq, we found that DOD's limited visibility increased contracting costs to the government and introduced unnecessary risk. Without visibility over where contractors are deployed and what government support they are entitled to, costs to the government may increase. For example, at a contractor accountability task force meeting we attended in 2006, an Army Materiel Command official cited an Army estimate that about $43 million is lost each year on free meals provided to contractor employees at deployed locations who also receive a per diem food allowance. Also, when senior military leaders began to develop a base consolidation plan, officials were unable to determine how many contractors were deployed and therefore ran the risk of over- or under-building the capacity of the consolidated bases. DOD's October 2005 guidance on contractor support to deployed forces included a requirement that the department develop or designate a joint database to maintain by-name accountability of contractors deploying with the force and a summary of the services or capabilities they provide. The Army has taken the lead in this effort, and DOD recently designated a database intended to provide improved visibility over contractors deployed to support the military in Iraq, Afghanistan, and elsewhere. DOD provided additional information after we briefed the House Appropriations Committee's Subcommittee on Defense. According to DOD, in January 2007, the department designated the Army's Synchronized Predeployment & Operational Tracker (SPOT) as the department-wide database to maintain by-name accountability of all contractors deploying with the force. According to DOD, the SPOT database includes approximately 50,000 contractor names. Additionally, in December 2006, the Defense Federal Acquisition Regulation Supplement was amended to require the use of the SPOT database by contractors supporting deployed forces. Since the mid-1990s, our reports have highlighted the need for clear and comprehensive guidance for managing and overseeing the use of contractors who support deployed forces. For example, in assessing the Logistics Civil Augmentation Program (LOGCAP) implementation during the Bosnian peacekeeping mission in 1997, we identified weaknesses in the available doctrine on how to manage contractor resources, including how to integrate contractors with military units and what type of management and oversight structure to establish. We identified similar weaknesses when we began reviewing DOD's use of contractors in Iraq. For example, in 2003, we reported that guidance and other oversight mechanisms varied widely at the DOD, combatant-command, and service levels, making it difficult to manage contractors effectively. Similarly, in our 2005 report on private security contractors in Iraq, we noted that DOD had not issued any guidance to units deploying to Iraq on how to work with or coordinate efforts with private security contractors. Our prior work has shown that it is important for organizations to provide clear and complete guidance to those involved in program implementation.
In our view, establishing baseline policies for managing and overseeing contractors would help ensure the efficient use of contractors in places such as Iraq. DOD took a noteworthy step to address some of these issues when it issued new guidance in 2005 on the use of contractors who support deployed forces. However, as our December 2006 report made clear, DOD's guidance does not address a number of problems we have repeatedly raised—such as the need to provide adequate contract oversight personnel, to collect and share lessons learned on the use of contractors supporting deployed forces, and to provide DOD commanders and contract oversight personnel with training on the use of contractors overseas before deployment. After our January 30, 2007, briefing to the House Appropriations Committee's Subcommittee on Defense, DOD provided additional information on a new publication it was developing. The department noted that it was developing a joint publication entitled "Contracting and Contractor Management in Joint Operations," which it expects to distribute in May 2007. In addition to the lack of clear and comprehensive guidance for managing contract personnel, we have issued several reports highlighting the need for DOD components to comply with departmental guidance on the use of contractors. For example, in our June 2003 report, we noted that DOD components were not complying with a long-standing requirement to identify essential services provided by contractors and to develop backup plans to ensure that those services continue during contingency operations should the contractors become unavailable. We believe that risk is inherent when relying on contractors to support deployed forces, and without a clear understanding of the potential consequences of not having an essential service available, the risks associated with the mission increase. In other reports, we highlighted our concerns over DOD's planning for the use of contractor support in Iraq—including the need to comply with guidance to identify operational requirements early in the planning process. When contractors are involved in planning efforts early and given adequate time to plan and prepare to accomplish their assigned missions, the quality of the contractor's services improves and contract costs may be lowered. DOD's October 2005 guidance on the use of contractor support to deployed forces went a long way toward consolidating existing policy and providing guidance on a wide range of contractor issues. However, as of December 2006, we found little evidence that DOD components were implementing that guidance, in part because no individual within DOD was responsible for reviewing DOD and service efforts to ensure that the guidance was being consistently implemented. We have made a number of recommendations for DOD to take steps to establish clear leadership and accountability for contractor support issues. For example, in our 2005 report on LOGCAP, we recommended that DOD designate a LOGCAP coordinator with the authority to participate in deliberations and advocate for the most effective and efficient use of the LOGCAP contract. Similarly, in our second comprehensive review of contractors on the battlefield in 2006, we recommended that DOD appoint a focal point within the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics—at a sufficiently senior level and with the appropriate resources—to lead DOD's efforts to improve its contract management and oversight.
DOD generally agreed with these recommendations. In October 2006, DOD established the office of the Assistant Deputy Under Secretary of Defense for Program Support to serve as the office of primary responsibility for contractor support issues. However, as we noted in our December 2006 report, it is not clear to what extent this office would serve as the focal point dedicated to leading DOD's efforts to improve its contract management and oversight. DOD needs to address long-standing contracting issues related to acquisition outcomes. Two of the key factors that promote successful acquisition outcomes are (1) clearly defined requirements and (2) timely agreement on a contract's key terms and conditions, such as the scope and cost. The absence of well-defined requirements and clearly understood objectives complicates efforts to hold DOD and contractors accountable for poor acquisition outcomes. Further, in Iraq, DOD's contracts were often cost-reimbursable contracts, which allow the contractor to be reimbursed for reasonable, allowable, and allocable costs to the extent prescribed in the contracts. When cost-reimbursable contracts such as those used in the reconstruction of Iraq and the support contracts for deployed forces (e.g., LOGCAP) are not effectively managed or given sufficient oversight, the government's risk is likely to increase. For example, we have reported that poorly written statements of work, which included vague or ill-defined requirements, can lead the contractor to take excessive steps to ensure customer satisfaction and result in additional costs to the government. Similarly, we have reported that contract customers need to conduct periodic reviews of services provided under cost-reimbursable contracts to ensure that services are supplied at an appropriate level. Without such reviews, the government is at risk of paying for services it no longer needs. For example, the command in Iraq lowered the cost of the LOGCAP contract by $108 million by reducing services and eliminating unneeded dining facilities and laundries. A prerequisite to achieving good acquisition outcomes is a match between well-defined requirements and available resources. U.S. reconstruction goals were based on assumptions about the money and time needed, which have proven unfounded. U.S. funding was not meant to rebuild Iraq's entire infrastructure but rather to lay the groundwork for a longer-term reconstruction effort that anticipated significant assistance from international donors. To provide that foundation, the Coalition Provisional Authority (CPA) allocated $18.4 billion in fiscal year 2004 reconstruction funds among various projects in each reconstruction sector, such as oil, electricity, and water and sanitation. Almost immediately after the CPA dissolved, the Department of State reprioritized funding for projects that would not begin until mid to late 2005 and used those funds to target high-impact projects. By July 2005, the State Department had conducted a series of funding reallocations to address new priorities, including increasing support for security and law enforcement efforts and oil infrastructure enhancements. One consequence of these reallocations was to reduce funding for the water and sanitation sector by about 44 percent, from $4.6 billion to $2.6 billion. One reallocation of $1.9 billion in September 2004 led the Project and Contracting Office to cancel some projects, most of which were planned to start in mid-2005.
Changes, even those made for good reasons, make it more difficult to manage individual projects to successful outcomes. Further, such changes invariably have a cascading effect on individual contracts. To produce desired outcomes within available funding and required time frames, DOD and its contractors need a clear understanding of reconstruction objectives and how they translate into the terms and conditions of a contract: what goods or services are needed, when they are needed, the level of performance or quality desired, and what the cost will be. When such requirements were not clear, DOD often entered into contract arrangements on reconstruction efforts that posed additional risks, such as authorizing contractors to begin work before key terms and conditions, such as the work to be performed and the projected costs, were fully defined. For example, we found that, as of March 2004, about $1.8 billion had been obligated on reconstruction contract actions without DOD and the contractors reaching an agreement on the final scope and cost of the work. In September 2006, we issued a report on how DOD addressed issues raised by the Defense Contract Audit Agency (DCAA) in its audits of Iraq-related contract costs. We noted that, in cases where DOD authorized contractors to begin work before reaching agreement on the scope or price, DOD contracting officials were less likely to remove costs from a contractor's proposal when DCAA raised questions about them if the contractor had already incurred those costs. For example, of the 18 audit reports we reviewed, DCAA issued 11 reports on contract actions where more than 180 days had elapsed between the beginning of the period of performance and final negotiations. For nine of these audits, the period of performance DOD initially authorized for each contract action concluded before final negotiations took place. In one case, DCAA questioned $84 million in its audit of a task order proposal for an oil mission. In this case, the contractor did not submit a proposal to DOD until a year after the work was authorized, and DOD and the contractor did not negotiate the final terms of the task order until more than a year after the contractor had completed the work. In the final negotiation documentation, the DOD contracting official stated that the payment of incurred costs is required for cost-type contracts if there are no unusual circumstances. In contrast, in the few audit reports we reviewed in which the government negotiated the terms before starting work, we found that the portion of questioned costs removed from the proposal was substantial. An unstable contracting environment—when contract requirements are in a state of flux—requires greater attention to oversight, which in turn relies on a capable government workforce. Having personnel who are trained to conduct oversight and held accountable for their oversight responsibilities is essential for effective oversight of contractors. If surveillance is not conducted, is insufficient, or is not well documented, DOD is at risk of being unable to identify and correct poor contractor performance in a timely manner and of potentially paying too much for the services it receives. On multiple occasions, we and others have reported on deficiencies in DOD's oversight. For example, our June 2004 report found that early contract administration challenges were caused, in part, by a lack of personnel.
In addition, the Special Inspector General noted that, with regard to the CPA, gaps existed in the experience levels of those hired and in the quality and depth of their experience relative to their assigned jobs. Similarly, in 2004, an interagency assessment team found that the number of contracting personnel was insufficient to handle the increased workload. The CPA's decision to award seven contracts in early 2004 to help better coordinate and manage the fiscal year 2004 reconstruction efforts was, in part, a recognition of this shortfall. As a result, DOD is in the position of relying on contractors to help manage and oversee the work of other contractors. More recently, in December 2006, we reported that DOD does not have sufficient numbers of contractor oversight personnel at deployed locations, which limits its ability to obtain reasonable assurance that contractors are meeting contract requirements efficiently and effectively. Although we could find no DOD guidelines on the appropriate number of personnel needed to oversee and manage DOD contracts at a deployed location, several contract oversight personnel stated that DOD does not have adequate personnel at deployed locations to effectively oversee and manage contractors. For example, an Army official acknowledged that the Army is struggling to find the capacity and expertise to provide the contracting support needed in Iraq. In addition, officials responsible for contracting with MNF-I stated that they did not have enough contract oversight personnel and quality assurance representatives to allow MNF-I to reduce the Army's use of the LOGCAP contract by awarding more sustainment contracts for base operations support in Iraq. Furthermore, a LOGCAP program official noted that, if adequate staffing had been in place, the Army could have realized substantial savings on the LOGCAP contract through more effective reviews of new requirements. Finally, the contracting officer's representative for an intelligence support contract in Iraq stated that he was unable to visit all of the locations he was responsible for overseeing. At the locations he did visit, he was able to work with the contractor to improve the project's efficiency; however, because he could not visit every location where the contractor provided services, he was unable to duplicate those efficiencies elsewhere in Iraq. The inability of contract oversight personnel to visit all the locations they are responsible for can also create problems for units that face difficulties resolving contractor performance issues at those locations. For example, officials from a brigade support battalion stated that they had several concerns with the performance of a contractor that provided maintenance for the brigade's mine-clearing equipment. These concerns included delays in obtaining spare parts and a disagreement over the contractor's obligation to provide support in more austere locations in Iraq. According to the officials, their efforts to resolve these problems in a timely manner were hindered because the contracting officer's representative was located in Baghdad while the unit was stationed in western Iraq. In other instances, some contract oversight personnel may not even reside within the theater of operations.
For example, we found that the Defense Contract Management Agency (DCMA) legal personnel responsible for LOGCAP in Iraq were stationed in Germany, while other LOGCAP contract oversight personnel were stationed in the United States. According to a senior DCMA official in Iraq, relying on support from contract oversight personnel outside the theater of operations makes resolving contractor performance issues more difficult for military commanders in Iraq, who are operating under the demands and higher operational tempo of a contingency operation in a deployed location. Since the mid-1990s, our work has shown the need for better pre-deployment training for military commanders and contract oversight personnel on the use of contractor support. Training is essential for military commanders because of their responsibility for identifying and validating requirements to be addressed by the contractor. In addition, commanders are responsible for evaluating the contractor's performance and ensuring the contract is used economically and efficiently. Similarly, training is essential for DOD contract oversight personnel who monitor the contractor's performance for the contracting officer. As we reported in 2003, military commanders and contract management and oversight personnel we met in the Balkans and throughout Southwest Asia frequently cited the need for better preparatory training. Additionally, in our 2004 review of logistics support contracts, we reported that many individuals using logistics support contracts such as LOGCAP were unaware that they had any contract management or oversight roles. Army customers stated that they knew nothing about LOGCAP before their deployment and that they had received no pre-deployment training regarding their roles and responsibilities in ensuring that the contract was used economically and efficiently. In our December 2006 report, we noted that many officials responsible for contract management and oversight in Iraq stated that they received little or no training on the use of contractors prior to their deployment, which led to confusion over their roles and responsibilities. For example, in several instances, military commanders attempted to direct (or ran the risk of directing) a contractor to perform work outside the contract's scope, even though commanders are not authorized to do so. Such cases can result in increased costs to the government. Over the years, we have made several recommendations to DOD intended to strengthen this training. Some of our recommendations were aimed at improving the training of military personnel on the use of contractor support at deployed locations, while others focused on training regarding specific contracts, such as LOGCAP. Our recommendations have sought to ensure that military personnel deploying overseas have a clear understanding of the role of contractors and the support the military provides to them. DOD has agreed with most of our recommendations. However, we continue to find little evidence that DOD has improved training for military personnel on the use of contractors prior to their deployment. DOD provided additional information after we briefed the House Appropriations Committee's Subcommittee on Defense. DOD advised us that it had established a contingency contracting training program at the Defense Acquisition University. While this is a good first step, we note that, according to the course description, the course is intended for contracting professionals.
As we noted, we believe there is also a need to provide training for personnel, such as commanders, who are not contracting professionals but are likely to work with contractor employees on a daily basis. Since 2003, the United States has obligated about $29 billion to help Iraq rebuild its infrastructure and develop Iraqi security forces to stabilize the country. However, key goals have not been met, and the Iraqi government has not sustained these efforts, in part because of the lack of management and human resource skills in Iraq's key ministries. According to U.S. officials, the inability of the Iraqi government to spend its 2006 capital budget also increases the uncertainty that it can sustain the rebuilding effort. The United States has obligated about $14 billion to restore essential services such as oil, electricity, and water, and more than $15 billion to train, equip, and sustain Iraqi security forces. Reconstruction has focused on projects such as repairing oil facilities, increasing electricity generating capacity, and restoring water treatment plants. For example, the U.S. Army Corps of Engineers reported that it had completed 293 of 523 planned electrical projects, including the installation of 35 natural gas turbines in Iraqi power generation plants. Stabilization efforts have focused on MNF-I training and equipping approximately 323,000 Iraqi security forces. To help sustain these forces, MNF-I is assisting Iraq's Ministries of Defense and Interior in funding and building logistics systems for the military and police. The military logistics system includes a national depot, regional logistics centers, and garrison support units. The draft logistics plan for the police called for a system of warehouses to perform maintenance on equipment and distribution centers to dispense supplies. The United States has spent billions of dollars rebuilding the infrastructure and developing Iraqi security forces. However, the Iraqi government has had difficulty operating and sustaining the aging oil infrastructure, maintaining the new and rehabilitated power generation facilities, and developing and sustaining the logistics systems for the Ministries of Defense and Interior. The coalition provides the critical support necessary for the ministries to carry out their security responsibilities. As of December 2006, neither ministry was self-sufficient in logistics, command and control, or intelligence. For example, Iraq's oil production and exports have consistently fallen below their respective program goals. In 2006, oil production averaged 2.1 million barrels per day, compared with the U.S. goal of 3.0 million barrels per day. The Ministry of Oil has had difficulty operating and maintaining the refineries. According to U.S. officials, Iraq lacks qualified staff and expertise at the field, plant, and ministry levels, as well as an effective inventory control system for spare parts. According to State, the Ministry of Oil will have difficulty maintaining future production levels unless it initiates an ambitious rehabilitation program. In addition, oil smuggling and theft of refined oil products have cost Iraq substantial resources. In 2006, peak electricity generation reached 4,317 megawatts per day, falling short of the U.S. goal of 6,000 megawatts. Prewar output averaged 4,200 megawatts per day. Production also was outpaced by increasing demand, which averaged about 8,210 megawatts per day.
The Iraqi government has had difficulty sustaining the existing facilities. Problems include the lack of training, inadequate spare parts, and an ineffective asset management and parts inventory system. Moreover, plants are sometimes operated beyond their recommended limits, resulting in longer downtimes for maintenance. In addition, major transmission lines have been repeatedly sabotaged, and repair workers have been intimidated by anti-Iraqi forces. As of December 2006, the coalition was providing significant levels of support to the Iraqi military because the Ministry of Defense could not fully supply its forces with adequate life support, fuel, uniforms, building supplies, ammunition, vehicle maintenance and spare parts, or medical supplies. In addition, the ministry was not able to run its communications networks on its own or independently acquire communications equipment. Furthermore, the ministry will likely lack a comprehensive plan for its intelligence structure until December 2007. Although the coalition plans to begin turning over certain support functions to ministerial control in the spring of 2007, it is unlikely that the Ministry of Defense will achieve complete self-sufficiency in logistics, command and control, or intelligence before mid-2008. The Ministry of Interior also receives critical support from the coalition and is not self-sufficient in logistics, command and control, or intelligence. Because the ministry is unable to provide maintenance for the vehicles of the national police, the coalition has let several contracts to train Iraqi mechanics, provide spare parts to contractors, and repair police vehicles. In addition, the ministry cannot independently operate or maintain its communications networks. Furthermore, the coalition estimates that, if the security environment in Baghdad improves, the ministry's intelligence organization will be self-sufficient by mid-2008. However, if this self-sufficiency depends on improved security, there may be cause for concern, given that the average number of attacks per day rose from about 70 in January 2006 to a record high of about 180 in October 2006. Although the coalition plans to begin turning over certain support functions to ministerial control in the spring of 2007, it is unlikely that the Ministry of Interior will achieve complete self-sufficiency in logistics, command and control, or intelligence before mid-2008. Iraqi government institutions are undeveloped and confront significant challenges in staffing a competent, non-partisan civil service; effectively fighting corruption; using modern technology; and managing resources effectively. Figure 2 provides an organizational chart of the Iraqi executive branch and ministries. The Iraqi civil service remains hampered by inadequately trained or unskilled staff whose political and sectarian loyalties jeopardize the ministries' ability to provide basic services and build credibility among Iraqi citizens, according to U.S. government reports and international assessments. A U.S. report states that the government ministries and their associated budgets are used as sources of power for political parties, with ministry positions staffed with party cronies as a reward for political loyalty. According to U.S. officials, patronage leads to staff instability, as many staff are replaced when the government changes or a new minister is named.
Some Iraqi ministries, including the Ministries of Interior, Agriculture, Health, Transportation, and Tourism, are led by ministers whose allegiance is to political parties hostile to U.S. goals. These ministers use their positions to pursue partisan agendas that conflict with the goal of building a government that represents all ethnic groups. U.S. officials have expressed reservations about working in some of these ministries, noting that the effectiveness of programs is hampered by the presence of unresponsive or anti-U.S. officials. Corruption in Iraq is reportedly widespread and also poses a major challenge to building an effective Iraqi government. Corruption jeopardizes future flows of needed international assistance and reportedly undermines the government's ability to make effective use of current reconstruction assistance. According to U.S. government and World Bank reports, there are several reasons for corruption in Iraq. The reasons include, among others, (1) an ineffective banking system that leaves the government dependent on cash transactions; (2) nontransparent, obsolete ministry procurement systems; and (3) ineffective, inadequately resourced accountability institutions, such as the ministries' inspectors general. GAO and the inspectors general are working with Iraq's accountability organizations—the Board of Supreme Audit, the Commission on Public Integrity, and the inspectors general of the ministries—to strengthen their capabilities. The Iraqi ministries lack adequate information technology and have difficulty managing their resources, according to U.S. officials and an international assessment, further contributing to the corruption problem. For example, U.S. officials said that the Ministry of Interior relies on manual processes such as hand-written ledgers and a cash-based payroll system that has resulted in Iraqi police leaving their posts to deliver cash to their families. U.S. officials also estimated that 20 to 30 percent of Ministry of Interior personnel are "ghost employees"—nonexistent staff whose salaries are collected by other officials. Sound government budgeting practices can help determine the priorities of the new government, provide transparency on government operations, and help decision makers weigh competing demands for limited resources. However, unclear budgeting and procurement rules have affected Iraq's efforts to spend capital budgets effectively and efficiently, according to U.S. officials. The inability to spend the money raises serious questions for the government, which has to demonstrate to skeptical citizens that it can improve basic services and make a difference in their daily lives. The U.S. government has launched a series of initiatives in conjunction with other donors to address this issue and improve the Iraqi government's budget execution. As of August 2006, the government of Iraq had spent, on average, 8 percent of its annual capital goods budget and 14 percent of its annual capital projects budget. Some of the weakest spending occurs at the Ministry of Oil, which relies on damaged and outdated infrastructure to produce the oil that provides nearly all of the country's revenues. The Ministry of Oil's $3.5 billion 2006 capital projects budget targeted key enhancements to the country's oil production, distribution, and export facilities. However, as of August 2006, the ministry had spent less than 1 percent of these budgeted funds.
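To put these budget execution figures in concrete terms, the short sketch below works through the arithmetic. It is purely illustrative: the function name and the figures for the hypothetical ministry in the second example are ours, not drawn from any GAO or Iraqi government system; only the $3.5 billion budget and the cited percentages come from this statement.

```python
# Illustrative arithmetic only; execution_rate and the hypothetical figures
# below are ours, not from any GAO or Iraqi government system.
def execution_rate(spent, budgeted):
    """Percent of budgeted funds actually spent."""
    return 100.0 * spent / budgeted

# Ministry of Oil: spending "less than 1 percent" of a $3.5 billion capital
# projects budget means less than $35 million actually spent.
oil_budget = 3.5e9
print(f"1 percent of ${oil_budget / 1e9:.1f} billion is "
      f"${0.01 * oil_budget / 1e6:.0f} million")

# A hypothetical ministry that spent $280 million of a $3.5 billion budget
# would match the 8 percent government-wide average for capital goods.
print(f"Execution rate: {execution_rate(280e6, 3.5e9):.0f} percent")
```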
Moreover, Interior and Defense had spent only about 11 and 1 percent, respectively, of their capital goods budgets, which include funds for the purchase of weapons, ammunition, and vehicles, among other items. According to U.S. officials, Iraq lacks clearly defined and consistently applied budget and procurement rules needed for effective budget planning and implementation. The ministries have multiple rules and regulations promulgated under the former regime, the CPA, and the current government. The lack of procurement and budgeting rules creates opportunities for corruption and mismanagement. Table 1 provides further information on the Iraqi ministries' efforts to spend their capital budgets.

As I have discussed in my statement today, a number of conditions exist in Iraq that have led or will lead to fraud, waste, and abuse of U.S. funds and will affect the U.S. effort to achieve our security, economic, and diplomatic goals in Iraq. Addressing these problems will require complete and transparent information on the progress made to reasonably judge our past efforts and determine future directions. This includes more accurate, reliable, and comprehensive information on the cost of the war, the capabilities of Iraqi security forces, and the results of U.S. efforts to build the managerial capacity of the Iraqi ministries. Furthermore, given DOD's heavy and increasing reliance on contractors in Iraq and elsewhere, and the risks this reliance entails, it may be appropriate to ask whether DOD has become too reliant on contractors to provide essential services. Moreover, given the pace of activities during contingency operations, it is essential that DOD and other government agencies engage, as early as possible, in (1) identifying potential support requirements, (2) locating contractors capable of providing support and negotiating with them to provide this support in a timely and cost-effective manner, and (3) planning for additional military and civilian personnel to oversee and manage this increase in contractor activities.

Mr. Chairman and members of the committee, this concludes my statement. I will be happy to answer any questions you may have. For questions regarding this testimony, please call Joseph A. Christoff, Director, International Affairs and Trade, at (202) 512-8979; John Hutton, Acting Director, Acquisition and Sourcing Management, at (202) 512-4841; or William Solis, Director, Defense Capabilities and Management, at (202) 512-8365. Other key contributors to this statement were Nanette Barton, Dan Cain, Carole Coffey, Allisa Czyz, Tim DiNapoli, Mattias Fenton, Whitney Havens, Patrick Hickey, Wesley Johnson, Hynek Kalkus, Judy McCloskey, Tet Miyabara, James A. Reynolds, Chris Turner, and Marilyn Wasleski.
This testimony discusses some of the systemic conditions in Iraq that contribute to the fraud, waste, or abuse of U.S.-provided funds. Since 2003, DOD has reported total costs of about $257.5 billion for military operations in Iraq; these have increased from about $38.8 billion in fiscal year 2003 to about $83.4 billion in fiscal year 2006. The largest increase has been in operation and maintenance expenses, including items such as support for housing, food, and services; the repair of equipment; and the transportation of people, supplies, and equipment. Many of the operation and maintenance expenses are for services. Other U.S. government agencies had reported obligations of $29 billion for Iraqi reconstruction and stabilization as of October 2006. These funds have been used for, among other things, infrastructure repair in the electricity, oil, water, and health sectors; training and equipping of the Iraqi security forces; and administrative expenses. Specifically, the testimony focuses on (1) security, (2) management and reporting of the program to train and equip Iraqi security forces, (3) contracting and contract management activities, and (4) Iraqi capacity and commitment to manage and fund reconstruction and security efforts. Despite U.S. and Iraqi efforts to shift a greater share of the country's defense to Iraqi forces, the security situation continues to deteriorate. Poor security conditions have hindered the management of the more than $29 billion that has been obligated for reconstruction and stabilization efforts since 2003. Although the State Department has reported that the number of Iraqi army and police forces that have been trained and equipped increased from about 174,000 in July 2005 to about 323,000 in December 2006, overall security conditions in Iraq have deteriorated and grown more complex. These conditions have hindered efforts to engage with Iraqi partners and demonstrate the difficulty of making political and economic progress in the absence of adequate security. GAO's ongoing work has identified weaknesses in the $15.4 billion program to support the development and sustainment of Iraqi security forces. Sectarian divisions have eroded the dependability of many Iraqi units, and a number of Iraqi units have refused to serve outside the areas where they were recruited. Corruption and infiltration by militias and others loyal to parties other than the Iraqi government have made the Iraqi security forces part of the problem in many areas instead of the solution. While unit-level transition readiness assessments (TRA) provide important information on Iraqi security force capabilities, the aggregate reports DOD provides to Congress based on these assessments do not provide adequate information to judge the capabilities of Iraqi forces. The DOD reports do not detail the adequacy of Iraqi security forces' manpower, equipment, logistical support, or training and may overstate the number of forces on duty. Congress will need the additional information found in the TRAs to assess DOD's supplemental request for funds to train and equip Iraqi security forces. DOD's heavy reliance on contractors in Iraq, its long-standing contract and contract management problems, and poor security conditions provide opportunities for fraud, waste, and abuse. First, military commanders and senior DOD leaders do not have visibility over the total number of contractors who are supporting deployed forces in Iraq.
As we have noted in the past, this limited visibility can unnecessarily increase costs to the government. Second, DOD lacks clear and comprehensive guidance and leadership for managing and overseeing contractors. In October 2005, DOD issued, for the first time, department-wide guidance on the use of contractors that support deployed forces. Although this guidance is a good first step, it does not address a number of problems we have repeatedly raised. Third, key contracting issues have prevented DOD from achieving successful acquisition outcomes. Requirements have often been ill defined, and DOD has often entered into contract arrangements for reconstruction efforts and for support to deployed forces that posed additional risk to the government. Further, a lack of training hinders the ability of military commanders to adequately plan for the use of contractor support and inhibits the ability of contract oversight personnel to manage and oversee contracts and contractors in Iraq. Iraqi capacity and commitment to manage and fund reconstruction and security efforts remain limited. Key ministries face challenges in staffing a competent and non-partisan civil service, fighting corruption, and using modern technology. The inability of the Iraqi government to spend its 2006 capital budget also increases the uncertainty that it can sustain the rebuilding effort.
Child pornography is prohibited by federal statutes, which provide for civil and criminal penalties for its production, advertising, possession, receipt, distribution, and sale. Defined by statute as the visual depiction of a minor—a person under 18 years of age—engaged in sexually explicit conduct, child pornography is unprotected by the First Amendment, as it is intrinsically related to the sexual abuse of children. In the Child Pornography Prevention Act of 1996, Congress sought to prohibit images that are or appear to be "of a minor engaging in sexually explicit conduct" or are "advertised, promoted, presented, described, or distributed in such a manner that conveys the impression that the material is or contains a visual depiction of a minor engaging in sexually explicit conduct." In 2002, in Ashcroft v. Free Speech Coalition, the Supreme Court struck down this legislative attempt to ban "virtual" child pornography, ruling that the expansion of the act to material that did not involve, and thus harm, actual children in its creation was an unconstitutional violation of free speech rights. According to government officials, this ruling may increase the difficulty of prosecuting those who produce and possess child pornography. Defendants may claim that pornographic images are of "virtual" children, thus requiring the government to establish that the children shown in these digital images are real. Recently, Congress enacted the PROTECT Act, which attempts to address the constitutional issues raised by the Free Speech Coalition decision. Historically, pornography, including child pornography, tended to be found mainly in photographs, magazines, and videos. With the advent of the Internet, however, both the volume and the nature of available child pornography have changed significantly. The rapid expansion of the Internet and its technologies, the increased availability of broadband Internet services, advances in digital imaging technologies, and the availability of powerful digital graphics programs have led to a proliferation of child pornography on the Internet. According to experts, pornographers have traditionally exploited—and sometimes pioneered—emerging communication technologies, from the dial-in bulletin board systems of the 1970s to the World Wide Web, to access, trade, and distribute pornography, including child pornography. Today, child pornography is available through virtually every Internet technology (see table 1). Among the principal channels for the distribution of child pornography are commercial Web sites, Usenet newsgroups, and peer-to-peer networks. Web sites. According to recent estimates, there are about 400,000 commercial pornography Web sites worldwide, with some of the sites selling pornographic images of children. The child pornography trade on the Internet is not only profitable but also has worldwide reach: recently a child pornography ring was uncovered that included a Texas-based firm providing credit card billing and password access services for one Russian and two Indonesian child pornography Web sites. According to the U.S. Postal Inspection Service, the ring grossed as much as $1.4 million in just 1 month selling child pornography to paying customers. Usenet. Usenet newsgroups also provide access to pornography, with several of the image-oriented newsgroups being focused on child erotica and child pornography.
These newsgroups are frequently used by commercial pornographers who post "free" images to advertise adult and child pornography available for a fee from their Web sites. Peer-to-peer networks. Although peer-to-peer file-sharing programs are largely known for the extensive sharing of copyrighted digital music, they are emerging as a conduit for the sharing of pornographic images and videos, including child pornography. In a recent study by congressional staff, a single search for the term "porn" using a file-sharing program yielded over 25,000 files. In another study, focused on the availability of pornographic video files on peer-to-peer sharing networks, about 3.7 percent of a sample of 507 pornographic video files retrieved with a file-sharing program were child pornography videos. Table 2 shows the key national organizations and agencies that are currently involved in efforts to combat child pornography on peer-to-peer networks. The National Center for Missing and Exploited Children (NCMEC), a federally funded nonprofit organization, serves as a national resource center for information related to crimes against children. Its mission is to find missing children and prevent child victimization. The center's Exploited Child Unit operates the CyberTipline, which receives child pornography tips provided by the public; its CyberTipline II also receives tips from Internet service providers. The Exploited Child Unit investigates and processes tips to determine whether the images in question constitute a violation of child pornography laws. The CyberTipline provides investigative leads to the Federal Bureau of Investigation (FBI), the U.S. Customs Service, the Postal Inspection Service, and state and local law enforcement agencies. The FBI and the Customs Service also investigate leads from Internet service providers via the Exploited Child Unit's CyberTipline II. The FBI, Customs Service, Postal Inspection Service, and Secret Service have staff assigned directly to NCMEC as analysts. Two organizations in the Department of Justice have responsibilities regarding child pornography: the FBI and the Justice Criminal Division's Child Exploitation and Obscenity Section (CEOS). The FBI investigates various crimes against children, including federal child pornography crimes involving interstate or foreign commerce. It deals with violations of child pornography laws related to the production of child pornography; the selling or buying of children for use in child pornography; and the transportation, shipment, or distribution of child pornography by any means, including by computer. CEOS prosecutes child sex offenses and trafficking in women and children for sexual exploitation. Its mission includes the prosecution of individuals who possess, manufacture, produce, or distribute child pornography; use the Internet to lure children to engage in prohibited sexual conduct; or traffic in women and children interstate or internationally to engage in sexually explicit conduct. Two other organizations have responsibilities regarding child pornography: the Customs Service (now part of the Department of Homeland Security) and the Secret Service in the Department of the Treasury. The Customs Service targets illegal importation and trafficking in child pornography and is the country's front line of defense in combating child pornography distributed through various channels, including the Internet. Customs is involved in cases with international links, focusing on pornography that enters the United States from foreign countries.
The Customs CyberSmuggling Center has the lead in the investigation of international and domestic criminal activities conducted on or facilitated by the Internet, including the sharing and distribution of child pornography on peer-to-peer networks. Customs maintains a reporting link with NCMEC, and it acts on tips received via the CyberTipline from callers reporting instances of child pornography on Web sites, Usenet newsgroups, chat rooms, or the computers of users of peer-to-peer networks. The center also investigates leads from Internet service providers via the Exploited Child Unit's CyberTipline II. The U.S. Secret Service does not investigate child pornography cases on peer-to-peer networks; however, it does provide forensic and technical support to NCMEC, as well as to state and local agencies involved in cases of missing and exploited children. Child pornography is easily shared and accessed through peer-to-peer file-sharing programs. Our analysis of 1,286 titles and file names identified through KaZaA searches on 12 keywords showed that 543 (about 42 percent) had titles or file names associated with child pornography. Of the remaining files, 34 percent were classified as adult pornography and 24 percent as nonpornographic (see fig. 1). No files were downloaded for this analysis. The ease of access to child pornography files was further documented by the retrieval and analysis of image files, performed on our behalf by the Customs CyberSmuggling Center. Using 3 of the 12 keywords that we had used to document the availability of child pornography files, a CyberSmuggling Center analyst used KaZaA to search for and identify 305 files, including files containing multiple images and duplicates, and downloaded 341 images from those files. The CyberSmuggling Center's analysis showed that 149 (about 44 percent) of the 341 downloaded images contained child pornography (see fig. 2). The center classified the remaining images as child erotica (13 percent), adult pornography (29 percent), or nonpornographic (14 percent). These results are consistent with the observations of NCMEC, which has stated that peer-to-peer technology is increasingly popular for the dissemination of child pornography. However, peer-to-peer networks are not the most prominent source of child pornography. As shown in table 3, since 1998, most of the child pornography referred by the public to the CyberTipline has been found on Internet Web sites. Since 1998, the center has received over 76,000 reports of child pornography, of which 77 percent concerned Web sites and only 1 percent concerned peer-to-peer networks. Web site referrals have grown from about 1,400 in 1998 to over 26,000 in 2002—about a nineteenfold increase. NCMEC did not track peer-to-peer referrals until 2001. In 2002, peer-to-peer referrals increased more than fourfold, from 156 to 757, reflecting the increased popularity of file-sharing programs. Juvenile users of peer-to-peer networks face a significant risk of inadvertent exposure to pornography when searching for and downloading images. In a search using innocuous keywords likely to be used by juveniles searching peer-to-peer networks (such as the names of popular singers, actors, and cartoon characters), almost half the images downloaded were classified as adult or cartoon pornography.
Juvenile users may also be inadvertently exposed to child pornography through such searches, but the risk of such exposure is smaller than that of exposure to pornography in general. To document the risk of inadvertent exposure of juvenile users to pornography, the Customs CyberSmuggling Center performed KaZaA searches using innocuous keywords likely to be used by juveniles. The center's image searches used three keywords representing the names of a popular female singer, child actors, and a cartoon character. A center analyst performed the search, retrieval, and analysis of the images. These searches produced 157 files, some of which were duplicates. From these 157 files, the analyst was able to download 177 images. Figure 3 shows our analysis of the CyberSmuggling Center's classification of the 177 downloaded images. We determined that 61 images contained adult pornography (34 percent), 24 images consisted of cartoon pornography (14 percent), 13 images contained child erotica (7 percent), and 2 images (1 percent) contained child pornography. The remaining 77 images were classified as nonpornographic. Because law enforcement agencies do not track the resources dedicated to specific technologies used to access and download child pornography on the Internet, we were unable to quantify the resources devoted to investigations concerning peer-to-peer networks. These agencies (including the FBI, CEOS, and Customs) do devote significant resources to combating child exploitation and child pornography in general. Law enforcement officials told us, however, that as tips concerning child pornography on peer-to-peer networks increase, they are beginning to focus more law enforcement resources on this issue. Table 4 shows the levels of funding related to child pornography issues that the primary organizations reported for fiscal year 2002, as well as a description of their efforts regarding peer-to-peer networks in particular. An important new resource to facilitate the identification of the victims of child pornographers is the National Child Victim Identification Program, run by the CyberSmuggling Center. This resource is a consolidated information system containing seized images that is designed to allow law enforcement officials to quickly identify and combat the current abuse of children associated with the production of child pornography. The system's database is being populated with all known and unique child pornographic images obtained from national and international law enforcement sources and from CyberTipline reports filed with NCMEC. It will initially hold over 100,000 images collected by federal law enforcement agencies from various sources, including old child pornography magazines. According to Customs officials, this information will help, among other things, to determine whether actual children were used to produce child pornography images by matching them with images of children from magazines published before modern imaging technology was invented. Such evidence can be used to counter the assertion that only virtual children appear in certain images. The system, which became operational in January 2003, is housed at the Customs CyberSmuggling Center and can be accessed remotely in "read only" format by the FBI, CEOS, the U.S. Postal Inspection Service, and NCMEC. In summary, Mr. Chairman, our work shows that child pornography as well as adult pornography is widely available and accessible on peer-to-peer networks.
Even more disturbing, we found that peer-to-peer searches using seemingly innocent terms that clearly would be of interest to children produced a high proportion of pornographic material, including child pornography. The increase in reports of child pornography on peer-to-peer networks suggests that this problem is increasing. As a result, it will be important for law enforcement agencies to follow through on their plans to devote more resources to this technology and continue their efforts to develop effective strategies for addressing this problem. Mr. Chairman, this concludes my statement. I would be pleased to answer any questions that you or other Members of the Committee may have at this time. If you should have any questions about this testimony, please contact me at (202) 512-6240 or by E-mail at koontzl@gao.gov. Key contributors to this testimony were Barbara S. Collier, Mirko Dolak, James M. Lager, Neelaxi V. Lakhmani, James R. Sweetman, Jr., and Jessie Thomas. Peer-to-peer file-sharing programs represent a major change in the way Internet users find and exchange information. Under the traditional Internet client/server model, access to information and services is accomplished by interaction between clients—users who request services—and servers—providers of services, usually Web sites or portals. Unlike this traditional model, the peer-to-peer model enables consenting users—or peers—to directly interact and share information with each other, without the intervention of a server. A common characteristic of peer-to-peer programs is that they build virtual networks with their own mechanisms for routing message traffic. The ability of peer-to-peer networks to provide services and connect users directly has resulted in a large number of powerful applications built around this model. These range from the SETI@home network (where users share the computing power of their computers to search for extraterrestrial life) to the popular KaZaA file-sharing program (used to share music and other files). As shown in figure 4, there are two main models of peer-to-peer networks: (1) the centralized model, in which a central server or broker directs traffic between individual registered users, and (2) the decentralized model, based on the Gnutella network, in which individuals find each other and interact directly. As shown in figure 4, in the centralized model, a central server/broker maintains directories of shared files stored on the computers of registered users. When Bob submits a request for a particular file, the server/broker creates a list of files matching the search request by checking it against its database of files belonging to users currently connected to the network. The broker then displays that list to Bob, who can then select the desired file from the list and open a direct link with Alice’s computer, which currently has the file. The download of the actual file takes place directly from Alice to Bob. This broker model was used by Napster, the original peer-to-peer network, facilitating mass sharing of material by combining the file names held by thousands of users into a searchable directory that enabled users to connect with each other and download MP3 encoded music files. Because much of this material was copyrighted, Napster as the broker of these exchanges was vulnerable to legal challenges, which eventually led to its demise in September 2002. In contrast to Napster, most current-generation peer-to-peer networks are decentralized. 
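Before turning to the decentralized model, the broker-based search just described can be illustrated with a minimal sketch. All names and files here are hypothetical, and the sketch abstracts away networking entirely; it shows only the core idea that a central index maps file names to the peers currently sharing them, while the download itself occurs peer to peer.

```python
# Minimal sketch (hypothetical names) of the centralized "broker" model:
# a central index maps file names to the peers currently sharing them,
# and downloads then occur directly between peers.

class Broker:
    def __init__(self):
        self.index = {}  # file name -> set of peers sharing it

    def register(self, peer, files):
        for name in files:
            self.index.setdefault(name, set()).add(peer)

    def search(self, term):
        # Return (file name, peer) pairs whose names match the search term.
        return [(name, peer)
                for name, peers in self.index.items()
                if term.lower() in name.lower()
                for peer in peers]

broker = Broker()
broker.register("alice", ["song.mp3", "vacation.jpg"])
broker.register("carol", ["song.mp3"])

for name, peer in broker.search("song"):
    print(f"{name} available from {peer}")  # Bob downloads directly from that peer
```

The broker's central index made Napster efficient to search but also easy to hold legally responsible; the decentralized networks discussed next dispense with it.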
Because they do not depend on the server/broker that was the central feature of the Napster service, these networks are less vulnerable to litigation from copyright owners, as pointed out by Gartner. In the decentralized model, no brokers keep track of users and their files. To share files using the decentralized model, Ted starts with a networked computer equipped with a Gnutella file-sharing program such as KaZaA or BearShare. Ted connects to Carol, Carol to Bob, Bob to Alice, and so on. Once Ted's computer has announced that it is "alive" to the various members of the peer network, it can search the contents of the shared directories of the peer network members. The search request is sent to all members of the network, starting with Carol; members will in turn send the request to the computers to which they are connected, and so forth. If one of the computers in the peer network (say, for example, Alice's) has a file that matches the request, it transmits the file information (name, size, type, etc.) back through all the computers in the pathway towards Ted, where a list of files matching the search request appears on Ted's computer through the file-sharing program. Ted can then open a connection with Alice and download the file directly from Alice's computer. The file-sharing networks that result from the use of peer-to-peer technology are both extensive and complex. Figure 5 shows a map or topology of a Gnutella network whose connections were mapped by a network visualization tool. The map, created in December 2000, shows 1,026 nodes (computers connected to the network) and 3,752 edges (connections between those computers). This map is a snapshot showing a network in existence at a given moment; these networks change constantly as users join and depart them. One of the key features of many peer-to-peer technologies is their use of a virtual name space (VNS). A VNS dynamically associates user-created names with the Internet address of whatever Internet-connected computer users happen to be using when they log on. The VNS facilitates point-to-point interaction between individuals, because it removes the need for users and their computers to know the addresses and locations of other users; the VNS can, to a certain extent, preserve users' anonymity and provide information on whether a user is or is not connected to the Internet at a given moment. Peer-to-peer users thus may appear to be anonymous; they are not, however. Law enforcement agents may identify users' Internet addresses during the file-sharing process and obtain, under a court order, their identities from their Internet service providers.
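The flooding search described above can be sketched in the same spirit. This is an illustration under simplifying assumptions, not Gnutella's actual wire protocol: peers are plain objects, the query is forwarded recursively with a time-to-live (TTL) limit, and a "seen" set prevents loops.

```python
# Minimal sketch (hypothetical names) of flood-style search in a
# decentralized network: each peer forwards a query to its neighbors,
# a TTL bounds propagation, and hits travel back to the requester.

class Peer:
    def __init__(self, name, files):
        self.name, self.files, self.neighbors = name, set(files), []

    def query(self, term, ttl, seen=None):
        seen = seen if seen is not None else set()
        if self.name in seen or ttl < 0:
            return []
        seen.add(self.name)
        hits = [(f, self.name) for f in self.files if term in f]
        for n in self.neighbors:
            hits += n.query(term, ttl - 1, seen)  # forward to neighbors
        return hits

ted = Peer("ted", [])
carol = Peer("carol", [])
bob = Peer("bob", [])
alice = Peer("alice", ["report.pdf"])
ted.neighbors, carol.neighbors, bob.neighbors = [carol], [bob], [alice]

print(ted.query("report", ttl=3))  # [('report.pdf', 'alice')]; download is then direct
```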
The availability of child pornography has dramatically increased in recent years as it has migrated from printed material to the World Wide Web, becoming accessible through Web sites, chat rooms, newsgroups, and now the increasingly popular peer-to-peer file-sharing programs. These programs enable direct communication between users, allowing them to access each other's files and share digital music, images, and video. GAO was requested to determine the ease of access to child pornography on peer-to-peer networks; the risk of inadvertent exposure of juvenile users of peer-to-peer networks to pornography, including child pornography; and the extent of federal law enforcement resources available for combating child pornography on peer-to-peer networks. Today's testimony is based on GAO's report on the results of that work (GAO-03-351). Because child pornography cannot be accessed legally other than by law enforcement agencies, GAO worked with the Customs CyberSmuggling Center in performing searches: Customs downloaded and analyzed image files, and GAO performed analyses based on keywords and file names only. Child pornography is easily found and downloaded from peer-to-peer networks. In one search, using 12 keywords known to be associated with child pornography on the Internet, GAO identified 1,286 titles and file names, determining that 543 (about 42 percent) were associated with child pornography images. Of the remaining files, 34 percent were classified as adult pornography and 24 percent as nonpornographic. In another search using three keywords, a Customs analyst downloaded 341 images, of which 149 (about 44 percent) contained child pornography. These results are in accord with increased reports of child pornography on peer-to-peer networks; since it began tracking these in 2001, the National Center for Missing and Exploited Children has seen a more than fourfold increase—from 156 in 2001 to 757 in 2002. Although the numbers are as yet small by comparison with those for other sources (26,759 reports of child pornography on Web sites in 2002), the increase is significant. Juvenile users of peer-to-peer networks are at significant risk of inadvertent exposure to pornography, including child pornography. Searches on innocuous keywords likely to be used by juveniles (such as names of cartoon characters or celebrities) produced a high proportion of pornographic images: in our searches, the retrieved images included adult pornography (34 percent), cartoon pornography (14 percent), child erotica (7 percent), and child pornography (1 percent). While federal law enforcement agencies—including the FBI, Justice's Child Exploitation and Obscenity Section, and Customs—are devoting resources to combating child exploitation and child pornography in general, these agencies do not track the resources dedicated to specific technologies used to access and download child pornography on the Internet. Therefore, GAO was unable to quantify the resources devoted to investigating cases on peer-to-peer networks. According to law enforcement officials, however, as tips concerning child pornography on peer-to-peer networks escalate, law enforcement resources are increasingly being focused on this area.
The current MOBILE model, MOBILE5a, also known as the EPA mobile source emissions factor model, is a computer program that estimates the emissions of carbon monoxide, hydrocarbons, and nitrogen oxides for eight different types of gasoline-fueled and diesel highway motor vehicles. The model consists of an integrated collection of mathematical equations and assumptions about the emissions from vehicles manufactured from 1960 to 2020; generally, the cars produced in the 25 most recent model years are assumed to be in operation in any given calendar year. The first MOBILE model was made available for use in 1978; since that time, major updates and improvements to the model have been made as more has become known about the complexity of the factors affecting vehicle emissions, as measurement devices have improved, and as more data have been collected. According to agency officials, these improvements have resulted in the refinement of emissions estimates for evaporative emissions (such as occur when the fuel tank and fuel system heat up on a hot summer day); for the uncorrected in-use deterioration (wear and tear) that results from poor vehicle maintenance or tampering; and for other factors. In its simplest form, EPA's MOBILE model allows the model user to produce a number—an estimated quantity of emissions for the three pollutants of concern—by multiplying the estimated emissions per mile for an average urban trip by the estimated number of trip miles traveled in an area. Over the years, however, researchers have learned that vehicle emissions are highly complex. For example, EPA and others have indications today that, under certain conditions, as much as half of all hydrocarbon emissions from motor vehicles are evaporative emissions. To compensate for the complexities of these and other emissions-producing activities, EPA has periodically adjusted its basic formula—through the use of revised "correction factors"—to approximate vehicle exhaust emissions in a range of situations. In essence, the correction factor is a multiplier added to the basic formula (miles traveled times emissions rate per mile) to adjust the model's output to more closely reflect actual emissions. Except for California, EPA supplies the baseline emissions rates and correction factors for other model users—primarily state and local agencies—that typically supply their own estimates of the number of vehicle miles traveled, according to agency officials, as well as many other local area parameters, such as the average ambient temperature, vehicle classifications, and types of fuels sold. The MOBILE model exists because precise information about the emissions behavior of the approximately 200 million vehicles in use in the United States is not known, yet the need exists to estimate the impact of motor vehicles on air quality. For the states, the MOBILE model is a tool for constructing emissions inventories, creating control strategies, producing state implementation plans (SIP), and—subsequently—demonstrating control strategy effectiveness to EPA and others. For example, the states are allowed to vary a number of control strategy features, including the types of fuels used, the type of inspection and maintenance (I&M) testing network, the frequency of I&M testing, the ages and types of vehicles to be inspected, the stringency of the tailpipe test, the number and percent of inspected vehicles that may receive a waiver, and a host of other factors.
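A minimal sketch of the basic multiplicative structure described above may help. The rates and correction factors below are invented for illustration and are not EPA values; in practice, dozens of such factors interact, and the sketch shows only the multiplicative skeleton.

```python
# Illustrative sketch of the basic structure described above: emissions =
# base rate per mile x miles traveled, scaled by multiplicative correction
# factors. All numbers are made up for illustration; they are not EPA values.

def estimate_emissions(base_rate_g_per_mile, vmt, correction_factors):
    """Return estimated emissions in grams for one pollutant."""
    adjusted_rate = base_rate_g_per_mile
    for factor in correction_factors.values():
        adjusted_rate *= factor
    return adjusted_rate * vmt

factors = {"temperature": 1.4, "speed": 1.1, "fuel_rvp": 0.95}  # illustrative
grams = estimate_emissions(base_rate_g_per_mile=2.0,
                           vmt=1_000_000,  # local estimate of miles traveled
                           correction_factors=factors)
print(f"{grams / 1e6:.2f} metric tons of pollutant")
```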
The states may choose among a number of control options as long as the state’s control strategy achieves at least as many reductions as required by the Clean Air Act. For EPA, the MOBILE model is a tool for evaluating the adequacy of a state’s emissions inventory estimate, motor vehicle control strategies, and implementation plans. In essence, the model’s estimates provide EPA regulators with critical information that is used to evaluate the adequacy of a state’s program and the relative benefits of various policies to control motor vehicle emissions. Additionally, the model’s estimates can affect state policy decisions on issues such as the content and volatility of fuels, and some decisions on highway improvement projects. For example, the 1991 Intermodal Surface Transportation Efficiency Act (ISTEA) required, among other things, that state transportation improvement programs in certain nonattainment areas conform with the applicable state implementation plan developed under the Clean Air Act. Although the model’s original purpose was to support the development of mobile source emissions inventories, over the years its role and influence have been expanded considerably. Today, its estimates have a substantial influence not only on state and local programs but also on the automobile and oil industries, environmental and trade organizations, the public, and others. According to estimates derived partly from the MOBILE model, motor vehicles produce about 90 percent of the carbon monoxide, 50 percent of the hydrocarbons, and 30 percent of the nitrogen oxides emitted annually in major urban areas. EPA officials are examining 14 areas in the current MOBILE model in which major limitations exist. According to agency officials, it is their plan for each new version of the MOBILE model to reflect the most recent testing, data collection, and research that are available. They pointed out that EPA has updated the estimating capabilities of its MOBILE source emissions model 10 times since the model was first introduced in 1978. Table 1 briefly summarizes the areas in which major limitations exist, as well as EPA’s plans to address these limitations in its next revision to the model, MOBILE6, due to be issued in late 1998. (Additional information on these 14 areas is provided in app. I.) While acknowledging that some vehicle emissions-producing activities are not accounted for in the current model and that other emissions-producing activities are not adequately represented in the current model on the basis of the most recent information, EPA officials said that it is important to note that EPA has conducted and/or partially funded some of the studies that have led to the new data that now question the old estimates and assumptions. Additionally, they said that EPA has work under way to address most of these limitations. For example, since its formation in September 1995, an EPA-sponsored Federal Advisory Committee Act (FACA) subcommittee workgroup has identified 47 high-priority items for improvement in the current model, and EPA and workgroup representatives are examining these limitations. EPA is also in the process of developing new procedures for improving models in general, which are discussed below. Several model experts told us that it is the nature of models such as MOBILE5a to have limitations and to be in a continuous improvement mode. 
Agency officials agreed with this assessment, noting that the current model is better than any previous versions and reflects consistent growth in the quality and quantity of information available on very complex issues. Additionally, they pointed out that—through the FACA workgroup process—the revisions to the next MOBILE model, MOBILE6, have been undertaken with significantly increased openness and input from other government agencies, academia, the automobile and oil industries, environmental groups, and others. Our contacts with representatives of these groups confirmed this increased level of external stakeholders' involvement in preparation for MOBILE6. Several commended EPA's efforts in recent years to reach out to persons outside of the agency, and some noted that the outreach effort had given them a much greater appreciation of the model. While acknowledging that, historically, there have been few firm criteria on the processes that should be followed when creating or revising a model such as the MOBILE model, the executive director of EPA's Science Advisory Board (SAB) told us that the agency has a project under way to develop agencywide procedures for improving models. According to the project director, the Office of Research and Development (ORD) is planning a workshop in December 1997 to discuss the status of modeling across several media and other modeling issues, including the need for better agencywide modeling procedures. The project director also said that the Science Advisory Board's January 1989 resolution on models was one of the best documents available on the processes for creating and improving models in general. The SAB executive director and the ORD project director told us that in their opinion, there are specific actions—most of which were recommended by the SAB in its 1989 resolution—that, when followed, can enhance a model's predictive capabilities. Among other things, these actions include the following:

- Obtaining external stakeholders' input to ensure that the model's assumptions and formulas receive critical review by those not involved in the model's development.
- Documenting the implicit and explicit assumptions so that others can evaluate the basis of the formulas embedded in the model.
- Performing sensitivity analyses over key parameters to identify the most sensitive parameters and to establish the areas most in need of further research (illustrated in the sketch below).
- Verifying the adequacy of the model's mathematical code.
- Testing the model's predictions with laboratory and field data to confirm that the model generates results consistent with its underlying theory.
- Conducting peer review to enhance the quality, credibility, and acceptability of the model's applications.

The SAB executive director pointed out that because of continuing concerns with the quasi-regulatory use of agency models, EPA issued agencywide guidance in 1994 specifically calling for the peer review of such models. This directive was a follow-on to EPA's January 1993 agencywide policy requiring peer review of the scientific and technical work products used to support agency decisions. In September 1996 and March 1997, we reported and testified on the uneven implementation of EPA's peer review policy, including that the MOBILE model had not been peer reviewed. EPA agreed with our recommendations for educating staff and managers about the merits of and procedures for conducting peer review and for ensuring that all relevant products are considered for peer review.
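Of the actions listed above, a sensitivity analysis lends itself to a compact illustration. The sketch below shows the simplest one-at-a-time variant, using an invented toy model and parameter ranges; a real analysis of the MOBILE model would cover far more parameters and their interactions.

```python
# Minimal sketch of a one-at-a-time sensitivity analysis: vary each input
# over a plausible range while holding the others fixed, and rank inputs
# by the spread they induce in the model output. The toy model and the
# parameter ranges are illustrative only.

def toy_model(p):
    return p["rate"] * p["vmt"] * p["temp_factor"]

baseline = {"rate": 2.0, "vmt": 1e6, "temp_factor": 1.4}
ranges = {"rate": (1.0, 3.0), "vmt": (8e5, 1.2e6), "temp_factor": (1.0, 3.0)}

for name, (lo, hi) in ranges.items():
    outs = []
    for value in (lo, hi):
        params = dict(baseline, **{name: value})  # perturb one input only
        outs.append(toy_model(params))
    spread = max(outs) - min(outs)
    print(f"{name}: output spread {spread:.3g} g")
```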
To improve the implementation of peer review agencywide, the agency has set in motion a three-pronged approach, including peer review of the next MOBILE model. According to the ORD project director, it is too early in EPA's study to predict whether the agency may recommend that EPA require its offices to follow the other actions listed above when creating or revising models. According to Office of Mobile Sources (OMS) officials, they plan to carry out all six of the above activities as part of their improvement process for MOBILE6 and noted that some of these activities are already well under way, such as involving external stakeholders. For example, OMS held its first stakeholder meeting in June 1994, established a FACA mobile modeling workgroup in July 1995, and has held five meetings since that time to obtain external views by those not involved in the model's development, according to agency officials. OMS officials acknowledged that some of the key formulas in the current model have not been properly documented, that full-scale sensitivity analyses have not been performed since May 1990 (when they were performed for MOBILE4.1), that declining resources have resulted in fewer confirming data, and that the MOBILE model has not been peer reviewed. However, they said they have efforts under way or planned to address these and other modeling needs. For example, one of the recommendations of the FACA mobile modeling workgroup—made up of representatives from EPA, state and local agencies, industry, environmental groups, and academia—is that EPA more fully document the model's assumptions. Additionally, OMS plans to perform sensitivity analyses for the next version of the model and to have the studies supporting key changes for MOBILE6 peer reviewed. Also, OMS officials explained that as changes are proposed for each area of major limitations in the model, they plan to have the entire area peer reviewed. Agency officials explained that declining modeling resources have affected the pace of model improvements over the years, particularly their ability to confirm the model's estimates with large numbers of vehicle tests. For example, a study of the emissions characteristics of 100 passenger cars for both exhaust and evaporative emissions could cost from $1.4 million to $1.6 million, according to the agency's current estimates, and still not address the emissions impacts of road grade, air conditioning, or most fuel studies. Studies of heavy duty trucks and other larger vehicles would cost considerably more. As an illustration of the magnitude of the task compared with available resources, we obtained EPA's estimate of mobile modeling needs in response to the mobile source requirements envisioned for the 1990 Clean Air Act amendments. This June 1990 analysis indicated that the Office of Mobile Sources would need about $60 million for the modeling improvements known at that time. Since then, because of higher priority needs, OMS has been allocated only $21.8 million, cumulatively, for modeling improvements, although the research needs have increased. However, many more groups have become involved in non-EPA-funded vehicle emissions studies than in the past, allowing EPA to benefit from their studies and observations. Additionally, in some instances, researchers have sought EPA's input on study protocols beforehand and have shared the data collected with EPA afterwards.
While concerned about resources, OMS officials explained that making model improvements is an ongoing, continuous process—and one that will continue after MOBILE6 is issued in 1998. They pointed out that their goal is for each new version of the MOBILE model to reflect the latest testing, data collection, and research. While still not able to quantify the improvements, they said that in their opinion, each new version of the MOBILE model is better than its predecessor. We provided copies of a draft of this report to the Environmental Protection Agency for its review and comment. We obtained comments from EPA officials, including the Director of the EPA Office of Mobile Sources. EPA agreed with the overall message of the report but expressed concerns with imprecise language and suggested several changes to clarify information in the report. For example, because the FTP is a specific, codified test cycle, EPA suggested that limitations associated with its use as a basis for estimating emissions be described as "FTP parameters" rather than "FTP assumptions." EPA also suggested that we provide specific citations for four studies referred to in appendix I. We made the language changes suggested by EPA, including adding the citations. EPA was also concerned that the report did not clearly distinguish between limitations that may result in only trivial emissions impacts and those that could be significant. We believe that, where it was possible to do so, the report already quantified or qualitatively described the estimated emissions impact. Additionally, as noted in the section on the uncertainty limitation, one of the 14 limitations of the MOBILE model is that it does not currently have information about, or estimates of, the uncertainty associated with its emissions estimates. However, we agree that researchers viewed some limitations as having a more significant impact on emissions than others, and we have provided this view in the report. Appendix II contains the agency's overall written comments. We conducted our review from October 1996 through August 1997 in accordance with generally accepted government auditing standards. A detailed discussion of our scope and methodology is provided in appendix III. As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 15 days after the date of this letter. At that time, we will send copies to the Administrator of EPA and other interested parties. We will also make copies available to others upon request. Please call me at (202) 512-9692 if you or your staff have any questions. Major contributors to this report are listed in appendix IV. According to agency officials and model experts we contacted, it is the nature of models to have limitations and to be in a continuous improvement mode. As a result, the Environmental Protection Agency (EPA) has periodically updated the estimating capabilities of its mobile source emissions model to reflect new information as data have become available; MOBILE5a reflects the 10th major revision since the model was first introduced in 1978. The following sections provide additional information on 14 areas in which major limitations exist in EPA's current MOBILE model, MOBILE5a.
The underlying basis for EPA's original model, and all subsequent versions, has been the Federal Test Procedure (FTP), a laboratory dynamometer test used to certify new cars against new-car emissions standards. The FTP is roughly based on a typical urban area trip, complete with starts and stops, covering 7.5 miles in the Los Angeles urban area in the late 1960s. Such a trip is known as a driving cycle, which can be approximated on a dynamometer. Primarily because of the limitations in past dynamometers, the FTP driving cycle parameters stipulate, among other things, that vehicles average 19.6 miles per hour (mph) over the 7.5-mile trip, do not exceed 57 mph, accelerate gradually (not to exceed 3.3 mph/second), and travel on a flat surface. Additionally, EPA added 10 percent to the FTP dynamometer load in an attempt to simulate the effects of air conditioner usage. However, five of the major limitations in the current MOBILE model relate to FTP parameters. These five are (1) emissions from road grade, (2) emissions from air conditioner usage, (3) emissions at higher speeds, (4) emissions from aggressive driving, and (5) emissions immediately after engine start-up (cold-start). Agency officials have long recognized that some of these original FTP parameters were not representative of actual driving conditions and, to compensate for these limitations, have added correction factors to the MOBILE model to estimate what emissions would be for speeds in excess of 57 mph, for rapid acceleration beyond 3.3 mph/second, and for other scenarios, such as different temperatures or different fuels. For example, the impact of temperature on emissions can be substantial. Consequently, while FTP testing has been performed between 68 and 86 degrees Fahrenheit, EPA's MOBILE model used a correction factor to estimate that 1995 exhaust emissions of hydrocarbons would be 3 times greater at 25 degrees than at the FTP temperatures. However, MOBILE5a does not account for the impact of road grade—such as when a car climbs a hill—although some studies have indicated that both the increased load on the engine from climbing a hill and the decreased load that accompanies engine deceleration significantly increase vehicle emissions. According to agency officials, it is not expected that MOBILE6 will have adjustments for road grade, although such adjustments are being planned for MOBILE7. In addition to being uncertain about the amount of emissions related to road grade, EPA officials explained that obtaining the basic data from instrumented cars and chase cars to make such adjustments would be expensive at this time and that because of the cost and length of time required for these studies, the impact of road grade will probably not be addressed until MOBILE7. Additionally, an equally important consideration is that once these basic data on the effects of road grade on emissions are obtained, state and local agencies would have to plot road grades for millions of miles of roadways in their jurisdictions, also a costly and time-consuming activity. However, as global positioning technology for vehicles becomes less costly and more widely available and used, it is envisioned that the impacts of road grade emissions will be modeled in the future. According to EPA officials, the amount of data needed to estimate the impact of road grade is still years away. They also noted that time would be needed to develop consistent guidance on how state and local agencies should go about collecting these data.
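To make the FTP envelope concrete, the following sketch checks a hypothetical second-by-second speed trace against the cycle parameters cited above (57 mph top speed, 3.3 mph/second maximum acceleration). Any second flagged represents driving that the FTP cycle cannot represent; the sample trace is invented for illustration.

```python
# Illustrative check of a speed trace against the FTP cycle parameters
# cited above. One speed reading per second; values are invented.

MAX_SPEED_MPH = 57.0
MAX_ACCEL_MPH_PER_S = 3.3

def outside_ftp_envelope(speeds_mph):
    """Yield (second, reason) pairs for driving the FTP cannot represent."""
    for t in range(1, len(speeds_mph)):
        if speeds_mph[t] > MAX_SPEED_MPH:
            yield t, f"speed {speeds_mph[t]:.1f} mph exceeds {MAX_SPEED_MPH}"
        accel = speeds_mph[t] - speeds_mph[t - 1]
        if accel > MAX_ACCEL_MPH_PER_S:
            yield t, f"acceleration {accel:.1f} mph/s exceeds {MAX_ACCEL_MPH_PER_S}"

trace = [0, 5, 12, 20, 27, 33, 40, 48, 55, 61, 63]  # aggressive merge onto a freeway
for second, reason in outside_ftp_envelope(trace):
    print(f"t={second}s: {reason}")
```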
With respect to the representation of other emissions-producing activities not represented by the FTP, EPA officials have made periodic adjustments but recognize that the current model's correction factor adjustments may not reflect the most up-to-date information. For example, as noted in the EPA-FACA materials disseminated in March 1997, the 10-percent additional load intended to simulate the effect of air conditioner usage "is obsolete." More recent information indicates that nitrogen oxide emissions may be from 30 to 75 percent greater at some speeds than the current model's estimates when the air conditioner is used. Additionally, increases in speeds above 65 mph have left data gaps in the current model that are not adequately represented by existing correction factors. Others have recognized that the MOBILE model's estimates are inextricably tied to the FTP's parameters, and some studies have questioned the representativeness of key assumptions as they relate to the FTP. For example, the California Air Resources Board (CARB) commissioned Sierra Research in 1993 to develop an improved driving cycle—known as the Unified Driving Cycle—by using an instrumented "chase car" to better characterize typical urban driving patterns. Among other things, this driving cycle allows cars to travel up to 67.2 mph (versus 57 mph for the FTP), allows for acceleration at a rate of up to 6.9 mph/second (versus a maximum of 3.3 mph/second for the FTP), and uses an average speed of 24.6 mph (versus 19.6 for the FTP). Several model experts believe these parameters more closely approximate actual driving conditions today. According to a 1993 CARB study, the FTP may underestimate hydrocarbon, carbon monoxide, and nitrogen oxide emissions by 27, 68, and 17 percent, respectively. A 1993 study sponsored by EPA's FTP improvement project found that more than one-third of the trips studied had acceleration rates of more than 7 mph/second—more than double the FTP's maximum rate of 3.3 mph/second. Similarly, another 1993 EPA-sponsored study of instrumented vehicles in the Baltimore area found that 18 percent of total driving time in the area was composed of higher speeds and sharper accelerations than those represented on the FTP test. Also, a 1995 National Research Council report noted that aggressive driving with many accelerations resulted in hydrocarbon and carbon monoxide emissions being 14 and 15 times higher, respectively, than the emissions from average driving over the same 7-mile trip. According to the 1995 National Research Council report, "Virtually all motor vehicle testing has been based on a limited set of driving test cycles that inadequately represent current urban driving conditions." However, one model expert told us that it took more than 1 year to evaluate one component of the model and that collecting vehicle emissions data on large data sets is very costly. EPA officials pointed out that there is not a consistent definition of what constitutes aggressive driving, that aggressive driving happens only over a portion of a trip and is highly variable among drivers, and that the above observations are not representative of average driving patterns. The Congress has also recognized that the FTP may not reflect actual driving conditions.
Concerned about the gap between emissions as measured by the FTP and actual, real-world emissions, in 1990 the Congress added Section 206(h) to the Clean Air Act, which required EPA to review and revise the FTP within 18 months "to insure that vehicles are tested under circumstances which reflect the actual current driving conditions under which motor vehicles are used." EPA's October 1996 final rule on FTP revisions addressed four emissions-producing activities that, according to the rule's preamble, are not adequately represented in the current FTP. These emissions-producing activities include (1) aggressive driving behavior (such as high acceleration rates and high speeds), (2) rapid speed fluctuations (such as quick deceleration), (3) emissions immediately after engine start-up, a period when—because engines are designed to operate at higher temperatures—emissions typically bypass emissions controls for an estimated 3 to 5 minutes until the engine reaches normal operating temperature, and (4) actual air conditioner usage. EPA has not yet revised the MOBILE model to reflect the results of recent studies that have led to these FTP rule revisions but has work ongoing in all four areas. According to agency officials, although adjustments had been made to the MOBILE model for most of these activities prior to issuing the revised FTP rule, given the state of knowledge today, it appears that these activities may not be adequately represented in MOBILE5a. EPA officials told us that incorporating new estimates for these emissions-producing activities would be a high priority for MOBILE6, due to be issued in late 1998. For example, in March 1997 agency officials announced their plans to substantially revise the cold-start segment of the next MOBILE model, moving—for the first time ever—from an areawide, trip-based model to a roadway-specific model that also separately accounts for start-up emissions. Under this revised model, the magnitude of start-up emissions will not depend on vehicle speed or the driving cycle. Instead, EPA is proposing to allow model users to model the emissions impacts of cold starts on the basis of local areas' estimates of the number of such starts. Additionally, model users will be able to estimate emissions for three different types of roadways—freeways, arterials, and local roadways. In addition to concerns about the representativeness of the FTP parameters, the following issues were identified by model experts, workgroup participants, and/or stakeholders we contacted. In each instance, EPA officials agreed that the limitation is an area of concern and in most cases noted that the agency has ongoing work to address the issue, which is discussed below. One concern is the representation of high emitters in EPA's MOBILE model database, since the data now indicate that this group of vehicles accounts for a disproportionate amount of an area's overall emissions and that if this subset of the overall vehicle population is underrepresented, the impact on the emissions estimates can be substantial. For example, Sierra Research testified in 1995 that the worst polluting 22 percent of the vehicles produce about 50 percent of the emissions, and EPA estimates that, overall, from 10 to 30 percent of the vehicles cause the bulk of the pollution problems.
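A small worked example shows why underrepresenting high emitters matters. The emission rates below are invented, chosen only so that 20 percent of vehicles produce about half the emissions, roughly in line with the figures cited above; the 8 percent sampled share is likewise hypothetical.

```python
# Illustrative arithmetic: if high emitters avoid testing, the sampled
# fleet average understates the true average. All rates are invented.

def fleet_average(high_share, high_rate=4.0, normal_rate=1.0):
    """Grams per mile, averaged over high-emitting and normal vehicles."""
    return high_share * high_rate + (1 - high_share) * normal_rate

true_avg = fleet_average(high_share=0.20)     # here, top 20% emit half the total
sampled_avg = fleet_average(high_share=0.08)  # high emitters avoid testing

print(f"true fleet average:    {true_avg:.2f} g/mi")
print(f"sampled fleet average: {sampled_avg:.2f} g/mi")
print(f"understatement:        {1 - sampled_avg / true_avg:.0%}")
```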
As noted by the nonfederal co-chair of the FACA modeling workgroup in a 1993 study, "The general problem of failing to control for significant factors is compounded by the likelihood that a small fraction of the vehicle fleet are currently responsible for a large percentage of vehicle emissions." He told us that he still believes this to be one of the most significant issues facing EPA today, primarily because a very small number of vehicles can potentially be responsible for unusually high levels of pollution. This was also a significant issue for one of the Session Chairs for the Coordinating Research Council's (CRC) April 1997 Workshop, who noted that, in his opinion, this is the single greatest issue that EPA faces—how to identify and repair high emitters and properly represent them in the modeling database. The nonfederal co-chair also noted that if the occurrence of such vehicles is not properly represented in the model, the model's emissions estimates can be seriously flawed. He and others have concerns that the existing database may underrepresent high emitters because, among other reasons, the owners of such vehicles may avoid surrendering such vehicles for inspection and maintenance (I&M) and other testing at a higher rate than the normal population. As noted in a February 1996 study, "individuals with intentionally tampered or poorly maintained vehicles may be less likely to offer their vehicles for testing." Additionally, a 1993 CARB study of 186 vehicles indicated that high emitters could represent 16.8 percent of the California fleet, or nearly 5 times the assumption in the California model. EPA officials have some concerns with the study, its reliance on remote sensing devices, and its applicability to other states. Additionally, EPA officials believe that the larger data sets provided by their ongoing I&M lane testing in three other states properly identify most high emitters. However, they agreed that appropriate representation of high emitters is important to the model's emissions estimates and noted that this is also a high priority issue currently being addressed by EPA and one of the subgroups of the mobile modeling workgroup. A second concern is the current correction factors for lower volatility fuels and for oxygenated fuels. For example, a February 1997 report by Sierra Research found, among other things, that MOBILE5a likely underestimates the impact of low Reid vapor pressure (RVP) fuels on hydrocarbon and carbon monoxide emissions at temperatures above 75 degrees. The report notes that the correction factor for low RVP fuels has not changed since February 1989, when limited data on fuels with RVPs lower than 9.0 pounds per square inch (psi) caused EPA to place a constraint code in the model precluding users from being able to calculate reductions below this level. The limited data collected since that time indicate that reducing fuel RVP from 9.0 psi to 7.0 psi may reduce hydrocarbon and carbon monoxide exhaust emissions from 18 to 27 percent more than the model estimates, respectively. The January 1997 Auto/Oil Air Quality Improvement Research Program (AQIRP) Final Report suggested that reducing fuel volatility by 1 psi, from 9.0 to 8.0 psi, would reduce exhaust CO by 9 percent, exhaust HC by 4 percent, and total evaporative HC by 34 percent (NOx remained unchanged). The RVPs for most fuels used to be higher than 9.0 psi, but today they can go lower than 7.0 psi.
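The effect of the constraint code described above can be sketched as follows. The 4-percent-per-psi exhaust HC benefit is borrowed from the AQIRP figures cited above purely for illustration; the actual model's fuel corrections are far more elaborate.

```python
# Illustrative sketch of the constraint described above: a correction
# factor keyed to fuel RVP that the model clamps at 9.0 psi, so users
# receive no credit for cleaner, lower-RVP fuels. Values are illustrative.

HC_BENEFIT_PER_PSI = 0.04   # fractional exhaust HC reduction per 1.0 psi
BASELINE_RVP_PSI = 9.0

def hc_correction(rvp_psi, clamp=True):
    effective = max(rvp_psi, BASELINE_RVP_PSI) if clamp else rvp_psi
    return 1.0 - HC_BENEFIT_PER_PSI * (BASELINE_RVP_PSI - effective)

for rvp in (9.0, 8.0, 7.0):
    print(f"RVP {rvp} psi: clamped factor {hc_correction(rvp):.2f}, "
          f"unclamped factor {hc_correction(rvp, clamp=False):.2f}")
```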
Similarly, the model currently has no emissions reduction credits for low sulfur fuels (except, according to agency officials, for the lower sulfur effect in reformulated gasoline), although recent studies suggest that lowering the concentration of sulfur in fuel reduces the emissions of hydrocarbons and nitrogen oxides. EPA officials said that they plan to eliminate the constraint code in the next model, MOBILE6, which will allow users to receive credit for correction factors for fuels lower than 9.0 psi; however, they noted that work in this area is still ongoing and that the data on the emissions benefits of lower RVP fuels, as well as low sulfur fuels, are limited. In addition, a 1996 National Research Council study suggested that the model may overestimate the benefits of oxygenated fuels. For example, the study noted that EPA's MOBILE model "apparently overpredicts the oxygenated fuel effect by at least a factor of two" when the model's estimate of carbon monoxide reductions is compared with observed data. Similarly, a 1997 study of wintertime oxygenated fuels suggested that the observed oxygenated fuel benefits were much lower than the 20 to 30 percent estimated by EPA's model. EPA officials agreed that this is also an area that needs more study, but one which they plan to address in MOBILE6. A third concern is MOBILE5a's estimates of emissions system deterioration for vehicles with more than 50,000 odometer miles. This concern stems from studies that have questioned both the significantly higher rate of emissions system deterioration that EPA assumes once vehicles reach 50,000 odometer miles and the quantity of the data supporting it. For example, prior to MOBILE5, the model assumed that a vehicle with 100,000 miles emitted about 1.0 gram of hydrocarbons for each mile driven, or about 4 times the amount a new car would emit. However, EPA adjusted the deterioration rates for vehicles with more than 50,000 miles beginning with MOBILE5 (Dec. 1992) so that the MOBILE model's deterioration formula now calculates that the same car emits about 2.0 grams of hydrocarbons for each mile driven, or about 8 times the amount a new car would emit. EPA acknowledges that these adjustments were made on the basis of limited data and that only recently have 1990-technology vehicles become old enough to accurately assess their emissions deterioration. An October 1996 Sierra Research study of 75 vehicles with over 100,000 odometer miles questioned whether EPA had perhaps adjusted the formula too much, resulting in a model that currently overestimates the emissions from vehicles with 50,000 or more odometer miles. Among other things, the study found that EPA's current model estimated that 80 percent or more of these higher mileage vehicles (which, on average, had accumulated 123,900 odometer miles) would be high emitters, whereas the study found that only 32 percent of the vehicles fell into this category. Similarly, an April 1997 study of 227 vehicles (model years 1991 to 1993) with more than 50,000 odometer miles found no significant changes in emissions or deterioration as indicated by the current model. OMS officials said they used all the data that were available to them at the time (1991-1992) to estimate the deterioration rate of such vehicles and that researchers since then had had more time and more vehicles to test than were available to EPA.
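The disputed breakpoint can be illustrated with a piecewise deterioration curve. The slopes below are invented, chosen only so the example reproduces the roughly 2.0 grams per mile at 100,000 miles described above; they are not the model's actual coefficients.

```python
# Illustrative piecewise deterioration curve of the kind described above:
# a new-vehicle hydrocarbon rate plus a deterioration slope that steepens
# past 50,000 odometer miles. All coefficients are invented.

NEW_CAR_G_PER_MILE = 0.25
BREAKPOINT_MILES = 50_000
SLOPE_BEFORE = 0.5 / 50_000    # g/mi gained per odometer mile, illustrative
SLOPE_AFTER = 1.25 / 50_000    # steeper deterioration past the breakpoint

def hc_rate(odometer_miles):
    rate = NEW_CAR_G_PER_MILE + SLOPE_BEFORE * min(odometer_miles, BREAKPOINT_MILES)
    if odometer_miles > BREAKPOINT_MILES:
        rate += SLOPE_AFTER * (odometer_miles - BREAKPOINT_MILES)
    return rate

for miles in (0, 50_000, 100_000):
    print(f"{miles:>7,} miles: {hc_rate(miles):.2f} g/mi")  # 0.25, 0.75, 2.00
```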
OMS officials also said the model's correction factor for vehicles with 50,000 or more odometer miles would likely be lowered in MOBILE6, but they were uncertain at the time of our audit how much this emissions estimate would be reduced. A special subgroup of the FACA mobile model workgroup has been established to address this issue, and its work is still ongoing. A fourth concern is MOBILE5a's emissions credits and assumptions about inspection and maintenance programs. According to a 1995 National Research Council report, vehicle condition—whether the vehicle is well maintained, or has been tampered with or is malfunctioning—is more important than vehicle age in determining emissions. Among other issues, there is a need to update the basic data supporting I&M emissions reduction credits to reflect a growing population of vehicles in which rates of tampering may be diminishing, since tampering with newer vehicles adversely affects gas mileage and vehicle performance. Also, vehicle owners often replace older, carbureted vehicles with newer fuel-injected vehicles. Additionally, according to agency officials, the current model provides no additional I&M credits for vehicles equipped with on-board diagnostics (OBD), a requirement for all 1994 and later light duty vehicles and trucks. This vehicle computer technology alerts a car owner when an emissions system malfunctions, permitting quicker repairs than when such malfunctions are identified through an I&M testing program, and diagnostic trouble codes assist mechanics in making better repairs. Additionally, newer vehicles have emissions control system warranties of up to 8 years or 80,000 miles for two components (the on-board computer and catalytic converter), which should equate to less-polluting vehicles as a result of more durable emissions control systems and the requirement that manufacturers cover the costs of certain repairs. The current model does not provide specific credits for this growing population of OBD-equipped vehicles designed and believed to have less in-use deterioration than their predecessors. Also, more recent and more complete data are needed on the effectiveness of repairs in an I&M program, including the adequacy and durability of these repairs, actual participation rates, and the impact of remote sensing efforts. Except for remote sensing, the current model's estimates for these parameters are based on aging and limited data. For example, EPA has not performed any tampering surveys since 1992, and agency officials said that as a result of this lack of data, the tampering assumptions for MOBILE6 will remain unchanged from MOBILE5a. However, EPA's goal for MOBILE6 is to provide users with greater flexibility in designing I&M programs, as long as the state or local programs' estimated I&M credits can be substantiated with state or local data. With respect to newer vehicles equipped with on-board diagnostics, because of the limited data on the longer-term emissions impact of this technology, the agency has provided credit for these OBD-equipped vehicles equal to that provided for operating an enhanced I&M program. EPA officials said that, in addition to the states' own studies, the agency currently has I&M effectiveness studies being carried out in three states, but they were unsure whether sufficient data would be available in time to further revise the I&M assumptions in MOBILE6. A fifth concern is the proper representation of diurnal emissions.
Diurnal emissions refer only to hydrocarbons and are a form of evaporative emissions that occur when a vehicle is parked and the ambient temperature is fluctuating. For all previous versions of the MOBILE model, the data supporting these 8- to 24-hour emissions estimates were collected during a 1-hour period during which temperatures were forcibly increased over a range of temperatures. More recent testing over 24-hour and longer periods, without constraining temperature increases to a 1-hour period, indicates some differences from the MOBILE5a estimates for such evaporative emissions. Also, evaporative emissions from vehicles with fuel leaks are now believed to be so significant that, for MOBILE6, EPA plans to model these emissions separately from other evaporative emissions. According to EPA's most recent data, indications are that some vehicles with fuel leaks—similar to super emitters of exhaust/tailpipe emissions—can exceed the evaporative emissions of corresponding vehicles by one to two orders of magnitude. A 1996 automotive industry study of 150 vehicles found that 24-hour diurnal emissions ranged from 0.6 grams of HC to 777.2 grams of HC, with vehicles with liquid fuel leaks producing the vast majority of the emissions. EPA officials explained that while they may develop a separate category for some vehicles with significant fuel leaks, this does not necessarily mean there will be a significant overall increase in evaporative emissions estimates because, until more data are collected, there is no clear indication that these emissions were significantly underestimated in prior diurnal estimates. EPA has testing under way to determine how to better define this category of vehicles with significant fuel leaks and also plans tests to estimate their distribution within the current fleet and their rate of occurrence as a function of accumulated mileage, vehicle age, and/or vehicle technology, such as fuel tank design. Agency officials are uncertain at this time whether the correction factors for other evaporative emissions estimates will be revised for MOBILE6. A sixth concern is the adequacy of the data supporting MOBILE5a's assumptions about in-use emissions and I&M credits for heavy duty vehicles. According to EPA's June 1994 workshop on state needs, the in-use credits for heavy duty gasoline-powered vehicles are based on data approximately 20 years old, and there has been much change in the technology and emissions rates of these vehicles since that time. Still, the certification standards are higher for heavy duty vehicles than for their light duty counterparts, and they are generally older and driven more miles annually than their light duty counterparts. EPA officials said that testing heavy duty vehicles is difficult and quite expensive and agreed that there is a lack of recent data on the in-use emissions from this category of vehicles once they have been put in service. While some studies are under way, EPA does not envision at this time that significant changes in the in-use emissions rates for heavy duty vehicles will be included in MOBILE6. A seventh concern is the fleet characterization data in EPA's database, stemming from a concern that much of the data used for MOBILE5a are quite old. For example, MOBILE5a's estimates are based on the assumption that, on average, light duty vehicles are driven about 14,000 miles annually when new, decreasing to less than 10,000 miles annually after 10 years. More recent data from the U.S.
Department of Transportation indicate that passenger cars are driven about 2,000 miles more annually than currently estimated by EPA's MOBILE model, or nearly a 10-percent increase over MOBILE5a. According to a 1996 report, because of the linkage between odometer mileage and I&M program assumptions, a small change in mileage accumulation rates can result in a large impact on emissions estimations. EPA officials pointed out that the agency's guidance encourages model users to provide their own accumulated mileage estimates; thus, they said the default values for accumulated mileage in MOBILE5a would be a problem only in those cases in which model users fail to provide their own accumulated mileage estimates. Additionally, heavy duty gasoline-powered vehicles, which have higher certification standards than their light duty counterparts, are believed to make up a significantly larger percentage of the overall vehicle fleet than currently estimated by the MOBILE model. According to OMS, the agency plans to update the fleet characterization data for MOBILE6, including reflecting the increases in the heavy duty vehicle population. Similarly, another fleet characterization issue involves urban buses. For example, the current model does not have a separate classification for urban buses, although this is a growing vehicle category in many urban areas with unique operating characteristics, such as very frequent starts and stops. EPA officials explained that while buses have not been a separate category in MOBILE5a, EPA plans to expand the current list of vehicle categories from 8 to 20, one of which will be a separate category for buses. An eighth concern is the level of distinctions in roadway classifications. The MOBILE model was originally designed only for estimating areawide emissions on the basis of assumptions associated with an entire trip. It was not designed for making decisions for various roadway classifications, such as transportation improvement projects for urban interstate, rural arterial, or urban feeder/collector streets. Several model experts have pointed out that the same average travel speed—35 mph, for instance—would indicate smooth traffic flow on a local street but severe congestion on a freeway. EPA officials pointed out that MOBILE6 will allow users to separate start emissions from any linkage to the FTP driving cycle assumptions and will also provide different correction factors for speed and driving cycle for three different types of roadways—freeways, arterials, and local roadways. Additionally, while not planned for MOBILE6, the agency plans to partially fund ongoing research with the Department of Transportation to develop a modal emissions model that may one day allow users to model additional parameters, such as the relative emissions impact of sequencing traffic signals to enhance traffic flow. As noted above, an EPA-sponsored FACA mobile model workgroup made up of representatives from other federal, state, and local government agencies, academia, the automobile and oil industries, environmental groups, and others has been assisting EPA in improving the current model, and much of the research to fill data gaps and update aging databases was still ongoing at the time of our audit. Agency officials said that it is their plan for each new version of the MOBILE model to reflect the most recent testing, data collection, and research that are available.
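The planned separation of start-up and running emissions, with roadway-specific running rates, can be sketched as follows. All gram-per-start and gram-per-mile values, and the activity figures, are invented for illustration; they are not MOBILE6 rates.

```python
# Minimal sketch of the MOBILE6 approach described above: start-up
# emissions scale with the local count of engine starts, independent of
# speed, while running emissions are computed per roadway type.
# All rates and activity levels are invented for illustration.

GRAMS_PER_COLD_START = 4.0
RUNNING_G_PER_MILE = {"freeway": 0.6, "arterial": 0.9, "local": 1.2}

def daily_emissions(cold_starts, vmt_by_roadway):
    start_g = cold_starts * GRAMS_PER_COLD_START
    running_g = sum(RUNNING_G_PER_MILE[road] * miles
                    for road, miles in vmt_by_roadway.items())
    return start_g + running_g

total = daily_emissions(cold_starts=250_000,
                        vmt_by_roadway={"freeway": 3e6,
                                        "arterial": 2e6,
                                        "local": 1e6})
print(f"{total / 1e6:.1f} metric tons per day")
```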
Except for the impact of road grade on emissions and the in-use emissions credits for heavy duty vehicles, EPA officials said they plan to address each of the above limitations in the next revision, MOBILE6. However, as discussed below, the agency will not be able to quantify the uncertainty associated with its MOBILE model estimates, primarily because of the complexity and timing of factors affecting vehicle emissions and the high cost of vehicle studies. The challenge has been described this way: “Uncertainty is pervasive in all three emission modeling components: vehicle activity, activity-specific emission rates, and emission rate correction factors. Uncertainty is compounded in the methodologies used to develop the emission inventory. That is, vehicle activity uncertainty is combined with emission rate uncertainty that has already been combined with correction factor uncertainty.” Additionally, the limited work in this area indicates there are significant uncertainties associated with the current MOBILE model’s estimates. For example, one study found that “the range of uncertainty is huge” for a change in just one of the many variables contained in the MOBILE model—average vehicle speed. According to the study, most model users generally believe that increasing average vehicle speed from 30 mph to 50 mph will reduce vehicle emissions (because of less congested driving, with more driving at cruising speeds). The study noted that EPA’s MOBILE model estimates a 24-percent reduction in carbon monoxide emissions from increasing average vehicle speed from 30 mph to 50 mph. However, when a 95-percent confidence interval is applied, the change can range from a 72-percent decrease in carbon monoxide emissions to a 75-percent increase. Similarly, a 1996 study of EPA’s speed correction factors for vehicle exhaust emissions found substantial uncertainty in EPA’s current MOBILE model. Among other things, the study concluded that the MOBILE model may significantly underestimate carbon monoxide and hydrocarbon emissions—“by up to 3 orders of magnitude”—as the model relates to changes in vehicle speed. According to Office of Mobile Sources officials, EPA has been unable to quantify the model’s uncertainty primarily because of the cost and time associated with such quantification; the constantly changing universe of on-road vehicles with differing emissions control devices and levels; technological limitations in measurement devices; and the substantial naturally occurring variability in vehicle emissions (which leads to further data gaps and limitations). Agency officials pointed out that there is substantial variability across (1) vehicle types (such as model year, emissions control system, engine type), (2) vehicle operating conditions (cold start, load, speed), (3) the external environment (road grade, temperature, humidity, altitude), (4) vehicle fuels (reformulated, oxygenated, Reid vapor pressure), and (5) driver behavior (quick starts and stops, timing and frequency of trips). For these reasons, OMS officials told us they do not plan to develop uncertainty ranges for the next revision to the MOBILE model. Similar to the global positioning issue for road grade, they said significant technological advancement may be needed before it becomes cost-effective to address this issue. For example, future vehicles may have on-board computers with the ability to instantaneously record and later report emissions under different operating scenarios.
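The compounding of uncertainty described in the quotation above can be sketched with a small Monte Carlo simulation. The example below combines three moderate multiplicative uncertainties (vehicle activity, emission rate, and a speed correction factor whose central estimate is a 24-percent reduction); the lognormal distributions and spreads are hypothetical choices for illustration, not values drawn from EPA’s work or the studies cited.

import random

random.seed(1)  # fixed seed so the illustration is reproducible

def simulate(n=100_000):
    # Estimated change in emissions = activity error x emission rate error
    # x uncertain speed correction factor. Lognormal errors keep each
    # factor positive; all sigmas are hypothetical.
    draws = []
    for _ in range(n):
        activity = random.lognormvariate(0.0, 0.10)            # ~10% spread
        rate = random.lognormvariate(0.0, 0.20)                # ~20% spread
        correction = 0.76 * random.lognormvariate(0.0, 0.35)   # central 24% cut
        draws.append(activity * rate * correction)
    draws.sort()
    # 2.5th, 50th, and 97.5th percentiles of the combined estimate
    return [draws[int(n * q)] for q in (0.025, 0.5, 0.975)]

lo, mid, hi = simulate()
print(f"median change in emissions: {100 * (mid - 1):+.0f}%")
print(f"95% interval: {100 * (lo - 1):+.0f}% to {100 * (hi - 1):+.0f}%")

Even though each individual uncertainty is moderate, the combined 95-percent interval runs from a large decrease to a large increase, the same pattern the speed study reported.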
EPA officials pointed out that it is their plan, at some point in the future, to report uncertainty ranges for some model estimates. They said they are currently saving both qualitative and quantitative descriptors for the data being collected by and for the agency in order to perform these calculations in the future. The Chairman, Subcommittee on Oversight and Investigations, House Committee on Commerce, asked us to (1) describe the major limitations in the current version of EPA’s MOBILE model and (2) describe EPA’s process for improving both current and future versions of the MOBILE model. To describe the major limitations in the current model, we obtained and reviewed the MOBILE5a User’s Guide; EPA/OMS model documentation; the most recent published sensitivity analyses (MOBILE4.1, May 1990); relevant EPA guidance and memorandums on the MOBILE model; selected vehicle studies; the results of stakeholders meetings about the model; and the charter, objectives, minutes, and proceedings of the EPA-Federal Advisory Committee Act mobile modeling workgroup. We also searched five electronic databases for studies pertaining to the EPA MOBILE model, attended one mobile sources symposium where modeling issues were discussed, and attended the March 1997 FACA public workshop. We also obtained and discussed studies relating to potential MOBILE model limitations with selected representatives of state and local agencies, academia, industry, environmental groups, consulting firms, and other government agencies. Additionally, we interviewed officials and obtained documents from EPA’s Office of Mobile Sources in Ann Arbor, Michigan; the Office of Research and Development in Research Triangle Park, North Carolina, and Athens, Georgia; and the EPA Science Advisory Board in Washington, D.C. We also discussed model limitations with individuals identified to us by EPA or other representatives, as well as through our own efforts as noted above. To describe EPA’s process for improving both current and future versions of the MOBILE model, we obtained and discussed information from knowledgeable EPA/OMS air quality officials about ongoing activities and documented plans for making model revisions. We also discussed EPA’s past, ongoing, and planned actions with representatives of academia, industry, environmental groups, consulting firms, and other government agencies and observed one process—EPA’s open solicitation of input from external stakeholders not involved in the model’s development—at work. We also obtained documents and discussed EPA’s process for improving models in general with the Science Advisory Board and EPA’s Office of Research and Development. We conducted our review from October 1996 through August 1997 in accordance with generally accepted government auditing standards. Lawrence J. Dyckman, Associate Director; William F. McGee, Assistant Director; Judy K. Pagano, Senior Operations Research Analyst; James R. Beusse, Evaluator-in-Charge; Hamilton C. Greene, Jr., Staff Evaluator; DeAndrea M. Leach, Staff Evaluator.
Pursuant to a congressional request, GAO reviewed the Environmental Protection Agency’s (EPA) MOBILE series of complex computer models, which are used to estimate motor vehicle emissions, focusing on the model’s major limitations and EPA’s process for improving the current and future versions of the model. GAO noted that: (1) EPA and a group of stakeholders have identified 14 major limitations in the current MOBILE model; (2) some vehicle emissions-producing activities are not accounted for in the current model, and other emissions-producing activities may not be adequately represented on the basis of the most recent information; (3) according to EPA, much of this information has become available since MOBILE5a was released; (4) these limitations cause the model to underestimate vehicle emissions in some cases and overestimate them in others; (5) other studies indicate that some activities are inadequately represented in the model; (6) another study indicates that carbon monoxide and hydrocarbon emissions from higher mileage vehicles may be significantly less than the model’s estimates; (7) EPA plans to address most of these limitations in its next revision to the MOBILE model; however, according to agency officials, three of the limitations will probably not be addressed until later because of a combination of factors; (8) according to agency officials, these include the negligible impact on emissions inventory predictions, a relatively low priority ascribed by EPA and stakeholders, the cost and length of time required for these studies relative to the schedule for release of MOBILE6, and the emergence of new technologies that will make the improvements more feasible or cost effective in a few years; (9) EPA officials pointed out that they have updated the estimating capabilities of the MOBILE model 10 times since it was first introduced in 1978; (10) irrespective of these limitations, there are specific actions, most of which were recommended by the Science Advisory Board in its 1989 resolution, that, when followed, can enhance a model’s estimating capabilities; (11) among other things, these actions involve documenting the implicit and explicit assumptions that are the basis of the formulas contained in the model, obtaining external stakeholders’ input during the model’s development, and having the model peer reviewed before it is used; (12) EPA officials acknowledged that, primarily because of resource limitations, until recently such actions have been delayed or forgone; (13) however, EPA is developing the next model, MOBILE6, with significantly increased openness and input from other stakeholders; and (14) EPA also plans to carry out the actions recommended by the Science Advisory Board, such as peer review, as part of its program for developing MOBILE6, due to be issued in late 1998.
Under the legislation that established Advocacy in its current form in 1976, Advocacy’s duties are to serve as a focal point for small businesses’ concerns about the federal government’s policies and activities; advise small businesses on how to interact with the federal government; develop proposals for federal agencies on behalf of small businesses; represent the views and interests of small businesses before federal agencies; and enlist the cooperation and assistance of public and private agencies, businesses, and other organizations in disseminating information about the federal government’s programs and services that benefit small businesses. Since its establishment, a series of laws and executive orders has increased Advocacy’s roles and responsibilities. First, in 1980 the White House Conference on Small Business made recommendations that led directly to the passage of the RFA, which requires government agencies to consider the effects of their regulatory actions on small entities and, where possible, mitigate them. Under the RFA, agencies provide a small business impact analysis, known as an initial regulatory flexibility analysis, with every proposed rule published for notice and comment and a final regulatory flexibility analysis with every final rule. The Chief Counsel for Advocacy was charged with monitoring federal agencies’ compliance with the act and with submitting an annual report to Congress. Second, in 1996 the Small Business Regulatory Enforcement Fairness Act (SBREFA) provided for the judicial review of agency compliance with key sections of the RFA. It also established a requirement that EPA and OSHA convene panels whenever these agencies are developing a rule for which an initial regulatory flexibility analysis would be required (SBREFA panels). These panels consist of the agency, OIRA, and Advocacy. The 2010 Dodd-Frank Act added the newly created CFPB to the agencies required to convene SBREFA panels. The SBREFA panels meet with representatives of the affected small businesses to review the agencies’ draft proposed rules, identify alternative approaches to the rules, and provide insight on the anticipated impact of the rules on small entities. The panels issue a report, including any recommendations for minimizing the economic impact of the rule on small entities. Third, Advocacy’s responsibilities were further expanded by Executive Order 13272, which was issued in 2002. The order required each agency to establish procedures and policies to promote compliance with the RFA and to publish a response in the Federal Register to any written comment received from Advocacy on published rules. This requirement was codified by the Small Business Jobs Act of 2010. Executive Order 13272 also directs Advocacy to provide training to federal agencies on how to comply with the RFA. Until 2010, Advocacy’s budget was part of SBA’s. As of fiscal year 2010, however, Advocacy was given statutory line-item funding in a Treasury account separate from other SBA funding, with Congress setting the amount available for Advocacy’s direct costs. In fiscal year 2014, Advocacy’s enacted budget was $8.75 million. Its fiscal year 2015 budget request was $8.46 million. Of that amount, $7.75 million (92 percent) is to be used to fund compensation and benefits for Advocacy’s professional staff, with the balance of Advocacy’s budget split almost equally between external research and all other direct expenses. Advocacy is organized in five offices, as shown in figure 1.
Advocacy’s Office of Economic Research produces both internal and external research, which is publicly disseminated, on a variety of small business issues. More specialized research—requiring proprietary data or econometric analysis—typically is conducted by contractors (external). SBA handles the contracting process for Advocacy. The contracts generally last for 1 year. Advocacy economists act as the official contracting officers’ representatives (hereafter, contracting officers), overseeing and coordinating the work of the contractors. The contracting officers maintain a contract file for each external research product. Each year the Office of Economic Research solicits research topics from Advocacy staff and small business stakeholders, such as associations composed of small businesses. In addition, Congress requests studies, either formally (by putting the requirement into a law) or informally (through discussions with Advocacy staff). A final list of potential research is presented to the Chief Counsel before the beginning of the fiscal year, and the Chief Counsel chooses the topics for the year. In fiscal year 2013, Advocacy produced 22 research products on topics that included access to capital, small business exporters, entrepreneurship, and minority- and women-owned businesses. Other Advocacy research addresses the concerns highlighted in Advocacy’s authorizing statute, such as examining the role of small business in the American economy, assessing the effectiveness of existing federal subsidies and assistance programs for small businesses, and evaluating efforts to assist veteran-owned small business concerns. Advocacy follows a peer review process that was revised and formalized in 2013 and applies to all internal and external research products. Advocacy staff, including the Director of the Office of Economic Research, conduct peer reviews for internal products. For external research products, the peer review is initiated when Advocacy receives the first draft of the product, typically 6 to 8 months after the contract has been awarded, and generally includes external reviewers. In addition to the peer review process, all of Advocacy’s research products are required to pass Advocacy’s internal clearance process, which involves review by the Director of Economic Research, editors in Advocacy’s Office of Information, and individuals in Advocacy’s Office of Chief Counsel (Senior Advisor, Deputy Chief Counsel, and Chief Counsel). Advocacy’s Office of Interagency Affairs oversees the office’s regulatory activities, which aim to convey the views of small businesses on the impact of federal regulations and related costs. These activities generally fall into three categories—developing and issuing comment letters, convening information roundtables, and providing RFA training. Attorneys in this office (“regulatory attorneys”) are expected to become experts in the policy areas they oversee and to establish and maintain broad and effective networks of small business experts (e.g., trade associations) in their policy areas. The regulatory attorneys are encouraged to attend trade association and other industry meetings in order to maintain and expand those networks. In addition to maintaining working relationships with industry members and experts, the staff are to establish and maintain relationships with the regulatory staff within each agency who write the rules.
One of the primary ways Advocacy provides input to agencies that are issuing rules and regulations of concern to small businesses is through public comment letters. Our review of comment letters from fiscal years 2009 through 2013 found that they covered a wide range of rulemakings on issues such as food labeling, designations for critical habitat, and emission standards. Advocacy made a number of recommendations in its comment letters, such as creating an exemption for small businesses or strengthening the economic analyses required by the RFA. Advocacy also issued “nonrule” letters that involved agencies’ other activities, such as their scientific research. These letters constitute a small proportion of Advocacy’s comment letters. Table 1 below shows the number of comment letters on rulemakings by fiscal year. Regulatory attorneys also convene information-gathering roundtables to discuss the regulatory concerns of small businesses. Roundtables are convened on a regular basis in two policy areas—environment and labor safety—while events covering other areas are convened on an ad hoc basis, depending on which regulatory or rulemaking issues might be forthcoming. According to the regulatory attorneys, the most typical reason for convening a roundtable was an upcoming rule or legislation that would affect small businesses. The regulatory attorneys use the information gathered from the roundtables, together with other information, to inform Advocacy’s positions on the issues involved and to give Advocacy direction on proposed rules’ economic impacts and possible regulatory alternatives. The attorneys also told us roundtable discussions help them set priorities and broaden their knowledge base. The roundtables sometimes, but not always, resulted in a comment letter. Table 2 shows the number of roundtables by policy area for fiscal years 2009 through 2013. As discussed previously, Executive Order 13272 requires Advocacy to provide training to the agencies on how to comply with the RFA. According to data provided by Advocacy officials, in 2013, Advocacy staff provided training on the RFA to 159 officials at nine agencies and to 22 congressional staff. In addition to the formal training sessions, regulatory attorneys told us they were encouraged to interact regularly with the relevant rulemaking officials at the agencies as rules were developed in order to communicate the concerns of the small business advocates. Producing research products, both internally and externally, on issues of importance to small businesses is one of Advocacy’s primary responsibilities. However, we found that Advocacy’s quality review process lacked some key controls to substantiate the quality of the research and did not take steps to ensure that staff were adhering to existing controls. According to Advocacy officials, peer review is the main quality control over the research it disseminates. Advocacy’s current Chief Counsel recently directed the office to strengthen its peer review process with the intent of making it more rigorous and consistent. As a result, during the course of our review, Advocacy finalized a written peer review process. We found that the written guidance discussed the various levels of review for internal and external products as well as a process for initiating peer review. However, it did not specify how the economists who managed the research products were to identify peer reviewers.
Instead, Advocacy officials told us that they relied on their own expertise and professional contacts to identify appropriate peer reviewers and provide recommendations through the Director of Research to the Chief Counsel for Advocacy. The officials told us that within a specific subject matter there is often a small group of available peer reviewers, in part because their expertise is specialized and there are few alternatives. OMB’s peer review guidance calls on agencies to select peer reviewers with the appropriate knowledge and expertise and to take into account their independence and lack of conflicts of interest. Advocacy managers told us that, in practice, the economists recommend peer reviewers based on knowledge and experience in both subject matter and databases—as discussed in the OMB guidelines. However, they did not provide specific written guidance to the economists on how to identify peer reviewers. Federal internal control standards state that internal control activities help ensure that management’s directives are carried out; in implementing those standards, management is responsible for developing the detailed policies, procedures, and practices for its agency’s operations. With additional guidance, Advocacy would be in a better position to help ensure that the economists fully understand how best to identify qualified peer reviewers and carry out the Chief Counsel’s directive to improve the peer review process. Advocacy does not have consistent documentation showing whether a peer review occurred for all of its research products. Our review of 20 recent research products—10 internal and 10 external—revealed that 16 did not have documented peer reviews in the research files. According to interviews with Advocacy economists who managed the research, all 20 products underwent some form of peer review. The economists said that the type of review was commensurate with the methodological complexity of each product, among other factors. However, the Advocacy officials were unable to produce any documentation that peer reviews occurred for these 16 products. According to Advocacy’s peer review guidance, the economists who manage the research should document all correspondence pertaining to the peer review and maintain this documentation in the research file. In addition, federal internal control standards require that all transactions and significant events be documented and that the documentation be readily available. Advocacy officials do not have procedures to review the external research files to ensure that the peer reviews were documented. Furthermore, they noted that for some of the less in-depth internal research products—typically 2 to 5 pages—such documentation would be administratively burdensome. However, we note that the documentation could likewise be concise, such as a checklist or a form that reviewers sign, similar to the one currently used by Advocacy for its internal clearance process. Absent written documentation, Advocacy managers are limited in their ability to conduct oversight and ensure that this key quality control activity is happening. For example, for one of the internal research products we selected for review, the study author no longer worked at Advocacy, and therefore no one could tell us with any certainty whether the required peer review had occurred or who had participated in it.
Without adequate documentation of its peer reviews—a key internal control—Advocacy does not have an institutional record of its activities and cannot demonstrate that it is following its own peer review process. In addition, Advocacy has not consistently documented how peer reviewers’ comments were addressed by the authors of its external research products. Of the 10 external research files we reviewed, 4 had documentation that a peer review occurred, and 1 file included evidence that the peer reviewer comments were incorporated into the final report. Advocacy officials told us that the economists who managed external research consolidated the peer reviewers’ comments and forwarded to the author those that needed to be addressed, including methodological and data issues and other comments, but not those that might change the scope of the contracted research. They also told us that they did not typically maintain documentation showing which comments had been addressed and why, but included the final report in the research file. However, Advocacy’s peer review process states that the economist managing the research will analyze and incorporate, as needed, peer reviewers’ suggestions and maintain all related documentation in the research file. Because the economists are not keeping records and documenting that comments have been considered and addressed, management does not have an institutional record to provide reasonable assurance that its quality control process is being followed. While Advocacy has quality review policies for its peer review process, it does not have policies and procedures that reflect the federal information quality guidelines on retaining data for influential studies or taking other steps to substantiate the quality of information in such studies when it has not retained the data. Advocacy officials told us that they did not retain the original data or underlying computer codes, as required by the information quality guidelines, for three external studies on the costs of regulation. We focused on external studies estimating the costs of regulation because it is a key research area for Advocacy, according to its originating statute and the mission statement of its Office of Economic Research. The OMB Information Quality Guidelines require that all agencies producing and disseminating “influential statistical information” help ensure a high degree of transparency about data and methods to facilitate the reproducibility of such information by qualified third parties. The SBA guidelines implement this standard for transparency by requiring that the underlying data be stored and made available for public review for as long as the agency-disseminated information based on the data is valid. The guidelines also state that all formulas, calculations, matrixes, and assumptions used in processing the data should be available. Because Advocacy classified two of the regulatory cost studies as “influential” according to the OMB guidelines, those data should have been maintained. Advocacy officials said that they did not maintain the data or models for influential external research because there might be a cost associated with obtaining such data, which would raise the costs of the studies, possibly making them prohibitively expensive. However, in the case of two of the studies, the original data were from publicly available sources and involved a relatively small dataset, suggesting the cost would not have been prohibitive.
The OMB guidelines state that sufficient transparency—achieved in part by storing the relevant data—results in analyses that can be substantially reproduced. Not retaining the underlying information for these influential research papers makes it much more difficult to assess the quality of that work, including its objectivity—a key goal of the information quality guidelines. We also found that Advocacy staff had not taken additional steps, in the absence of the underlying data, to substantiate the quality of the regulatory cost estimates in two of the studies that it sponsored and disseminated. The OMB guidelines state that when data and methods are not retained and made available to the public because of other compelling interests such as privacy, trade secrets, intellectual property, and other confidentiality protections, the agency shall apply rigorous checks to the analytical results and document what checks were undertaken. Because Advocacy had not retained data on the two cost estimation studies that had been criticized, we interviewed senior Advocacy officials, including the Director of its Office of Economic Research, about the information and methodologies used in the studies. We asked them a set of questions related to criticisms of the methodologies, data, and models used in the studies that were identified in our evaluation and the work of other researchers. Advocacy staff declined to answer many of our questions and instead directed us to the authors, stating that they, not Advocacy economists, were the experts on the issues covered in the studies. However, the authors would not speak with us, stating that they were no longer contractually obligated to respond to our requests for information. During our discussions with Advocacy officials, they stated that the purpose of the studies was not to estimate the overall costs of regulations, but rather to estimate the disproportionate share borne by small businesses. They also noted that, as with all contract research, the external research reports contain a disclaimer indicating that the views presented did not necessarily represent those of Advocacy. In addition, Advocacy added language to the 2010 report’s cover page about the uncertainty of the authors’ estimates of the costs of regulation. Advocacy officials noted that the majority of the research it conducts is not classified “influential” according to the OMB and SBA guidelines, and that they have no plans to engage in such work in the near future. However, given Advocacy’s mission, it may conduct influential research in the future, as it has in the past, even if on a limited basis. We acknowledge that these reports may not necessarily be representative of all of Advocacy’s research efforts, but not substantiating the quality of the information in even one study could call into question the credibility of Advocacy’s research program. Thus, establishing policies and procedures that reflect the federal information quality guidelines, both for retaining data for influential studies and for taking additional steps to substantiate the quality of the information when such data are not retained because of certain compelling interests, would put Advocacy in a better position to provide reasonable assurance about the quality of its research program. The OMB Information Quality Guidelines require that agencies develop policies to ensure that managers can substantiate the quality of the information they disseminate.
The guidelines also discuss narrow circumstances under which an agency does not have to substantiate the quality of the information that resulted from a research project of one of its contractors or grantees. In those circumstances, the researcher is to make clear with an appropriate disclaimer that the views expressed in the research are his or her own and do not necessarily reflect those of the agency. However, OMB cautions that “if an agency, as an institution, disseminates information prepared by an outside party in a manner that reasonably suggests that the agency agrees with the information, this appearance of having the information represent agency views makes agency dissemination of the information subject to these guidelines.” As we noted previously, Advocacy placed a disclaimer on the two studies, but its actions indicate its agreement with the information in the studies. First, Advocacy made the two studies available on its website, where they remain available. Second, Advocacy’s disclaimer on the 2010 study noted that it contained information and analysis that was reviewed and edited by Advocacy. Advocacy’s description of its review role suggests that the agency contributed to the content, if not the conclusions, of the study. Finally, several Advocacy comment letters have cited the 2005 study’s regulatory cost estimates in support of their arguments. Because Advocacy’s actions raise, at the least, an appearance of agreement with the information contained in the studies, we believe that the office was required to substantiate the quality of the estimates of the economic costs contained in the studies. Advocacy has practices and procedures (“policies”) for its regulatory activities—comment letters and roundtables—but documentation of these key regulatory activities is inconsistent. Furthermore, while we determined that transparency and other requirements in FACA do not apply to Advocacy’s roundtables, Advocacy is not following its internal policies meant to ensure its roundtables are as open to the public as they could be. As a result, Advocacy cannot demonstrate that it is always fully meeting its mission to foster two-way communication between small businesses and federal policymakers. Our review of Advocacy’s key regulatory activities—developing and issuing comment letters and convening information roundtables—found that Advocacy staff are inconsistently maintaining documentation of key decisions and events. In April 2014, Advocacy provided us with a “practices and procedures guide” for the Office of Interagency Affairs, dated March 2014. The guide covers staff activities related to initiating comment letters and convening roundtables, among other things. Advocacy management told us that they update the guidance periodically, including amending it during the course of our review to make clear that the decision to issue a public letter or hold a roundtable rests with the Chief Counsel for Advocacy. However, our review found that the updated guide continued to lack policies for documenting key decisions and activities. For example, the guide stated that when regulatory attorneys decide to intervene in the rulemaking process, they must follow up, as appropriate, with the interested associations to ensure that Advocacy has sufficient information and data to make its case for intervening. But there is no policy to document these interactions. The following are other instances where we found a lack of policies to document key activities.
Small Business Input into Comment Letters. Advocacy does not have any policies requiring that the regulatory attorneys retain documentation showing which entities provided input into comment letters, and we found that the attorneys do not consistently do so. Our analysis of Advocacy’s comment letters found that about 57 percent of the letters referenced small business input or concerns. When we interviewed seven regulatory attorneys about how they generally developed comment letters, some told us that they maintained a record of the entities providing input for specific letters, while others said that they did not. Further, when we asked Advocacy staff for documentation on the sources of the small business input referred to in a nonrepresentative sample of 11 comment letters, they were unable to provide specific emails or notes of conversations. Reasons for Convening Roundtables. Advocacy’s practices and procedures guide states that the Chief Counsel must approve the proposed agenda, speakers, and discussion topics for all proposed roundtables before participants are invited. But the guide contains no policy requiring that this step be documented. We interviewed seven regulatory attorneys responsible for the roundtables and their management and were told that the attorneys set agendas and selected speakers based on their own assessment of the issues and, in some cases, with suggestions from small businesses or interested industry parties. Advocacy officials emphasized that they hold roundtables only when there is sufficient interest or need on the part of small businesses. Roundtable Discussions and Participants. Information gathered from the roundtables is used to inform Advocacy’s positions on issues related to small businesses and in comment letters, but Advocacy’s guidance contains no policies to document roundtable discussions. Our content analysis of Advocacy’s comment letters showed that 19 percent of them referred to roundtable discussions. However, staff did not routinely take and maintain notes of the discussions, according to the interviews we conducted with the regulatory attorneys. In addition, not all staff take attendance at the events. In most cases, Advocacy staff keep an “RSVP list” of those who have indicated that they will attend. However, some of the staff noted that the RSVP lists may not include all participants, including those participating by phone. As a result, it is difficult to determine the extent to which small businesses and related entities were represented at these events. Advocacy officials told us that they did not have guidance on maintaining documentation on the sources of input to comment letters or on roundtable discussions because they did not feel that setting such standards was required to fulfill their duties under Advocacy’s statute. Furthermore, they noted that, in the case of roundtables, keeping records in a manner that identified specific speakers would inhibit discussion and limit their ability to gain valuable input. They also cited logistical challenges in taking accurate attendance at larger events. We acknowledge that specific parties might not want to be publicly named, but federal standards for internal control call for agencies to document significant events.
Taking steps to balance the need for privacy (so individuals can speak freely) with a commitment to maintain a basic level of documentation of these events—that is, documenting which entities provided input into its comment letters and roundtables—could help Advocacy demonstrate that it is meeting its mission to represent the interests of small businesses. Key documents for Advocacy’s roundtables—agendas and presentation materials—are not made available to the public at large after the fact. The regulatory attorneys cultivate email lists of relevant stakeholders who are invited to roundtable events, and these lists are continually updated. It is Advocacy’s policy to add any interested parties to the invitation list, if asked, and several of the attorneys we interviewed said they did so. They also told us they made agendas and presentations available to any interested parties who requested them after the roundtable. However, the agendas and presentations are not posted to the website or made publicly available in any systematic way, and if small businesses and other interested parties do not know about the roundtables, they cannot request information from the events. Advocacy’s policies and procedures state that agendas and presentations should be posted on Advocacy’s website. Advocacy officials stated they had been unable to post roundtable materials to the website, which SBA maintains, because of difficulties in meeting certain readability and accessibility requirements in the Americans with Disabilities Act. However, a variety of other Advocacy reports and publications that also must meet these requirements are posted on its website, and Advocacy officials did not explain why information on the roundtables could not likewise be included. Making the roundtable materials available on its website would strengthen Advocacy’s ability to inform small businesses and other interested parties about its work to represent their interests to federal decision makers. In addition to evaluating whether Advocacy publicizes and conducts its roundtables in accordance with its internal policies and procedures, we also analyzed whether Advocacy’s roundtable groups constitute “advisory committees” subject to the public notice and other transparency requirements of FACA. We found that the roundtables are not advisory committees, and thus Advocacy is not required by law to follow these rules. We first determined that Advocacy is an “agency” covered by FACA. It is an “authority of the Government of the United States, whether or not it is within or subject to review by another agency.” In addition, Advocacy possesses the type of “substantial independent authority” required by the courts. While located within SBA, Advocacy is independent of SBA, and it has distinct statutory authorities and responsibilities, a separate statutory charter, and an appropriation account that is separate from the rest of SBA. Furthermore, its duties and authorities under statute and Executive Order are substantial, and its recommendations made in furtherance of small entities’ concerns must, by law, be given considerable weight by other agencies. While we found that Advocacy is a FACA “agency,” we concluded that its roundtable groups are not “advisory committees” as defined by the statute and interpreted by the courts and implementing regulations. A covered advisory committee is a panel, task force, or similar group created or used by an agency for the purpose of providing advice or recommendations on particular matters.
Participants’ input must be sought as a group, not as a collection of individuals. The formality and fixed structure of the group are also important factors in determining coverage under FACA. Advocacy does not seek roundtable participants’ input as a group, however; rather, attendees provide their individual perspectives on the agency rule or policy under discussion, they do not significantly interact with one another, and no attempt is made to reach a consensus. Advocacy’s roundtables also do not have the requisite organized or fixed structure; rather, as noted, Advocacy’s policy is to extend roundtable invitations to anyone who expresses a desire to attend. Finally, according to Advocacy, it is the agency itself, not the roundtable participants, that develops policy advice and recommendations, albeit based in part on the data and information provided by roundtable participants (as well as obtained elsewhere). Our detailed legal analysis of the applicability of FACA to Advocacy’s roundtables is contained in appendix III. Workforce planning presents some challenges for Advocacy, in part because the office is small and has a large number of positions that, according to Advocacy officials, typically turn over with each new administration. While the office currently does some workforce planning, its efforts do not include the long-term strategic plans that would help ensure that it maintains the expertise and skills needed to fulfill its mission. While Advocacy now has responsibility for developing its own budget, goals, and performance measures, our review of its strategic plan and goals did not find goals or objectives that discussed workforce issues such as staff development and succession planning. Advocacy officials told us that they used their organizational chart as the basis for their workforce planning efforts and discussed workforce issues with staff and managers at meetings as needed (see fig. 2). Advocacy officials said that because Advocacy was a small office with only 46 staff, the organizational chart and meetings to discuss staffing issues met their workforce planning needs. Advocacy officials told us that to develop staff they provided training to their regulatory attorneys and economists, had established a mentoring program for newer staff, and were implementing a knowledge-sharing database to help staff develop expertise in specialized policy areas. When we asked Advocacy officials about succession planning and their plans for addressing the departure of senior or experienced staff, the officials told us that most of the turnover in staff occurred during changes in administration. For example, according to Advocacy officials, the change of administration in 2009 resulted in the turnover of at least 15 of 20 senior officials. Among these were the 10 regional advocates, who are located in each of SBA’s regional offices across the country and make up the Office of Regional Affairs. Turnover tends to be low among the economists and regulatory attorneys, some of whom have been with the office for years. Advocacy officials said that because rulemaking often took so long, they generally had enough time to hire or realign any staff as needed. According to federal internal control standards, effective management of a workforce is essential to achieving program results.
Our body of work on workforce planning has demonstrated the importance of such planning and the need to develop long-term strategies—such as training and succession planning—for acquiring, developing, and retaining staff to achieve programmatic goals. OPM’s Human Capital Assessment and Accountability Framework states that agencies and offices with workforce planning are better able to manage their staff by, for example, ensuring that systematic processes are in place for identifying and addressing any gaps between current and future workforce needs. Further, OPM recommends succession planning to ensure continuity in leadership positions. In addition, workforce planning can help management determine the type of training and other strategies needed to address factors such as projected retirements and succession planning. In past work on succession planning, we have found that, in addition to focusing on replacing individuals, succession planning strategies can also strengthen current and future organizational capacity. Given the length of time some attorneys and economists have been with Advocacy, the loss of their expertise through retirement, among other things, could leave significant gaps in needed skills and knowledge, according to Advocacy officials. Yet Advocacy has not incorporated succession planning into its workforce planning efforts, such as its training and mentoring initiatives. Further, Advocacy officials told us that they discussed future staffing needs and various options for addressing them, including training, but these efforts are not documented in a manner that would ensure consistent implementation. However, ensuring that activities such as staff training are consistently implemented is particularly important when senior management can change significantly every 4 or 8 years. Without incorporating long-term succession planning into its workforce planning efforts, Advocacy is not in the best position to ensure that it has qualified staff to fill leadership and other key positions and a skilled workforce able to meet the demands of its mission on behalf of small businesses. Effective internal controls are critical for the Office of Advocacy if it is to achieve program outcomes and minimize operational problems. Recognizing this, Advocacy recently has taken action to improve some of its guidance and controls. However, Advocacy’s research, regulatory, and workforce planning functions could be improved by strengthening its internal controls as follows: Research activities. In its research operations, Advocacy has taken some initial steps toward establishing stronger control policies by, for example, formalizing its peer review process for internal and external research products. However, the guidance provided did not include information on how to select appropriate peer reviewers—the experts whom Advocacy relies on to assess the quality of the research that it disseminates. The guidance also lacks policies for documenting that a peer review occurred and how reviewer comments were addressed for its external research products. Such guidance would help ensure that Advocacy research staff fully understand how best to identify the most qualified peer reviewers and how to document and incorporate reviewer comments.
In addition, Advocacy did not follow the federal information quality guidelines for influential studies, which set out requirements for retaining data and, when there are compelling reasons not to retain the data, for taking additional steps to substantiate the quality of the information disseminated. Improving policies and procedures for its research activities would help support Advocacy’s mission to provide quality research on small business issues that decision makers and the public can rely on. Regulatory activities. Advocacy’s lack of documentation and transparency of the regulatory activities we reviewed made it difficult to validate Advocacy’s efforts to represent small businesses. Specifically, weaknesses exist in Advocacy’s documentation of both the sources of the small business input in comment letters and the views of small businesses discussed and conveyed at its roundtables. As a result, the extent to which small businesses’ views on regulations were being obtained and communicated is unknown. Improving its guidance to staff on its regulatory activities and emphasizing the importance of documentation would enable Advocacy to more effectively demonstrate to decision makers that it was obtaining and communicating the interests of small businesses. In addition, Advocacy is not following its internal policy to post materials from roundtables on its website. As a result, it is missing an opportunity to reach out more broadly to small businesses and other interested parties and increase the transparency of its activities. Workforce planning activities. While Advocacy’s workforce planning efforts help in managing its current staff, these efforts do not include any strategies to plan for succession, even though several staff have been with the office for many years and will eventually need to be replaced. Without this important element of workforce planning, Advocacy could be in a vulnerable position when critical staff leave the agency or staff face new demands. Although Advocacy is a relatively small office, having a skilled workforce is critical to meeting its mission. Succession planning would help ensure that Advocacy was better prepared to maintain qualified staff to conduct its research and regulatory activities in support of small businesses. To improve Advocacy’s system of internal control, and to help provide reasonable assurance that the office is meeting its mission, we recommend that the Chief Counsel of Advocacy take the following five actions: Strengthen the accountability of its research activities by taking two actions: enhance its peer review policies and procedures by including written guidance on selecting peer reviewers and on documenting all steps of the peer review process (whether a peer review occurred and how and to what extent peer reviewer comments were addressed), and develop policies and procedures that reflect the federal information quality guidelines on retaining data for influential studies and, when not retaining data, on taking additional steps to substantiate the quality of information disseminated. Strengthen the accountability of its regulatory activities by developing policies and procedures to ensure that key elements of that work—such as sources of input for comment letters and roundtable discussions—are consistently documented.
Coordinate with SBA officials who oversee website administration to comply with Advocacy’s roundtable policy to make information on the events—agendas and presentation materials—publicly available on its website so that its regulatory activities are more transparent to the public. Improve its workforce planning efforts, to be better prepared to meet its future workforce needs, by incorporating succession planning. We provided a draft of this report to the Office of Advocacy for its review and comment. In its written comments (reproduced in app. II), Advocacy agreed with our recommendations. Advocacy stated that for its research activities, its current effort to further formalize procedures for the peer review process will include steps for additional documentation and that it also plans to develop written guidelines for determining which research products are considered influential, which will clarify when Advocacy needs to take additional information quality steps. Advocacy also said that as it develops how it will disseminate information about its regulatory activities, it will incorporate approaches that are responsive to our recommendations. Finally, Advocacy agreed that workforce planning is important for ensuring that the office maintains the skills and resources needed to fulfill its mission and noted that the office is developing a Leadership Succession Plan in response to our recommendation. We are sending this report to the Office of Advocacy and interested congressional committees. The report also will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact Cindy Brown Barnes at (202) 512-8678 or brownbarnesc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. This report examines Advocacy’s (1) research activities; (2) regulatory activities, including the applicability of the Federal Advisory Committee Act (FACA) to Advocacy’s roundtables; and (3) workforce planning efforts. The scope of our work covered key activities conducted from fiscal year 2008 to fiscal year 2013, although in some cases, we broadened that scope to include fiscal years 2005 through 2008, as noted below. To understand Advocacy’s mission, operations, and participation in the rulemaking process, we reviewed relevant laws and regulations and Advocacy documents, including its annual reports to Congress on the Regulatory Flexibility Act (RFA), budget and strategic planning documents, and other publications. To assess Advocacy’s research activities, we determined how Advocacy staff chose research topics, how they conducted the research, and what controls they had in place to ensure quality products. We analyzed Advocacy’s research products produced in the 5 most recent fiscal years—2008 through 2012—and interviewed relevant Advocacy officials on the processes by which they assess the quality of its research products. To evaluate Advocacy’s research activities, we analyzed a nonprobability sample of research, which included 10 research products authored by Advocacy staff (internal) and 10 products produced by its contractors (external).
To select a sample of internal research products that best represented the variety of internal research, we reviewed the 58 research products issued from fiscal years 2008 through 2012 and categorized each as either “routine” (meaning published either quarterly or semi-annually and relying on the same underlying data set) or “nonroutine.” Within the routine category, we selected the most recently published product from each of the following routine publications: (1) Small Business Profiles of the States and Territories, (2) Small Business Quarterly Bulletin, (3) Quarterly Lending Bulletin, (4) Small Business Lending, and (5) The Small Business Economy. For the nonroutine products, we created five categories, such as “analysis” and “fact sheets.” We selected, from each of those five categories, the most recently published report. Then, for each of the 10 selected internal research products, we used a data collection instrument and interviewed the Advocacy staff who wrote the products in order to see to what extent the products adhered to the Information Quality Guidelines issued by the Office of Management and Budget (OMB) and the Small Business Administration (SBA) and to Advocacy’s internal quality review process (“peer review”). To assess the quality of the peer review used for Advocacy’s external research, we reviewed the contract files for the 10 most recently published external research products, as of year-end 2013, and compared how the work was produced, including Advocacy’s peer review process, against the OMB Peer Review Guidance, OMB and SBA information quality guidelines, and applicable federal internal control standards. The results from our reviews cannot be projected to all Advocacy studies, but they provide an indication of how Advocacy staff conducted or oversaw the research and what controls were in place to ensure quality products. We also assessed Advocacy’s contract studies that focused on the economic costs of regulation. We focused on external studies in these areas because they are key research areas for Advocacy, according to its originating statute and the mission statement of its Office of Economic Research. We assessed the economic costs studies against the OMB and SBA Information Quality Guidelines and examined Advocacy’s compliance with the related data retention policies therein. We also reviewed peer reviewers’ comments and other external reviews on the studies as part of our assessment. Lastly, we interviewed Advocacy officials to obtain information on steps they took to substantiate the quality of information for influential studies when data are not retained. To evaluate Advocacy’s regulatory activities, we assessed how and why Advocacy decides to issue comment letters and convene roundtables; the policies and practices in place that pertain to those activities; and how Advocacy staff solicit input from small businesses and other parties. We interviewed relevant staff, as described below; reviewed relevant policies and procedures; and analyzed comment letters produced and roundtables convened from fiscal years 2009 through 2013. Specifically, we performed a content analysis on the 181 comment letters issued during that period, using NVivo software, to analyze and categorize the content of the letters and the nature and source of the small business input provided.
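As a simplified illustration of the kind of coding such a content analysis involves (we used NVivo software; the sketch below is a generic keyword-matching approach with hypothetical categories and search terms, not the coding scheme we actually applied):

# Hypothetical keyword-based coding of comment letters; categories and
# terms are invented for illustration only.

CODES = {
    "small_business_input": ["small business", "small entities", "roundtable"],
    "rfa_analysis": ["regulatory flexibility analysis", "economic analysis"],
    "exemption_recommended": ["exemption", "exempt small"],
}

def code_letter(text):
    # Flag each category whose terms appear anywhere in the letter.
    lower = text.lower()
    return {code: any(term in lower for term in terms)
            for code, terms in CODES.items()}

letters = [
    "The proposed rule would burden small entities; we recommend an exemption.",
    "The initial regulatory flexibility analysis understates compliance costs.",
]
tally = {code: 0 for code in CODES}
for letter in letters:
    for code, hit in code_letter(letter).items():
        tally[code] += hit  # bool counts as 0 or 1

for code, count in tally.items():
    print(f"{code}: {count} of {len(letters)} letters ({100 * count / len(letters):.0f}%)")

A real content analysis would also involve human judgment in defining and applying codes; keyword matching only approximates that step.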
We also interviewed Advocacy staff who are responsible for rulemaking and related activities, weighting our choices of interviewees to reflect the distribution of comment letters by policy area; reviewed supporting documentation for comment letters to understand how Advocacy staff develop and issue them; and compared this information to federal internal control standards. Similarly, we reviewed available information on the 142 roundtables convened during the same 5-year period and interviewed the responsible Advocacy staff, weighting our choices of interviewees to reflect the distribution of roundtables by policy area. We reviewed Advocacy’s policies for roundtables to understand their origin, purpose, and documentation requirements and compared this information to federal internal control standards. To understand the perspective of those who attended the roundtables, we attended three of the events. In addition, we selected and interviewed a nonprobability sample of past participants from email lists provided by Advocacy’s regulatory attorneys. Specifically, we talked to the following representatives: five from industry associations that represent small and large businesses, one from a large corporation, and one from a nonprofit organization whose mission relates to policies or rules under consideration. We made our selections to include representatives from a variety of sources, and while the results from our interviews cannot be projected to all entities that interact with Advocacy, the information we gathered does provide insights into how the selected groups view Advocacy and its work in representing small businesses to federal policymakers. We also reviewed the training Advocacy provides to agency staff on how to comply with the RFA, as well as Advocacy’s interactions with agency rulemaking officials. In addition, we interviewed officials from the entities that interact with Advocacy in rulemaking—the Environmental Protection Agency (EPA), the Occupational Safety and Health Administration (OSHA), the Bureau of Consumer Financial Protection, also known as the Consumer Financial Protection Bureau (CFPB), and OMB’s Office of Information and Regulatory Affairs (OIRA). To analyze whether FACA (5 U.S.C. App. II) applies to Advocacy’s roundtables, we reviewed relevant statutes, case law, regulations, and guidance. In addition, we reviewed and considered Advocacy’s written views on the issue. See appendix III for more information on the legal analysis we conducted. Finally, we reviewed Advocacy’s workforce planning efforts. To understand Advocacy’s workforce planning, we reviewed Advocacy’s strategic goals and other planning and budget documents and interviewed senior management to determine what, if any, policies and procedures were in place related to workforce and succession planning. We assessed any such policies and procedures against applicable federal standards for internal control and the Office of Personnel Management’s (OPM) Human Capital Assessment and Accountability Framework (HCAAF). We also reviewed GAO reports on workforce and succession planning to gain insights about key practices and how agencies have used them. We conducted this performance audit from August 2013 to July 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives.
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Advocacy is an “agency” under FACA, first, because it meets the statutory definition of that term in FACA and in the Administrative Procedure Act (APA), which FACA incorporates by reference: Advocacy is an “authority of the Government of the United States, whether or not it is within or subject to review by another agency” and is not subject to a statutory exemption. In addition, Advocacy has the type of “substantial independent authority” and other indicia of “agency” status that courts interpreting the APA definition have required. However, Advocacy’s roundtable groups are not FACA “advisory committees” because they do not have an organized or fixed structure; there is no attempt to reach consensus about an agency regulation or policy, as viewpoints are sought from individual participants; and, according to Advocacy, participants provide only data and information rather than advice or policy recommendations. Cindy Brown Barnes (202-512-8678), brownbarnesc@gao.gov. In addition to the contact named above, Kay Kuhlman and Triana McNeil (Assistant Directors), Kristeen McLain (Analyst-in-Charge), Emily Chalmers, William Chatlos, John McGrail, Lauren Nunnally, Roberto Pinero, Jena Sinkfield, Jack Wang, and Carrie Watkins made key contributions to this report.
Congress created Advocacy within SBA in 1976 as an independent voice for small businesses. Questions have recently been raised about Advocacy's efforts to represent small businesses in regulatory activities and some of its research on small business issues. In light of these questions, GAO was asked to review Advocacy's activities. This report examines Advocacy's (1) research, (2) regulatory activities, and (3) workforce planning efforts. GAO analyzed Advocacy's research, comment letters, and other regulatory information for fiscal years 2008-2013; assessed Advocacy's policies and procedures against federal standards for internal control and information quality; and interviewed agency officials, and small business and industry representatives. GAO also analyzed the applicability of the Federal Advisory Committee Act to Advocacy roundtables. The Office of Advocacy (Advocacy) within the Small Business Administration (SBA) fulfills its mission by researching small business issues and providing input into federal rulemaking and related regulatory activities. However, GAO identified key areas in Advocacy's system of internal control that could be improved. Research. GAO found that Advocacy did not ensure that its staff monitored the quality of the information the office disseminated, as required. GAO reviewed 20 selected research products and found that in 16 cases a required quality review had not been documented. Advocacy recently established a review policy for its research, but it does not include procedures for selecting the reviewers or documenting that a review occurred and how reviewer comments are addressed. GAO also found that Advocacy staff had not followed federal information quality guidelines to retain data and could not substantiate the quality of information in two cost-estimation reports—a research product it has contracted for every 5 years. Without better controls over its quality review process and efforts to substantiate the information it disseminates, Advocacy cannot ensure the validity of one of its core activities—research in support of small businesses. Regulatory activities. Advocacy recently updated procedures for its regulatory activities, but these could be strengthened. GAO found the extent to which individual staff maintained records varied, in part, because the procedures lacked policies for documentation. For instance, the procedures state that when staff decide to intervene in the rulemaking process, they must follow up as appropriate with the interested groups to ensure that Advocacy has sufficient information and data to support its case. However, there is no policy that these interactions be documented. Federal internal control standards state that documentation and records should be maintained. If key procedures are not being documented, managers do not have an institutional record that agency goals and objectives in this area are being met. GAO also found that the Federal Advisory Committee Act's transparency and other requirements do not apply to Advocacy's meetings with stakeholders to get input on regulations (roundtables). Workforce planning. Advocacy's workforce efforts include training and mentoring for new staff, but do not include succession planning, which is recommended by the Office of Personnel Management. According to federal internal control standards, effective management of a workforce is essential to achieving program results. Officials told GAO that Advocacy was a small office and that additional staff were hired on an as-needed basis. 
However, some key staff have been with Advocacy for many years and their experience will be difficult to replace. If Advocacy does not incorporate succession planning strategies into its workforce planning efforts, it is at risk of not having the skills and expertise to meet its mission when key staff leave or retire. GAO makes several recommendations to improve the Office of Advocacy's controls over the quality of its research, the documentation of its regulatory activities, and workforce planning. In commenting on a draft of this report, Advocacy agreed with our recommendations and noted some steps it will take to address them.
Roughly half of all workers participate in an employer-sponsored retirement, or pension, plan. Private sector pension plans are classified as either defined benefit or defined contribution plans. Defined benefit plans promise to provide, generally, a fixed level of monthly retirement income that is based on salary, years of service, and age at retirement, regardless of how the plan’s investments perform. In contrast, benefits from defined contribution plans are based on the contributions to and the performance of the investments in individual accounts, which may fluctuate in value. The Employee Retirement Income Security Act of 1974 (ERISA) establishes the responsibilities of employee benefit plan decision makers and the requirements for disclosing and reporting plan fees. Typically, the plan sponsor is a fiduciary. A plan fiduciary includes a person who has discretionary authority or control over plan management or any authority or control over the management or disposition of plan assets. ERISA requires that plan sponsors responsible for managing employee benefit plans carry out their plan responsibilities prudently and solely in the interest of the plan’s participants and beneficiaries. Plan sponsors, as fiduciaries, are required to act on behalf of plan participants and their beneficiaries. These responsibilities include selecting and monitoring service providers to the plan; reporting plan information to the government and to participants; adhering to the plan’s investment policy statement and other plan documents (unless inconsistent with ERISA); identifying parties-in-interest to the plan and taking steps to monitor transactions with them; selecting the investment options the plan will offer and diversifying plan investments; and ensuring that the services provided to the plan are necessary and that the cost of those services is reasonable. Plan sponsors may receive some information on an investment option’s expenses, including management fees, distribution and/or service fees, and certain other fees, such as accounting and legal fees. These fees are usually disclosed in the fund’s prospectus or fund profile. To better enable the agency to effectively oversee 401(k) plan fees, we recommended in November 2006 that the Secretary of Labor require plan sponsors to report to Labor a summary of all fees that are paid out of plan assets or by participants. This summary should list fees by type, particularly investment fees that are indirectly incurred by participants. In addition to receiving information about investment fees, sponsors may receive information about expenses for administration and other aspects of plan operations. Sponsors can also have providers fill out the Form 5500, which is ultimately filed with Labor and includes information about the financial condition and operation of their plans. Generally, information on 401(k) expenses is reported on two sections of the Form 5500, Schedule A and Schedule C. However, our November 2006 study reported that the form is of little use to plan sponsors and others in terms of understanding the cost of a plan. While plan sponsors may receive information on investment and other fees, they may not be receiving information on certain relevant business arrangements. In November 2006, we reported that several opportunities exist for such business arrangements to go undisclosed, given the various parties involved in creating and administering 401(k) plans.
Problems may occur when pension consultants or other companies providing services to a plan also receive compensation from other service providers. Service providers may be steering plan sponsors toward investment products or services in which they have a direct business interest without disclosing such arrangements. In addition, plan sponsors, being unaware of these arrangements, are often unable to report information about them to Labor on Form 5500 Schedule C. Our November 2006 report also recommended that Congress consider amending ERISA to require that service providers disclose to plan sponsors the compensation that providers receive from other service providers. In our prior report on 401(k) fees, we found that the fee information that ERISA requires 401(k) plan sponsors to disclose is limited and does not provide participants with an easy way to compare investment options. All 401(k) plans are required to provide disclosures on plan operations, participant accounts, and the plan’s financial status. Although they often contain some information on fees, these documents are not required to disclose the fees borne by individual participants. Overall, we found that the information currently provided to participants does not offer a simple way to compare plan investment options and their fees and is provided in a piecemeal fashion. Additional fee disclosures are required for certain—but not all—plans in which participants direct their investments. ERISA requires disclosure of fee information to participants where plan sponsors seek liability protection from investment losses resulting from participants’ investment decisions. Such plans—known as 404(c) plans—are required to provide participants with a broad range of investment alternatives, descriptions of the risks and historical performance of such investment alternatives, and information about any transaction fees and expenses in connection with buying or selling interests in such alternatives. Upon request, 404(c) plans must also provide participants with, among other information, the expense ratio for each investment option. Plan sponsors may voluntarily provide participants with more information on fees than ERISA requires, according to industry professionals. For example, plan sponsors that do not elect 404(c) status often distribute prospectuses or fund profiles when employees become eligible for the plan, just as 404(c) sponsors do. Still, absent requirements to do so, some plan sponsors may not identify all the fees participants pay. Some participants may be able to make comparisons across investment options by piecing together the fees that they pay, but doing so requires an awareness of fees that most participants do not have. Assessing fees across investment options can be difficult for participants because the data are typically not presented in a single document that facilitates comparison. However, most 401(k) investment options have expense ratios that are provided in prospectuses or fund profiles and can be compared; based on industry data, expenses for the majority of 401(k) assets, which are in investment options such as mutual funds, can be expressed as an expense ratio. Plan sponsors, as fiduciaries, must consider plan fee information related to a broad range of functions.
According to Labor, ERISA requires that sponsors evaluate fee information associated with the investment options offered to participants and the providers they hire to perform plan services and consider the reasonableness of the expenses charged by the various providers of services to the plan. In addition, the sponsor must understand information concerning certain arrangements, such as when a service provider receives some share of its revenue from a third party. While industry professionals might agree about some of the information that sponsors need, they disagree about how much information is needed about individual expense components when a package of plan services, known as a bundled arrangement, is sold to a sponsor for a single price. Some pension plan associations and practitioners have made various suggestions to help plan sponsors collect meaningful information on expenses. Labor has also undertaken a number of activities related to the information on plan expenses that sponsors should consider. In order to carry out their duties, plan sponsors have an obligation under ERISA to prudently select and monitor plan investment options made available to the plan’s participants and beneficiaries and the persons providing services to the plan. Understanding and evaluating the fees and expenses associated with a plan’s investments and services are an important part of a fiduciary’s responsibility. Plan sponsors need to monitor the fees and expenses associated with the plan’s investment options and the services provided by outside vendors, including any revenue sharing arrangements, to determine whether the expenses continue to be reasonable for the services provided. Industry experts have suggested that plan sponsors be required to obtain complete information about investment options before adding them to the plan’s menu and obtain information concerning arrangements where a service provider receives some share of its revenue from a third party. A number of associations recently put together a list of service- and fee-related data elements they believe defined contribution plan sponsors and service providers should discuss when entering into agreements. The data elements include such information as payments received by plan service providers from affiliates in connection with services to the plan, float revenue, and investment-related consulting services. The list is meant as a reference tool that plan sponsors and providers can use to determine the extent to which a service provider receives compensation in connection with its services to the plan from other service providers or plan investment products (e.g., revenue sharing or finders’ fees). According to the associations that formulated this tool, the information can help plan sponsors evaluate any potential conflicts of interest that may arise in how fees are allocated among service providers. In our prior work, we noted that plan sponsors may not have information on arrangements among service providers that, according to Labor officials, could steer plan sponsors toward offering investment options that benefit service providers but may not be in the best interest of participants. For example, the Securities and Exchange Commission (SEC) released a report in May 2005 that raised questions about whether some pension consultants are fully disclosing potential conflicts of interest that may affect the objectivity of their advice.
In addition, specific fees that are considered to be “hidden” may mask the existence of a conflict of interest. Hidden fees are usually related to business arrangements in which one service provider to a 401(k) plan pays a third-party provider for services, such as record keeping, but does not disclose this compensation to the plan sponsor. The problem with hidden fees is not how much is being paid to the service provider but whether the plan sponsor knows which entity is receiving the compensation and whether the compensation fairly represents the value of the service being rendered. While there is general agreement that understanding the fees and expenses associated with a plan’s services is an important part of a fiduciary’s responsibility, pension professionals disagree about how much information is needed about the expense components of bundled fee arrangements. One representative speaking on behalf of five industry associations stated that he did not believe the requirement to “unbundle” bundled services and provide individual costs in many detailed categories was particularly helpful, because the information provided would not be very meaningful and the costs of providing this information would ultimately be passed on to plan participants through higher administrative fees. He also raised concerns about how a service provider would disclose component costs for services that are not offered outside a bundled contract. In addition, he said that posting such information could force public disclosure of proprietary information regarding contracts between service providers and plan sponsors. Finally, he stated that as long as they are fully informed of the services being provided, many plan sponsors might prefer reviewing aggregate costs so that they can compare and evaluate whether the overall fees are reasonable without analyzing each itemized fee. On the other hand, a representative of another pension association contended that it is possible, with very little cost, to develop an allocation methodology that provides a reasonable breakdown of fees for plan services. He believes that not disclosing component pricing provides a competitive advantage, enabling bundled providers to tell plan sponsors that they can offer certain retirement plan services for free—when fees are deducted from investment returns—while unbundled providers are required to disclose the fees for the same services. He further stated that any disclosure requirements should apply uniformly to all service providers. In his view, this would allow plan fiduciaries to assess the reasonableness of fees by comparison and thereby determine whether certain services are needed, which could lead to lower fees. Industry professionals have suggested that, before hiring a service provider or adding investment options to the plan’s menu, plan sponsors should obtain complete fee information, including information concerning arrangements in which a service provider receives some share of its revenue from a third party. Pension plan associations and practitioners have made various suggestions to help plan sponsors collect meaningful information on expenses. In 2004, the ERISA Advisory Council on Employee Welfare and Pension Benefit Plans created a Working Group to study retirement plan investment management fees and expenses as they were currently reported to Labor.
In addition to issues related to annual reporting, the Working Group was also interested in determining whether plan sponsors currently receive adequate data from service providers to both understand and report fees. In its final report, the Working Group made the following recommendations, among others, in an effort to further educate plan sponsors and fiduciaries about plan fees:
Plan sponsors should avoid entering transactions with vendors who refuse to disclose the amount and sources of all fees and compensation received in connection with the plan.
Plan sponsors should require plan providers to provide, prior to retention, a detailed written analysis of all fees and compensation (whether received directly or indirectly) for services to the plan.
Plan sponsors should obtain all information on fees and expenses as well as revenue sharing arrangements with each investment option. Plan sponsors should also determine the availability of other mutual funds or share classes within a mutual fund with lower revenue sharing arrangements prior to selecting an investment option.
Plan sponsors should require vendors to provide annual written statements with respect to all compensation, both direct and indirect, received by the provider in connection with its services to the plan.
Plan sponsors need to be aware that asset-based fees can grow as the size of the asset pool grows, regardless of whether any additional services are provided by the vendor; as a result, asset-based fees should be monitored periodically.
Plan sponsors should calculate the total plan costs annually.
More recently, in 2007, one witness before the ERISA Advisory Council recommended further that plan sponsors evaluate fees associated with three categories of services:
Net investment expenses would not only include investment expenses, such as the expense ratio of a mutual fund, but would also subtract any fees or commissions paid to a broker, consultant, or advisor for services in the categories below.
Administrative expenses would include specific charges for operational services, such as record keeping, administration, compliance, and communication, as well as revenue sharing or other payments from investments.
Advisory expenses would include amounts paid directly by the plan to consultants, advisors, or brokers, as well as indirect payments from sources such as investments or related companies.
In addition, some industry professionals believe that plan sponsors, as they monitor investment alternatives, should review investment results against appropriate benchmarks and compare their plans’ options to competing funds with similar investment goals. A benchmark is used to compare specific investment results with those of the market or economy. Industry professionals also noted that although there are appropriate benchmarks for mutual funds, benchmarks are not as readily available for other types of investment products. According to one industry professional we spoke with, plan sponsors do not have good benchmarks for assessing the reasonableness of investment options’ expense ratios. Only limited information is available, and a national database of funds and their expense ratios does not exist. He further stated that without such a source, selecting which funds constitute a meaningful comparison set is not an easy task and may be open to interpretation.
Disclosure encourages price competition, but in his opinion the lack of available information has made the 401(k) market relatively ineffective at fostering such competition. Labor, in its comments on our November 2006 report, stated that the agency has proposed a number of changes to the Form 5500, including changes that would expand the information required to be reported on the Schedule C. The changes are intended to assist plan sponsors in assessing the reasonableness of compensation paid for services and potential conflicts of interest that might affect those services. According to testimony earlier this month from the Assistant Secretary of Labor, the agency will be issuing a final regulation requiring additional public disclosure of fee and expense information on the Form 5500 within the next few weeks. This change will be helpful to plan sponsors as they look retrospectively at the preceding plan year. In addition, Labor was considering an amendment to its regulation under section 408(b)(2) of ERISA, expected to be issued this year. This amendment would help to ensure that plan sponsors have sufficient information on the compensation to be paid to the service provider, the revenue sharing compensation paid by the plan for the specific services, and any potential conflicts of interest that may exist on the part of the service provider. Labor’s ERISA Advisory Council currently has a working group focusing on fiduciary responsibility and revenue sharing. One area of focus is what service providers should be required to provide when they enter into a revenue sharing or rebate arrangement. Labor also provides a model form on its Web site specifically designed to assist plan fiduciaries and service providers in exchanging complete disclosures concerning the costs involved in service arrangements. Other associations and entities continue to develop model fee disclosure forms for plan sponsors. We are currently conducting work in the area of 401(k) plan sponsor practices, identifying how plan sponsors decide which features to include in the plans they establish and how plan sponsors oversee plan operations. Part of our work will consider how plan sponsors monitor the fees charged to their plans. We expect to issue a report in 2008. Before making informed decisions about their 401(k) plan investments, participants must first be made aware of the types of plan fees that they pay. For example, according to one nationwide survey, some participants do not even know that they pay plan fees. In 2006, we reported that investment fees constitute the majority of fees in 401(k) plans and are typically borne by participants. Most industry professionals agree that information about investment fees—such as the expense ratio, a fund’s operating fees as a percentage of its assets—is fundamental for plan participants. Participants also need to be aware of other types of fees—such as record-keeping fees and redemption fees or surrender charges imposed for changing or selling investments—to gain a more complete understanding of all the fees that can affect their account balances. Whether participants receive only basic expense ratio information or more detailed information on various fees, presenting the information in a clear, easily comparable format can help participants understand the content of the disclosure. Currently, most participants are responsible for directing their investments among the choices offered by their 401(k) plans, but may not be aware of the different fees that they pay.
According to industry professionals, participants are often unaware that they pay any fees associated with their 401(k) plan. In fact, studies have shown that 401(k) participants often lack the most basic knowledge—that there are fees associated with their plan. When asked in a recent nationwide survey whether they pay any fees for their 401(k) plan, 65 percent of 401(k) participants responded that they do not pay fees (see fig. 1). Seventeen percent said they do pay fees, and 18 percent stated that they do not know. When this same group was asked how much they pay in fees, 83 percent reported not knowing (see fig. 2). Although it is clear that participants require fee information to make informed decisions, it is not so clear what fee information is most relevant. In 2006, we reported that investment fees constitute the majority of fees in 401(k) plans and are typically borne by participants. Investment fees are, for example, fees charged by companies that manage a mutual fund for all services related to operating the fund. These fees pay for selecting a mutual fund’s portfolio of securities and managing the fund; marketing the fund and compensating brokers who sell the fund; and providing other shareholder services, such as distributing the fund prospectus. These fees are charged regardless of whether the mutual fund or other investment product, such as collective investment funds or group annuity contracts, is part of a 401(k) plan or purchased by individual investors in the retail market. As such, the fees are usually different for each investment option available to participants in a 401(k) plan. In our previous report, we recommended that Congress consider amending ERISA to require all sponsors of participant-directed plans to disclose fee information on 401(k) investment options to participants in a way that facilitates comparison among the options, such as via expense ratios. As mentioned earlier, there have been at least two bills recently introduced in Congress on the subject. Industry professionals have also suggested that comparing the expense ratio across investment options is the most effective way to compare options’ fees. They generally agree that an expense ratio provides valuable information that participants need and can be used to compare investment options because it includes investment fees, which constitute most of the total fees borne by participants. According to an industry official, the disclosure of expense ratios might include a general description of how expense ratios vary depending on the type and style of investment. For example, investment options with relatively high fees, such as actively managed funds, tend to have larger expense ratios than funds that are not actively managed. Also, investment options that are only available to institutional investors tend to have lower expense ratios than other types of funds. Most of the investment options offered in 401(k) plans have expense ratios that can be compared, but this information is not always provided to participants. In addition, investment options other than mutual funds may not be required to produce prospectuses that include expense ratios, but according to industry professionals, most options have expense ratio equivalents that investment industry professionals can identify. Industry professionals also believe that participants need information on other fees that are not included in the expense ratio but still affect their account balances.
For example, annual fees or per-transaction fees that can be deducted from account balances should be disclosed, such as administrative and record-keeping fees, participant loan origination fees, and annual loan charges. In addition, industry professionals also recommended that certain investment-specific fees be disclosed, including redemption fees or sales charges (fees that may be imposed by the provider as a result of changing investments in a given period); surrender charges (fees that may be imposed as a result of selling or withdrawing money from the investment within a given number of years after investing); and wrap fees (fees assessed on the total assets in a participant’s account). Some industry professionals recommended that plan participants be provided information on their returns net of all fees so that they can clearly see what their investments have earned after fees. Others recommended that information be disclosed that explains how the investment and administrative costs of the plan affect their investment returns and their overall retirement savings in the plan. These officials believed that such information would help participants understand that fees are an important factor to consider when directing their investments. Whether participants are provided with basic expense ratio information, more detailed information on various fees, or both, presenting the information in a clear, easily comparable format can assist participants in understanding the information disclosed. In our prior reports on helping the public understand Social Security information and on more effective disclosures for credit cards, we found that certain practices help people understand complicated information. These practices include language (writing information in clear language); layout (using straightforward layout and graphics); length (providing a short document); comparability (making options easy to compare in a single document); and distribution (offering a choice of paper or electronic distribution). In our prior work, we noted that Labor is considering the development of a new rule regarding the fee information required to be furnished to participants under its section 404(c) regulation. According to Labor officials, they are attempting to identify the critical information on fees that plan sponsors should disclose to participants of 404(c) plans (but not all participant-directed plans) and the best way to do so. The initiative is intended to explore what steps might be taken to ensure that participants have the information they need about their plan and available investment options, without imposing additional costs, given that such costs are likely to be charged against the individual accounts of participants and affect their retirement savings. The officials are currently considering what fee information should be provided to participants and what format would enable participants to easily compare the fees across a plan’s various investment options. Labor is also currently evaluating comments received from consumer groups, plan sponsors, service providers, and others as it develops its regulation. Labor also has ongoing efforts designed to help participants and plan sponsors understand the importance of plan fees and the effect of those fees on retirement savings.
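Because the compounding effect of fees on retirement savings can be hard to visualize, a small worked example may help. The following is a minimal sketch, not drawn from the GAO analysis; the starting balance, gross return, horizon, and the two expense ratios are all hypothetical values chosen only for illustration.

```python
# Minimal sketch: how an expense ratio compounds against retirement
# savings. All inputs are hypothetical and chosen only for illustration;
# they are not figures from the GAO testimony.

def ending_balance(start, gross_return, expense_ratio, years):
    """Grow a balance for `years` at the gross return net of annual fees."""
    balance = start
    for _ in range(years):
        balance *= 1 + gross_return - expense_ratio
    return balance

start = 20_000        # hypothetical current account balance, in dollars
gross_return = 0.07   # hypothetical average annual return before fees
years = 20            # hypothetical years until retirement

low = ending_balance(start, gross_return, 0.005, years)   # 0.5 percent fee
high = ending_balance(start, gross_return, 0.015, years)  # 1.5 percent fee

print(f"Balance with 0.5% expense ratio: ${low:,.0f}")
print(f"Balance with 1.5% expense ratio: ${high:,.0f}")
print(f"Savings lost to the extra percentage point: {(low - high) / low:.1%}")
```

With these assumed inputs, a single percentage point of additional annual fees reduces the ending balance by roughly 17 percent, which is the kind of effect that the net-of-fee and cost-impact disclosures described above are meant to make visible to participants.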
Labor has developed and makes available on its Web site a variety of educational materials specifically designed to help plan participants understand the complexities of the various fee and compensation arrangements involved in 401(k) plans. Its brochure titled A Look at 401(k) Plan Fees is targeted to participants and beneficiaries of 401(k) plans who are responsible for directing their own investments. Both 401(k) plan sponsors and participants need fee information in order to make the most informed decisions. For plan sponsors, requiring that certain information on fees be disclosed can help them understand what services they are paying for, who is benefiting, and whether their current arrangements are in the best interest of plan participants. Requiring plan sponsors to report more complete information to Labor on fees—including those paid out of plan assets by participants—would put the agency in a better position to effectively oversee 401(k) plans and, in doing so, to protect an increasing number of participants. The mere act of requiring such information may actually promote competition among the entities that provide services to plans and possibly reduce the fees service providers charge. For plan participants, given the volume of information that could be disclosed, determining the information that participants most need is key. At a minimum, providing information such as expense ratios or other investment-specific fee information could be the place to start. Also, making sure that the information is accessible in terms of language, layout, length, comparability, and distribution can help ensure that participants actively use the information disclosed. As participants become more sophisticated or demand more information, decisions can then be made about the type and format of additional fee information. Mr. Chairman, this concludes my prepared statement. I would be happy to respond to any questions you or other members of the committee may have at this time. For further information regarding this testimony, please contact Barbara D. Bovbjerg, Director, Education, Workforce, and Income Security Issues, at (202) 512-7215 or bovbjergb@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony include Tamara E. Cross, Assistant Director; Daniel F. Alspaugh; Monika R. Gomez; Matthew J. Saradjian; Susannah L. Compton; Craig H. Winslow; and Walter K. Vance. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Employers are increasingly moving away from traditional pension plans to what has become the most dominant and fastest growing type of plan, the 401(k). For 401(k) plan sponsors, understanding the fees being charged helps fulfill their fiduciary responsibility to act in the best interest of plan participants. Participants should consider fees as well as the historical performance and investment risk for each plan option when investing in a 401(k) plan because fees can significantly decrease retirement savings over the course of a career. GAO's prior work found that information on 401(k) fees is limited. GAO previously made recommendations to both Congress and the Department of Labor (Labor) on ways to improve the disclosure of fee information to plan participants and sponsors and reporting of fee information by sponsors to Labor. Both Labor and Congress now have efforts under way to ensure that both participants and sponsors receive the necessary fee information to make informed decisions. These efforts on the subject have generated significant debate. This testimony provides information on 401(k) plan fees that (1) sponsors need to carry out their responsibilities to the plan and (2) plan participants need to make informed investment decisions. To complete this statement, GAO relied on previous work and additional information from Labor and industry professionals regarding information about plan fees. Information on 401(k) plan fee disclosure serves different functions for plan sponsors and participants. Plan sponsors need to understand a broad range of information on expenses associated with their plans to fulfill their fiduciary responsibilities. Sponsors need information on expenses associated with the investment options that they offer to participants and the providers they hire to perform plan services. Such information would help them meet their fiduciary duty to determine if expenses are reasonable for the services provided. In addition, sponsors also need to understand the implication of certain business arrangements between service providers, such as revenue sharing. Despite some disagreements about how much information is needed, industry professionals have made various suggestions to help plan sponsors collect meaningful information on expenses. Labor has also undertaken a number of activities related to the information on plan fees that sponsors should consider. Participants need fee information to make informed decisions about their investments--primarily, whether to contribute to the plan and how to allocate their contributions among the investment options the plan sponsor has selected. However, many participants are not aware that they pay any fees, and those who are may not know how much they are paying. Most industry professionals agree that information about an investment option's relative risk, its historic performance, and the associated fees is fundamental for plan participants. Some industry professionals also believe that other fees that are also charged to participants should be understood, so that participants can clearly see the effect these fees can have on their account balances.
Southeast Asian nations have growing populations and economies. According to a 2014 study by the Asian Development Bank Institute, the combined populations of ASEAN countries are projected to reach 700 million by 2030. This study also found that ASEAN countries’ collective nominal GDP increased by an average of 5.7 percent annually from 1992 to 2013, despite the Asian financial crisis in 1997 and 1998 and the global financial slowdown in 2008 and 2009. According to International Monetary Fund (IMF) data, if ASEAN countries were a single nation, their collective 2014 GDP would represent the seventh largest economy in the world. However, a study by the Economic Research Institute for ASEAN and East Asia found that while the average poverty level in ASEAN countries declined from about 45 percent in 1990 to about 16 percent in 2010, about 95 million people in these countries in 2010 lived in poverty. In addition, a study by the Institute of Southeast Asian Studies has estimated that ASEAN countries would need about $600 billion from 2010 to 2020 to meet their infrastructure investment needs. ASEAN countries are located astride key sea lanes between the Persian Gulf and the economic centers of East Asia. The U.S. Energy Information Administration, based on a 2011 United Nations (UN) Conference on Trade and Development Review of Maritime Transport, estimated that more than half of the world’s annual merchant fleet tonnage passed through the Straits of Malacca, Sunda, and Lombok on to the South China Sea in 2010 and that about 15 million barrels of oil passed through the Strait of Malacca between Singapore and Indonesia each day in 2013. The South China Sea also has important fishing areas and is thought to be rich in oil and natural gas reserves. Figure 1 shows the names and locations of the ASEAN countries. The leaders of Indonesia, Malaysia, the Philippines, Singapore, and Thailand founded ASEAN in 1967 to accelerate economic growth, social progress, and cultural development in the region through joint projects and cooperation. In 1976, ASEAN countries agreed to the Treaty of Amity and Cooperation in Southeast Asia, which called for peaceful resolution of disputes and mutual respect for the independence, sovereignty, equality, territorial integrity, and national identity of all nations. Also in 1976, ASEAN established the ASEAN Secretariat, an administrative body with representatives from each member nation, in Jakarta, Indonesia, to provide greater efficiency in the coordination of ASEAN organizations and more effective implementation of projects and activities. ASEAN’s membership expanded to include Brunei in 1984, Vietnam in 1995, Laos and Burma in 1997, and Cambodia in 1999. ASEAN amended the Treaty of Amity and Cooperation in 1998 to permit states outside Southeast Asia to accede to the treaty with the approval of all 10 members. China acceded to the treaty in 2003. The United States acceded to the treaty in July 2009 and the following year became the first non-ASEAN country to establish a dedicated mission to ASEAN. In 2011, the United States appointed its first resident Ambassador to ASEAN. China established a mission to ASEAN in October 2012. Economic development in the first six ASEAN members—Brunei, Indonesia, Malaysia, the Philippines, Singapore, and Thailand, known as the ASEAN 6—is generally more advanced than in the newer members— Cambodia, Laos, Burma (Myanmar), and Vietnam, known as the CLMV countries (see table 1). 
The business environment also varies significantly across ASEAN countries. In Transparency International’s 2014 Corruption Perceptions Index, which measures perceived levels of public sector corruption among 175 countries and territories, ASEAN countries’ rankings ranged from 7 for Singapore to 156 for Cambodia and Burma. The World Bank’s 2014 ease of doing business ranking of 189 economies ranked Singapore at 1, as having the most business-friendly regulations, and Burma at 177, the lowest ranked ASEAN country. As stated in ASEAN’s Charter, ASEAN emphasizes noninterference in the domestic matters of its members and respect for their sovereignty and territorial integrity. According to U.S. officials, as well as officials at the ASEAN missions of other countries, the primary mode of decision making in ASEAN is consensus. Further, according to an ADB study, the Secretariat does not direct ASEAN but instead plays a coordinating and facilitating role. The Chair of ASEAN rotates annually among members; Burma served as the 2014 Chair and Malaysia as the 2015 Chair. Biannual ASEAN summit meetings are used to make decisions on key issues, provide policy guidance, and set the objectives of ASEAN. In 2003, ASEAN leaders adopted a plan to create an ASEAN Community by 2015, comprising security, sociocultural, and economic communities. According to the ASEAN Economic Community Blueprint (the Blueprint), the ASEAN Economic Community will be (1) a single market and production base that includes the free flow of goods, services, investment, capital, and skilled labor; (2) a highly competitive economic region that includes consumer protection and regional cooperation for intellectual property rights; (3) a region of equitable economic development based on inclusive growth and narrowing the development gap; and (4) a region fully integrated into the global economy that negotiates for FTAs and trade facilitation. ASEAN established a monitoring mechanism called the ASEAN Economic Community scorecard to report the progress of implementing various measures and to identify implementation gaps and challenges. According to a 2013 ADB study of progress in achieving the ASEAN Economic Community, a significant milestone of economic integration has been the substantial progress in tariff liberalization, but nontariff barriers, such as import bans and subsidies, new import procedures and requirements, and technical barriers, remain major impediments. In 2010, ASEAN adopted the Master Plan on ASEAN Connectivity, which envisioned enhancing intraregional connectivity to encourage trade, investment, tourism, people-to-people exchanges, and development. The plan identifies needed improvements to physical connectivity (e.g., roads, rail, power supply, and port facilities); institutional connectivity (e.g., mutual recognition arrangements for movement of skilled labor in the region and harmonization of rules, regulations, procedures, and standards); and people-to-people connectivity (e.g., reducing visa requirements and enhancing training opportunities and outreach). In late 2011, the President announced that the United States would rebalance its worldwide engagement to include a greater focus on the Asia-Pacific region. In April 2014, pursuant to a mandate in the Department of State, Foreign Operations and Related Programs Appropriations Act, 2014, State and USAID provided Congress with a strategy for the rebalance that states the following goals for the region: Deepen U.S.
security ties and alliances in the region to, among other things, deter and defend against threats to the region and the United States and resolve disputes peacefully. Advance U.S. prosperity and inclusive economic growth in the region through the expansion of U.S. exports and investment, increased regional economic integration, and sustainable development. Strengthen partnerships with China and emerging partners to, among other things, promote trade and economic growth. Shape an effective regional architecture of robust regional institutions and multilateral agreements to strengthen regional stability and economic growth. Support sustainable development, democracy, and human rights by advancing regional commitment to democratic development and human rights and addressing health threats and climate change. In addition, other U.S. agencies have stated goals specific to the region. USAID’s Regional Development Mission for Asia seeks, for example, to increase regional institutions’ ability to promote sustainable and inclusive regional growth. The Secretary of Commerce has stated that the economic dimension of the rebalance includes deepening trade and investment ties with existing partners; working multilaterally to build both the hard and soft infrastructure necessary for growth of emerging partners; and building new mechanisms to establish a level playing field for commerce across the region, such as the proposed Trans-Pacific Partnership (TPP) FTA. TPP is currently being negotiated by the Office of the United States Trade Representative (USTR). Other agencies also work to promote U.S. economic engagement in Southeast Asia. (See app. II for more information about selected U.S. entities’ roles and responsibilities and areas of involvement in ASEAN countries.) Chinese government leaders have stated goals regarding Southeast Asia that emphasize regional connectivity as well as mutual benefit and noninterference. For example, in 2013, Chinese President Xi Jinping spoke of increasing engagement and rapport with China’s neighbors to foster China’s development while benefitting countries on its periphery. Chinese leaders also regularly refer to the Five Principles of Peaceful Coexistence, originally espoused in a 1954 agreement between China and India: mutual respect for sovereignty and territorial integrity, mutual nonaggression, noninterference, equality and mutual benefit, and peaceful coexistence. Moreover, at the 16th ASEAN-China Summit in 2013, Premier Li Keqiang proposed a framework for cooperation between China and ASEAN, known as the 2 + 7 cooperation framework, with a stated goal of deepening cooperation by focusing on economic development and expanding mutual benefit. China has also articulated policy regarding Southeast Asia in two documents. China’s 2011-2015 Five Year Plan emphasizes developing infrastructure and other connections with neighboring countries, improving the quality of Chinese exports instead of export volume, increasing China’s level of investment in other countries in mutually beneficial ways, and increasing its influence in international economic and financial institutions. A 2014 Chinese government white paper on foreign aid states that China actively promotes cooperation between developing nations while seeking mutually beneficial results and respecting other countries’ development paths. 
According to the paper, China’s assistance to ASEAN countries has focused on narrowing development gaps within ASEAN by funding infrastructure construction, supporting agricultural development, and providing technical training. (Information Office of the State Council, The People’s Republic of China, China’s Foreign Aid (2014), Beijing, July 2014.) (App. II provides more information about selected Chinese agencies’ roles and responsibilities and areas of involvement in Southeast Asia.) China has claimed sovereignty over the islands of the South China Sea and has illustrated its claims by marking a “nine dash line” on its maps that encircles most of the South China Sea and its land features, including the Paracels and Spratlys. The ASEAN countries of Vietnam, Brunei, Malaysia, Indonesia, and the Philippines have competing claims with China and with each other. China has also conducted dredging operations to create new above-water features in the South China Sea, raising tensions between China and ASEAN countries with interests in the South China Sea. Chinese trade in goods with ASEAN countries has grown rapidly since 2001, surpassing U.S. trade in goods since 2007. Most of the goods that the United States and China trade with ASEAN countries are for industrial use. Although the United States and China are important trading partners of ASEAN countries, trade among ASEAN countries exceeds their trade with either the United States or China. Available data, though limited, indicate that the total value of U.S. trade in services with ASEAN countries is similar to the value of China’s but U.S. foreign direct investment (FDI) in ASEAN countries has exceeded China’s FDI. U.S. FDI was concentrated in four of the ASEAN 6 countries—Indonesia, Malaysia, Singapore, and Thailand—and more Chinese FDI was in the CLMV countries—Cambodia, Laos, Burma (Myanmar), and Vietnam. While Chinese and U.S. firms compete in ASEAN countries, available data indicate that U.S. firms compete more directly with firms from Europe, South Korea, and Japan. Chinese trade in goods with ASEAN countries has surpassed U.S. trade in goods and has grown as a share of China’s total trade in goods, while U.S. trade in goods with ASEAN countries has declined as a share of total U.S. trade in goods. Both U.S. and Chinese firms compete with many other countries for the ASEAN market. U.S. and Chinese trade in goods with ASEAN countries reflects these countries’ inclusion in global supply chains. In 2014, China’s total goods trade with ASEAN countries was more than double that of the United States: $480 billion for China and $220 billion for the United States. From 1994 through 2014, Chinese total trade in goods with ASEAN countries grew much more rapidly than U.S. total trade in goods with ASEAN countries. In 2007, China surpassed the United States in total goods trade with ASEAN countries, and the gap has continued to grow. Chinese imports from ASEAN countries surpassed U.S. imports from ASEAN countries in 2008. In 2014, China imported $208 billion of goods from ASEAN countries, and the United States imported $142 billion. Chinese exports to ASEAN countries surpassed U.S. exports in 2005. In 2014, China exported $272 billion of goods to ASEAN countries, and the United States exported $79 billion. After China acceded to the World Trade Organization (WTO) in 2001, Chinese goods trade increased worldwide, and at a faster rate in ASEAN countries. Chinese goods trade in ASEAN countries increased in nominal terms every year except 2009.
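The trade balances reported in the next paragraph follow arithmetically from the 2014 export and import flows above. As a check, here is a minimal sketch in Python using only the figures quoted in this section; small differences from the quoted totals reflect rounding in the source data.

```python
# 2014 goods trade with ASEAN countries, in billions of U.S. dollars,
# taken from the figures quoted in the surrounding text.
flows = {
    "China":         {"exports": 272, "imports": 208},
    "United States": {"exports": 79,  "imports": 142},
}

for partner, f in flows.items():
    total = f["exports"] + f["imports"]
    balance = f["exports"] - f["imports"]
    label = "surplus" if balance >= 0 else "deficit"
    print(f"{partner}: total goods trade ${total}B, {label} of ${abs(balance)}B")

# Output:
#   China: total goods trade $480B, surplus of $64B
#   United States: total goods trade $221B, deficit of $63B
```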
The United States has run a trade deficit with ASEAN countries in every year from 1994 through 2014, while China had a trade deficit or slight surplus with ASEAN countries from 1994 through 2011 before running a growing surplus from 2012 through 2014. In 2014, China had a goods trade surplus of $64 billion with ASEAN countries, while the United States had a goods trade deficit of $63 billion. Figure 2 shows the growth of U.S. and Chinese trade in goods with ASEAN countries from 1994 through 2014. The relative importance of trade in goods with ASEAN countries since 1994 has increased for China but decreased for the United States. From 1994 through 2014, Chinese trade in goods with ASEAN countries rose from 6.1 percent to 11.2 percent of total Chinese trade in goods. In contrast, during the same period, U.S. trade in goods with ASEAN countries fell from 7.2 percent to 5.5 percent of total U.S. trade in goods. Most of the goods that the United States and China trade with ASEAN countries are goods for industrial use, reflecting ASEAN countries’ integration into the U.S. and Chinese global supply chains. Total trade. U.S. and Chinese trade in industrial goods (capital and intermediate goods) with ASEAN countries represented, respectively, about 62 percent and 80 percent of their total trade with ASEAN countries in 2014, down from 71 percent and 87 percent in 2007. In 2014, consumer goods represented 25 percent of the United States’ total trade with ASEAN countries and 14 percent of China’s. The remaining goods were not classified according to these categories. Imports. Goods for industrial use represented 59 percent of U.S. imports from ASEAN countries and 88 percent of Chinese imports in 2014. Among industrial goods, microchips were the top U.S. and Chinese import from ASEAN countries. Consumer goods represented 35 percent of U.S. imports from ASEAN countries and 7 percent of Chinese imports in 2014. Exports. Goods for industrial use represented 67 percent of U.S. exports to ASEAN countries and 74 percent of Chinese exports to ASEAN countries in 2014. Among industrial goods, microchips were the top export to ASEAN countries from both the United States and China. Consumer goods represented 8 percent of U.S. exports to ASEAN countries and 20 percent of Chinese exports in 2014. Figure 3 shows U.S. and Chinese trade in goods with ASEAN countries by use in 2014. For more information about the composition of goods trade by ASEAN countries with the United States and China by type, see appendix III. ASEAN countries trade more with each other than with other trading partners. China is the largest outside trading partner of ASEAN countries, followed by the European Union (EU), Japan, and the United States. Exports. In 2013, ASEAN countries exported $330 billion in goods to other ASEAN countries, $115 billion in goods to the United States, and $153 billion in goods to China. The United States is the fifth largest market for ASEAN countries’ goods exports, behind other ASEAN countries, China, the EU, and Japan. From 2003 through 2013, the U.S. share of ASEAN exports fell from 15.4 percent to 9.1 percent, while China’s share of ASEAN exports increased from 6.4 percent to 12.2 percent. Imports. In 2013, ASEAN countries imported $278 billion in goods from other ASEAN countries, $92 billion from the United States, and $198 billion from China. The United States is the fifth largest source of ASEAN goods imports, behind other ASEAN countries, China, the EU, and Japan. 
From 2003 through 2013, the United States' share of ASEAN imports fell from 13.0 percent to 7.6 percent, while China's share of ASEAN imports increased from 8.2 percent to 16.2 percent. Figure 4 shows ASEAN countries' exports and imports of goods, by trading partner, in 2003, 2008, and 2013.

In 2011 through 2013, 7 of the 10 ASEAN countries exported more goods to China than to the United States; only Cambodia, the Philippines, and Vietnam exported more goods to the United States (see fig. 5). However, while most individual ASEAN countries traded more goods with China than with the United States, they exported the majority of their goods to countries other than China and the United States. In 2011 through 2013, 9 of the 10 ASEAN countries imported more goods from China than from the United States. Brunei was the only exception, importing slightly more goods from the United States (see fig. 6). Individual ASEAN countries imported goods from a diverse set of trading partners.

The United States' role relative to China's in ASEAN countries' goods and services trade may be greater when the amount of intermediate inputs to the traded goods and services is taken into account. For example, because of the nature of global supply chains, a consumer phone from a U.S. company may be assembled in China but incorporate components from Germany, Japan, South Korea, and other countries. Although components of a country's exports may originate in other countries, export data from the United Nations Commodity Trade database count the full value of the export for only the exporting country. Data from the Organisation for Economic Co-operation and Development (OECD) and the WTO attempt to account for the value added to a finished export by each contributing country.

Data from the United Nations, WTO, and the International Trade Centre, as well as our estimates, showed that ASEAN countries imported more in total goods and services from China in 2009 than from the United States. However, OECD-WTO data show that ASEAN countries imported $41 billion in value-added goods and services from China in 2009 and $52 billion from the United States. This suggests that Chinese exports contained a higher portion of components produced elsewhere than did U.S. exports. Similarly, some components of the goods and services that ASEAN countries exported to the United States and China were produced outside ASEAN countries. Data from the United Nations, WTO, and the International Trade Centre, as well as our estimates, showed that ASEAN countries exported more in total goods and services to China in 2009 than to the United States. However, according to OECD-WTO data, ASEAN countries exported $86 billion in value-added goods and services to the United States in 2009 and $47 billion to China.
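The gap between gross and value-added trade figures can be illustrated with a short sketch. The product and the shares below are hypothetical, chosen only to show how value-added accounting reattributes a gross export to the countries where the value originated:

```python
# Hypothetical example: a phone shipped from China with a gross (customs)
# value of $200. Gross trade statistics credit the full $200 to China;
# value-added statistics credit each contributing country.
gross_value = 200.0
value_added_shares = {  # hypothetical shares of the final value
    "China (assembly)": 0.15,
    "Germany (components)": 0.10,
    "Japan (components)": 0.25,
    "South Korea (components)": 0.30,
    "United States (design)": 0.20,
}

for origin, share in value_added_shares.items():
    print(f"{origin}: ${gross_value * share:.0f}")
# Under gross accounting, China exports $200; under value-added
# accounting, only $30 of that export is attributed to China.
```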
Although our analysis of U.S. and Chinese trade in services represents broad estimates rather than precise values, these data indicate that the United States and China traded approximately the same total value of services in 2011. Our calculations, based on data from the U.S. Bureau of Economic Analysis (BEA) and other sources, indicate that U.S. trade in services with ASEAN countries totaled approximately $37 billion in 2011. According to UN, WTO, and International Trade Centre estimates of Chinese trade in services for 2011, China's trade in services with ASEAN countries also totaled approximately $37 billion.

In 2011, the United States exported more services to ASEAN countries than it imported from them, and China imported more services from ASEAN countries than it exported to them.

U.S. and Chinese imports. We calculated that the United States imported approximately $14 billion in services from ASEAN countries in 2011 and approximately $16 billion in 2012. In 2012, the top categories for U.S. service imports from ASEAN countries were (1) business, professional, and technical services (approximately $6 billion) and (2) travel and passenger fares (approximately $5.7 billion). Estimates from the UN, the WTO, and the International Trade Centre on Chinese trade in services for 2011 indicated that China imported approximately $23 billion in services from ASEAN countries. China does not publish data on its service imports from ASEAN countries by category of service.

U.S. and Chinese exports. We calculated that the United States exported approximately $23 billion in services to ASEAN countries in 2011 and approximately $25 billion in 2012. In 2012, the top categories for U.S. service exports to ASEAN countries, totaling approximately $15 billion, were (1) business, professional, and technical services and (2) royalties and license fees. Estimates from the UN, the WTO, and the International Trade Centre on Chinese trade in services for 2011 indicated that China exported approximately $13 billion in services to ASEAN countries. China does not publish data on service exports to ASEAN countries by category of service.

Both U.S. and Chinese trade in services with ASEAN countries are small in value compared with their goods trade. In 2011, total U.S.-ASEAN services trade was 19 percent of the value of U.S.-ASEAN goods trade, while the estimated total China-ASEAN services trade was 10 percent of the value of China-ASEAN goods trade.

Data on FDI in ASEAN countries from the United States and China have limitations, in that U.S. and Chinese FDI data may not accurately reflect the countries to which U.S. and Chinese FDI ultimately flows. However, available data show that from 2007 through 2012, U.S. FDI flows to ASEAN countries totaled about $96 billion, exceeding China's reported FDI of about $23 billion. Nonetheless, annual Chinese FDI flows increased each year during this period, from $1 billion in 2007 to $6 billion in 2012 in nominal terms (see fig. 7).

According to BEA, U.S. FDI in ASEAN countries in 2003 through 2013 was concentrated in holding companies, which accounted for about half of total U.S. FDI. Manufacturing, especially computer and electronic products manufacturing, was the second largest category of U.S. FDI.

From 2007 through 2012, U.S. investment was concentrated in several of the ASEAN 6 countries, whereas a larger share of Chinese investment was in the CLMV countries (see fig. 8). Almost all U.S. FDI flows went to four of the ASEAN 6 countries—Indonesia, Malaysia, Singapore, and Thailand—and U.S. FDI flows exceeded China's FDI flows in these countries. U.S. FDI flows to the four ASEAN 6 countries represented 99 percent of all U.S. FDI flows to ASEAN countries during this period. However, Chinese FDI flows exceeded U.S. FDI flows for the four CLMV countries. Chinese FDI flows to these four countries totaled $7.8 billion for 2007 through 2012, whereas U.S. FDI flows to those countries totaled around $0.5 billion. China's FDI in CLMV countries represented 35 percent of Chinese FDI in ASEAN countries in this time period. For both the United States and China, the largest FDI flows were to Singapore.
Singapore is a regional financial hub; therefore, according to BEA, a portion of FDI in Singapore is likely to have been reinvested in other countries, which may include other ASEAN countries.

Data on competition between U.S. and Chinese firms in ASEAN countries are limited but indicate that the United States competes more often with firms from Europe, South Korea, and Japan than with Chinese firms. In addition, U.S. firms tend to obtain World Bank and ADB contracts in different sectors than Chinese firms.

From 2001 through 2014, U.S. exports of goods to ASEAN countries were more similar to Japanese and EU exports than to Chinese exports, suggesting that U.S. firms are more likely to compete directly with Japanese and EU firms than with Chinese firms for exports to ASEAN countries. To assess the extent of the similarity of exports, we calculated a commonly used index to compare U.S., Chinese, and other countries' exports to ASEAN countries. From 2001 through 2014, U.S. exports to ASEAN countries were consistently more similar to EU and Japanese exports than to Chinese exports (see fig. 9). However, during this period, Chinese exports to ASEAN countries grew more similar to U.S. exports, while Japanese exports grew less similar to U.S. exports. This is consistent with the pattern for Chinese exports globally. According to an IMF study, China has traditionally competed with other Asian countries, and although large differences remain, China's exports are becoming more similar to those of advanced economies, such as Germany and the United States. China's export similarity index with the United States grew from 0.248 in 1995 to 0.333 in 2008, according to the IMF study.
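The "commonly used index" referred to here, generally known as the Finger-Kreinin export similarity index, is the sum over all goods categories of the smaller of the two countries' export shares; the methodology discussion later in this report describes the same calculation. A minimal Python sketch with hypothetical export values:

```python
def export_similarity(exports_a, exports_b):
    """Export similarity index: for each goods category, take the minimum
    of the two countries' shares of their own total exports to the same
    market, then sum. 0 means no overlap; 1 means identical composition."""
    total_a, total_b = sum(exports_a.values()), sum(exports_b.values())
    categories = set(exports_a) | set(exports_b)
    return sum(
        min(exports_a.get(g, 0) / total_a, exports_b.get(g, 0) / total_b)
        for g in categories
    )

# Hypothetical exports to ASEAN countries, by product category
us_exports = {"microchips": 50, "aircraft": 30, "machinery": 20}
china_exports = {"microchips": 40, "apparel": 40, "machinery": 20}
print(round(export_similarity(us_exports, china_exports), 3))  # 0.6
```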
We identified three data sources that provide some information on individual contracts competed for, or obtained by, U.S. and Chinese firms. These data indicate that in ASEAN countries, U.S. firms compete more often with firms from countries other than China and tend to be awarded contracts in different sectors. We analyzed data for contracts funded by the World Bank and ADB as well as data from Commerce's Advocacy Center on host-government contracts. The World Bank and ADB track the awardees of their contracts, as well as contract size and sector. The Advocacy Center tracks contract competitors and awardees for the U.S. firms that apply for and receive its support, as well as the size and sector of the contract. Although these data represent a small share of activity in the region, they provide insights into the degree of competition between U.S. and Chinese firms for the projects represented.

From 2000 through 2014, both U.S. and Chinese firms were awarded hundreds of World Bank-financed contracts in ASEAN countries, but they tended to obtain contracts in different sectors (see fig. 10). Excluding contracts that went to domestic firms, our analysis of World Bank data showed that Chinese firms were awarded a higher dollar value of World Bank contracts in ASEAN countries ($781 million) than firms from any other country. Civil works projects accounted for about 73 percent of the value of World Bank contracts that Chinese firms were awarded in ASEAN countries, and contracts for consulting services accounted for less than 1 percent. In contrast, U.S. firms did not obtain any World Bank contracts for civil works in ASEAN countries, and contracts for consulting services accounted for about 78 percent of the value of World Bank contracts obtained by U.S. firms. Contracts for goods accounted for about 22 percent of the value of the contracts that U.S. firms obtained in ASEAN countries and about 26 percent of the value of contracts obtained by Chinese firms.

ADB has also predominantly awarded contracts to U.S. and Chinese firms in different sectors in ASEAN countries (see fig. 11). Similar to World Bank contracts, most ADB contracts in 2013 and 2014 went to domestic firms in the project country. However, U.S. firms received the largest amount of contract value ($329 million) awarded to foreign firms, and Chinese firms received the second largest ($308 million). Nearly all ADB contract value awarded to U.S. firms was for management of emergency assistance to typhoon-affected areas of the Philippines. In contrast, Chinese firms received 84 percent of their contract value for construction. Of Chinese construction contracts, the largest share, $242 million, was for road transportation projects in Vietnam, Cambodia, and Laos. Chinese firms received 16 percent of their contract value ($50 million) for goods to be used in the electricity and renewable energy sectors, such as transformers, wires, and hydraulic equipment. U.S. firms received one contract for a renewable energy construction project, a $9 million contract for a geothermal plant in Indonesia.

U.S. firms that received support from Commerce's Advocacy Center in fiscal years 2009 through 2014 competed less often with Chinese firms than with firms from other countries. The Commerce data cover those public sector contracts competed for by U.S. firms in ASEAN countries for which the Advocacy Center received an application by a U.S. firm for commercial advocacy. Chinese firms competed for 30 of these 172 contracts (see table 2). The value of the contracts for which Chinese firms competed was $6.8 billion—6 percent of the $112 billion in total contract value for which U.S. firms were competing and less than the total value competed for by nine other countries' firms. U.S. firms that applied for Advocacy Center support competed against firms from only China, or from China and other developing countries, in only five cases. U.S. firms were most likely to compete with Chinese firms in the telecommunications sector, where U.S. and Chinese firms competed for 5 of 8 contracts, and the energy and power sector, where U.S. and Chinese firms competed for 7 of 19 contracts.

To further economic engagement with ASEAN countries, the United States and China have each entered into trade agreements and are parties to ongoing negotiations. The two countries also support their domestic firms by providing export financing and other services. The United States supports regional economic development and integration as part of its trade capacity building (TCB) assistance to strengthen institutions and governance. China supports regional economic development and integration through capacity building and has provided billions of dollars for infrastructure development. China has also promised additional billions of dollars for future infrastructure construction in the region, including through the creation of the new multinational Asian Infrastructure Investment Bank, headquartered in Beijing.

The United States has an FTA with Singapore, while China has free trade and investment agreements with all 10 ASEAN countries as well as a separate FTA with Singapore. The United States is party to the ongoing TPP negotiations, which include 4 ASEAN countries.
China is party to the Regional Comprehensive Economic Partnership negotiations, which include all ASEAN countries.

U.S.-Singapore FTA. The January 2004 U.S.-Singapore FTA eliminated tariffs for U.S. exports to Singapore and phased out tariffs for Singapore's exports to the United States over a 10-year period. As a result of the U.S.-Singapore FTA, goods from the United States and Singapore no longer face any tariffs in each other's markets. For example, Singapore faces no tariff on its exports to the United States of a type of medicine, a top Singapore export in 2014, for which other U.S. trading partners with normal trade relations face a tariff of 6.5 percent. In addition to eliminating tariffs, the U.S.-Singapore FTA provided greater access for U.S. service providers and addressed trade issues such as strengthening Singapore's intellectual property rights protection, government procurement, protection of the environment, and protection of labor rights. According to USTR, the U.S. goods trade surplus with Singapore was $14.1 billion in 2014, and the U.S. services trade surplus with Singapore was $5.8 billion in 2013, the latest data available.

China-ASEAN Framework Agreement on Comprehensive Economic Cooperation. China's framework agreement with the ASEAN countries comprises a series of trade and investment agreements focused on expanding access to each other's markets. From 2004 through 2009, China and the ASEAN countries signed three agreements:

The China-ASEAN Trade in Goods Agreement entered into force in July 2005. The agreement separates goods into different groups, each with different timelines for tariff reduction. For example, under the agreement the parties committed to reduce tariffs to zero for most goods traded between the ASEAN 6 and China by 2012; CLMV countries agreed to reduce most tariffs to zero by 2018. The parties also agreed to reduce tariffs for goods categorized by a country as sensitive or highly sensitive for its economy to no more than 5 percent by 2018 for ASEAN 6 countries and by 2020 for CLMV countries. CLMV countries may also designate more goods as sensitive or highly sensitive than China and the ASEAN 6 countries. According to the WTO, the average Chinese tariff on imports from ASEAN countries in 2013 was 0.7 percent (0.8 percent for Laos and Cambodia), compared with an average of 9.4 percent for all of China's trading partners. For example, according to WTO data, ASEAN countries face no tariff on a type of rubber, a key export from ASEAN countries to China in 2014, for which other Chinese trading partners with normal trade relations face a tariff of 8 percent. Similarly, according to WTO data, China faces a 10 percent tariff on women's cotton jackets and blazers, a top Chinese export to Vietnam in 2014, for which other Vietnamese trading partners with normal trade relations face a 20 percent tariff.

The China-ASEAN Trade in Services Agreement entered into force in July 2007. The agreement provides market access for participant countries' companies and requires that firms located in participant countries be given treatment equal to domestic service providers in agreed-upon sectors. All countries signing the agreement agree to the specific service sectors to which the agreement applies in each country. The agreement permits CLMV countries to open fewer sectors and liberalize fewer types of transactions.

The China-ASEAN Investment Agreement entered into force in February 2010.
Under the agreement, China and ASEAN countries commit to treat each other's investors as equal to domestic investors and to investors from other countries with which China and ASEAN countries have signed investment agreements. The agreement also included a provision on how disputes between an investor and the country receiving the investment are to be settled. In August 2014, China and ASEAN announced discussions to upgrade these agreements. The second round of discussions, held in February 2015, focused on investment, economic cooperation, and other areas.

China-Singapore FTA. The China-Singapore FTA, which entered into force in 2009, included tariff reductions for goods beyond those covered under the China-ASEAN Trade in Goods Agreement. All of China's exports to Singapore, and almost all of Singapore's exports to China, enter the respective countries tariff free. As of 2014, Singapore generally did not apply tariffs on imports, including those from the United States and China. According to Singapore's Ministry of Trade and Industry, the FTA also included provisions to expand access for Singapore's and China's service providers beyond each country's WTO commitments for certain sectors, such as business and hospital services.

Unlike the U.S.-Singapore FTA, China's FTAs with ASEAN and Singapore do not address issues such as protection of intellectual property rights and labor rights. For example, the China-ASEAN FTA does not address protection of the environment and labor rights and only reaffirms each country's commitments to WTO provisions on the protection of intellectual property rights. According to USTR, China's existing FTA covers only three areas—goods, services, and investment—while the U.S.-Singapore FTA has 21 chapters covering a wide range of areas, including intellectual property rights, government procurement, environment, and labor rights. In addition, according to USTR, China's FTA is significantly less ambitious in the areas of services and investment than the U.S.-Singapore FTA. USTR expects that, although negotiations are ongoing, TPP will be a more ambitious and comprehensive agreement than the proposed Regional Comprehensive Economic Partnership (RCEP).

The United States and China are actively engaged in ongoing negotiations for TPP and RCEP, respectively. Several countries in the Asia-Pacific region, including ASEAN countries, are parties to negotiations for both agreements (see fig. 12).

Trans-Pacific Partnership (TPP) Agreement Negotiations

As of August 2015, the United States Trade Representative is engaged in TPP negotiations with 11 other Asia-Pacific region countries, including 4 ASEAN countries—Brunei, Malaysia, Singapore, and Vietnam. According to our analysis of World Bank and UN data, in 2013, the 12 Asia-Pacific countries negotiating TPP had a combined population of approximately 800 million people; had a combined GDP of almost $28 trillion, about 37 percent of global GDP; and covered about 26 percent of world goods trade. The four ASEAN countries that are engaged in TPP negotiations accounted for 58 percent of U.S. trade with ASEAN countries. Launched in 2002, with the United States joining in 2009, TPP has had several rounds of negotiation, the most recent in July 2015.
Although TPP’s text is not finalized, in 2011 negotiators agreed that it would address, for example, ensuring a competitive business environment; providing TCB in developing countries; improving customs procedures; addressing impediments to e-commerce; creating clear rules for addressing disputes; and protecting the environment, labor rights, and intellectual property rights, among other issues. USTR is seeking to finalize TPP in 2015. Regional Comprehensive Economic Partnership Agreement Negotiations China, the 10 ASEAN countries, and five other countries are currently negotiating RCEP to expand trade and investment access. In 2011, ASEAN proposed establishing RCEP to broaden and deepen existing FTAs between the ASEAN countries and six others—Australia, China, India, Japan, New Zealand, and South Korea. According to our analysis of World Bank and UN data, RCEP negotiating partners have a combined population of more than 3.4 billion people, have a combined GDP of more than $21 trillion—more than 28 percent of global GDP—and account for about 29 percent of world goods trade.spoke with, RCEP will not greatly expand the six existing ASEAN agreements but will synthesize their provisions in a single comprehensive agreement. RCEP negotiation working groups include those for trade in goods, trade in services, investment, intellectual property, competition, and economic and technical cooperation. The eighth round of RCEP negotiations was held in Kyoto, Japan, in June 2015. Details of RCEP, like those of TPP, are not finalized, but the negotiating parties have stated that they hope to complete the agreement in 2015. According to U.S. officials we Existing FTA Relationships between FTA Negotiating Partners China and the United States each have existing FTAs with a number of their negotiating partners in the proposed regional FTAs. The United States has existing FTAs with 6 of its 11 TPP negotiating partners (see fig. 13). Of the 66 possible FTA pairings among the 12 TPP participants, 42 FTAs are currently in place. China has FTAs with ASEAN and New Zealand and, in June 2015, signed FTAs with Australia and South Korea. The Australia and South Korea FTAs have not entered into force (see fig. 14). Counting ASEAN as a single negotiating partner, there are 21 possible FTA pairings among RCEP participants, 12 of which are currently in place. Both TPP and RCEP include major trading partners with which China and the United States do not currently have FTAs. According to our analysis of UN and BEA data, TPP negotiating partners with which the United States does not have an existing FTA represented approximately 7 percent of both U.S. goods trade and U.S. services trade in 2013 (see table 3). In 2013, bilateral goods trade between the United States and Japan, the largest U.S. trading partner engaged in TPP negotiations without a U.S. FTA, represented 5 percent of total U.S. goods trade and 7 percent of U.S. services trade. The six TPP negotiating partners with which the United States has an existing FTA constituted 33 percent of U.S. goods trade in 2013 and 16 percent of U.S. services trade. According to our analysis of data from the UN, the WTO, and the International Trade Centre, Chinese trade with India and Japan—the two countries in RCEP with which China has not negotiated an FTA— represented 9 percent of total Chinese goods trade in 2013 and more than 8 percent of Chinese services trade in 2011 (see table 4). 
Both TPP and RCEP include major trading partners with which China and the United States do not currently have FTAs. According to our analysis of UN and BEA data, TPP negotiating partners with which the United States does not have an existing FTA represented approximately 7 percent of both U.S. goods trade and U.S. services trade in 2013 (see table 3). In 2013, bilateral goods trade between the United States and Japan, the largest U.S. trading partner engaged in TPP negotiations without a U.S. FTA, represented 5 percent of total U.S. goods trade and 7 percent of U.S. services trade. The six TPP negotiating partners with which the United States has an existing FTA constituted 33 percent of U.S. goods trade in 2013 and 16 percent of U.S. services trade. According to our analysis of data from the UN, the WTO, and the International Trade Centre, Chinese trade with India and Japan—the two countries in RCEP with which China has not negotiated an FTA—represented 9 percent of total Chinese goods trade in 2013 and more than 8 percent of Chinese services trade in 2011 (see table 4). Chinese trade with ASEAN, Australia, New Zealand, and South Korea represented 21 percent of total Chinese goods trade in 2013 and more than 18 percent of Chinese services trade in 2011.

In 2014, leaders of economies that belong to the Asia-Pacific Economic Cooperation (APEC) forum, which includes seven ASEAN economies, the United States, and China, among others, agreed to undertake a study of issues related to the realization of a Free Trade Area of the Asia-Pacific (FTAAP). The study is to be completed by the end of 2016. According to a statement issued at APEC's 2014 meeting, FTAAP is not viewed as an alternative to TPP and RCEP but will build on current and developing regional architectures. APEC identified TPP and RCEP as possible steps toward eventual realization of FTAAP.

The United States and China provide support and financing to their firms that trade and invest in ASEAN countries. U.S. agencies provide financing and maintain overseas personnel to promote U.S. policies and support U.S. firms. While country-specific data on Chinese financing are unavailable, the Chinese government provides significantly greater financing than the United States worldwide and has taken steps to support Chinese investment in ASEAN countries.

According to our analysis of U.S. agency data, from fiscal years 2009 through 2014, the United States provided more than $6 billion in financing to support U.S. exports to, and investment in, ASEAN countries (see table 5). During that period:

The U.S. Export-Import Bank (Ex-Im) authorized about $5.4 billion in loans, loan guarantees, and insurance to support U.S. exports to ASEAN countries. Worldwide, Ex-Im authorizations were $27.3 billion in 2013 and $20.5 billion in 2014.

The Overseas Private Investment Corporation (OPIC), the United States' development finance institution, committed about $664 million in financing to support U.S. investment projects in ASEAN countries. OPIC supports U.S. investment projects in overseas countries by providing U.S. private sector investors with direct loans, loan guarantees, political risk insurance, and support for private equity investment funds.

According to our analysis of Ex-Im data, in fiscal years 2009 through 2014, Ex-Im's authorizations in ASEAN countries largely consisted of loan guarantees and were concentrated in Indonesia and Singapore. For example, about half of the $2.1 billion that Ex-Im authorized to support U.S. exporters in ASEAN countries in fiscal year 2013 was for a loan guarantee to a U.S. firm exporting commercial aircraft to Indonesia. According to our analysis of OPIC data, OPIC's two largest individual commitments in ASEAN countries from 2009 through 2014, each for $250 million, were for investment guarantees in fiscal year 2013 for a research center, medical school, and teaching hospital in Malaysia and in fiscal year 2011 for construction and development of solar power projects in Thailand.

U.S. Ex-Im estimates and data from China's export credit agencies indicate that China provides significantly more financial support to its exporters worldwide than does the United States. In 2014, Ex-Im estimated in an annual report to Congress that China provided $111 billion worldwide in official export support in calendar year 2013, far more than Ex-Im's $15 billion in calendar year 2013.
Ex-Im's report noted that Chinese export credit agencies—along with those of Japan and South Korea—have multiple advantages, including greater funding capacity, the ability to lend in dollars at competitive rates, and lending programs that are not bound by OECD agreements. China is not a participant in the OECD Arrangement on Officially Supported Export Credits. The Ex-Im report also expressed concern that Chinese concessional loans provided to other governments as development assistance—including some loans with terms likely outside the range allowed by OECD agreements—may affect the competitiveness of U.S. exports.

Three Chinese state-owned institutions offer various types of financing to support Chinese firms engaged in international business, including business in ASEAN countries. These three institutions do not publish data by country on their financing for exports, imports, and investment by private and state-owned enterprises.

Export-Import Bank of China (China Ex-Im). China Ex-Im provides support for the import and export of goods and services, including Chinese companies' overseas construction and investment projects. China Ex-Im is also the conduit for China's official concessionary lending to developing countries. No data are publicly available on China Ex-Im financing for specific countries in Southeast Asia. According to its 2014 annual report, China Ex-Im provided a total of $70 billion in export and import credits worldwide that year.

China Development Bank. The China Development Bank supports state-backed projects, such as airports, railways, and bridges. Although the bank does not publish country-specific data on its overseas lending, it reported that of its net loan balance of $1.24 trillion for 2014, 12.7 percent ($157 billion) was provided to recipients outside mainland China. (These numbers are based on the December 31, 2014, exchange rate of 6.205 Chinese yuan per U.S. dollar.) The bank did not specify whether those recipients included foreign governments, Chinese companies operating overseas, or both.

China Export & Credit Insurance Corporation (Sinosure). According to a report published by the OECD, Sinosure insured almost 15 percent of China's exports in 2013.

Multiple U.S. entities, such as the U.S. Departments of State (State), Commerce, and Agriculture (USDA), provide export promotion services and other support to help U.S. firms enter ASEAN markets or expand their presence in ASEAN countries. For example:

State. State maintains economic officers in each of the 10 ASEAN countries. State supports U.S. export promotion efforts by engaging with foreign governments on policies that affect U.S. economic and commercial interests and by supporting other U.S. agencies' export promotion efforts, among other things.

Commerce. Commerce maintains a presence in seven ASEAN countries and a regional office in Singapore. Commerce provides export promotion services to U.S. firms, including advocacy and commercial diplomacy, market intelligence, matchmaking with local firms, trade counseling, and trade promotion programs. Commerce also leads or supports trade missions. From 2009 through 2014, Commerce led 11 trade missions to ASEAN countries covering a range of industries, such as aerospace, education, energy, and textiles.

USDA. USDA maintains a presence in seven ASEAN countries. USDA provides export promotion services for U.S. agricultural exporters, such as market intelligence and international trade missions.
USDA also offers multiple market development programs in partnership with U.S. food and agriculture industry groups. For information about State, Commerce, and USDA staffing in ASEAN countries, see appendix IV.

The Chinese government also pursues agreements with other countries to facilitate trade and investment by Chinese firms in other countries, including ASEAN countries. For example:

Special economic zones. China's Ministry of Commerce has worked with some ASEAN countries to set up special economic cooperation zones to facilitate cross-border investment and trade. According to China's Ministry of Commerce, the Chinese government supports Chinese firms that establish and invest in the zones by offering financing and facilitating movement of materials, equipment, labor, and foreign exchange between China and the zones. China also negotiates with the host government in the areas of tax, land, and labor policies to support firms that choose to invest in the zones. According to Chinese embassy websites, as of 2012, Chinese firms had set up five zones in four countries—Cambodia, Thailand, Vietnam, and Indonesia; 91 enterprises had established businesses in the zones; and more than $930 million had been invested.

Currency swaps. China also facilitates cross-border trade in local currencies with ASEAN countries. Chinese agencies have publicly reported that China has currency swap agreements with the central banks of Indonesia, Malaysia, Thailand, and Singapore totaling 650 billion Chinese yuan. Currency swap agreements help to facilitate trade and investment between the countries by eliminating the cost of converting to a third currency, ensuring that sufficient amounts of foreign currency are available for transactions, and reducing the risk of exchange rate fluctuation. These agreements encourage trade between China and the countries involved in the agreements to be settled in those countries' currencies rather than in dollars. Chinese Premier Li Keqiang, the second-highest-ranked Chinese Communist Party official, has stated that China also plans a pilot program to allow currency swaps for cross-border transactions with other countries of the Greater Mekong Subregion (Vietnam, Laos, Cambodia, and Burma).

The United States and China both provide assistance to ASEAN countries to support regional economic development and integration. U.S. initiatives have included enhancing governance and regional connectivity through, for example, efforts to improve customs procedures across ASEAN countries. Chinese initiatives have focused on infrastructure development. China has promised billions of dollars for infrastructure investment through new funds and multilateral institutions, such as the Asian Infrastructure Investment Bank.

In fiscal years 2009 through 2013, the United States identified $536 million of its assistance to ASEAN countries and the ASEAN Secretariat as TCB assistance—that is, development assistance intended to improve a country's ability to benefit from international trade. U.S. TCB assistance has supported initiatives aimed at, among other things, helping ASEAN countries draft laws and regulations, improve public financial management, train government officials, meet WTO commitments, and increase accountability and transparency. U.S. TCB assistance has supported multiple initiatives to advance ASEAN's goal of increased connectivity and integration throughout the region. For example:

ASEAN Connectivity through Trade and Investment.
This 5-year, $16.2 million USAID program, begun in 2013, seeks to facilitate trade by improving standards and systems, boosting the capacity of small and medium-sized enterprises, accelerating the deployment of clean energy technologies, and expanding connectivity. One of the program's objectives is to provide support for the ASEAN Single Window, which will integrate ASEAN's 10 national single customs windows to enable electronic exchange of data to expedite cargo clearance and lower the cost of doing business. According to USAID officials, four ASEAN countries were ready to use the system as of January 2015, and it is planned to be operational by the end of the year.

ASEAN-U.S. Partnership for Good Governance, Equitable and Sustainable Development and Security. This 5-year, $14 million program supported by USAID and State, also begun in 2013, seeks to support ASEAN integration by harmonizing approaches to the rule of law across countries; supporting people-to-people links through, for example, fellowships; collaborating on disaster response; and enhancing the ASEAN Secretariat's management capabilities, including information technology and public outreach capacities.

U.S.-ASEAN Connectivity Cooperation Initiative. Launched in 2011, this U.S. Trade and Development Agency (USTDA) initiative seeks to support ASEAN integration by leveraging private sector resources and expertise to support activities that increase connectivity and investment in the energy, transportation, and information and communications technology sectors. For example, USTDA has led reverse trade missions and workshops to increase U.S. trade and investment in electric smart grids, rail development, and other infrastructure areas in ASEAN countries. In September 2013, USTDA sponsored the ASEAN Connectivity through Rail Workshop in Indonesia, which highlighted U.S. firms' capabilities in operation and maintenance of rail systems. In addition, USTDA is sponsoring the Global Procurement Initiative with the goal of fostering procurement systems that will make awards based on the best value offered, rather than on the lowest cost.

Millennium Challenge Corporation (MCC) compact with Indonesia. MCC is a U.S. government corporation that seeks to reduce global poverty through economic growth. The Indonesia compact's Green Prosperity Project is designed to increase productivity in rural areas and reduce reliance on fossil fuels by expanding renewable energy, and to increase productivity and reduce greenhouse gas emissions by improving land use practices and management of natural resources. MCC had expended $1.2 million of its $333 million commitment to Indonesia. For more information about U.S. TCB in ASEAN countries, see appendix V.

Like the United States, China has supported capacity-building efforts in ASEAN countries, but it also provides billions of dollars for infrastructure construction. According to the Chinese government's July 2014 white paper on foreign aid, China's capacity-building efforts in ASEAN countries since 2010 have included setting up experimental crop stations, building three agricultural technology demonstration centers, dispatching 300 agricultural experts to provide technical guidance, and helping to establish systems for animal and plant disease prevention and control. The paper states that China has also provided training to more than 5,000 officials and technicians from ASEAN countries in fields such as business promotion, culture and arts, Chinese language, finance, energy, and agriculture.
China has also contributed to regional development and integration through infrastructure construction, generally in the form of loans for specific projects, many of which are carried out by Chinese firms. The 2014 white paper states that China appropriated $14.4 billion for global foreign assistance from 2010 to 2012, 64 percent of which was interest-free or concessional loans. The white paper also indicates that China emphasized assistance in infrastructure construction, with 45 percent of China's total aid going to economic infrastructure and 28 percent to social and public infrastructure. The white paper did not break out information on foreign aid in ASEAN countries. According to a report prepared for the U.S.-China Economic and Security Review Commission in May 2015, as a result of overcapacity in the domestic Chinese construction market, projects overseas have become more attractive to Chinese state-owned enterprises. See app. VI for information on U.S. and Chinese official development assistance to ASEAN countries.

China has also provided loans to neighboring countries to finance transportation links that will facilitate trade and other exchanges. Some of these projects are part of the Greater Mekong Subregion (GMS) Economic Cooperation Program, supported by ADB. Burma, Cambodia, Laos, Thailand, Vietnam, and China's Yunnan Province and Guangxi Zhuang Autonomous Region are members of the subregion. According to Chinese government publications and other sources, China funded construction of part of a highway in Laos on a route between Kunming and Bangkok, has upgraded its own highways that connect to other GMS countries, has built other roads financed by ADB, and has financed and built bridges in the subregion. The Chinese and Burmese governments recently completed construction of crude oil and natural gas pipelines from an Indian Ocean port in Burma to China. China also recently signed a memorandum of understanding with Thailand to build a railway in Thailand from the Thai-Laos border to Bangkok and the southeastern province of Rayong and is negotiating with Laos to build a railway connecting China with Laos' capital of Vientiane and the Thai border.

China has promised billions of dollars for new funds and multilateral institutions for the purpose of investing in infrastructure, including in ASEAN countries. For example:

Silk Road Fund. China announced the creation of the $40 billion Silk Road Fund to finance infrastructure construction and other development in support of two initiatives announced by Chinese President Xi Jinping in 2013: the Silk Road Economic Belt and the 21st Century Maritime Silk Road. According to a document released by the Chinese government in March 2015, these initiatives aim to improve land and maritime cooperation and connectivity along routes between China and the rest of Asia, the South China Sea, the Indian Ocean, Africa, and Europe. In February 2015, the Chinese central bank announced that an initial $10 billion had been contributed to the fund by state-owned financial institutions and the Chinese foreign exchange reserves. In April 2015, China announced the fund's first investment, in support of a hydropower project in Pakistan.

Asian Infrastructure Investment Bank (AIIB). In 2013, Chinese President Xi Jinping proposed the creation of an international institution, AIIB, to finance infrastructure projects throughout the Asia-Pacific region. Under the bank's initial agreement, the bank's authorized capital is $100 billion.
According to AIIB documents, 57 countries are prospective founding members of the bank, including each of the 10 ASEAN countries, and the bank anticipates beginning operations before the end of 2015. The bank will be headquartered in Beijing. Chinese officials have said that all countries are welcome to join the bank; the United States and Japan have so far declined to do so. U.S. Treasury officials have stated that the United States welcomes the creation of new development institutions but have also expressed concerns about the governance and standards of the new bank.

Other funds. In addition, in November 2014, Chinese Premier Li Keqiang pledged $20 billion in loans to boost infrastructure connectivity in Southeast Asia, including $10 billion in loans to ASEAN countries. He also announced that China would raise another $3 billion for the China-ASEAN Investment Cooperation Fund, a dollar-denominated equity fund that targets investment opportunities in infrastructure, energy, and natural resources in ASEAN countries. As of June 2015, the fund reported that its current size was $1 billion and that it had set a target to ultimately raise $10 billion.

We are not making recommendations in this report. We sent a draft of this report for review and comment to the Departments of Agriculture, Commerce, State, and the Treasury and to MCC, OPIC, USAID, Ex-Im, USTDA, and USTR. We received technical comments from Agriculture, Commerce, State, the Treasury, and USTR, which we incorporated as appropriate.

We are sending copies of this report to the Secretaries of Agriculture, Commerce, State, and the Treasury; the Chairman of Ex-Im; the Administrator of USAID; the United States Trade Representative; the Director of USTDA; the Chief Executive Officers of OPIC and MCC; and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3149 or gootnickd@gao.gov. Contact points for our Offices of Public Affairs and Congressional Relations may be found on the last page of this report. GAO staff members who made key contributions to this report are listed in appendix VII.

We were asked to review the nature of the United States' and China's economic engagement in Southeast Asia. Our objectives were to examine (1) what available data indicate about U.S. and Chinese trade and investment with the Association of Southeast Asian Nations (ASEAN) countries and (2) what actions the U.S. and Chinese governments have taken to further economic engagement with these countries.

As part of this review, we conducted fieldwork in Jakarta, Indonesia—where the ASEAN Secretariat is located—and Hanoi and Ho Chi Minh City, Vietnam. We based our selection of these two countries, among the 10 ASEAN countries, on the amounts of U.S. and Chinese exports and imports, foreign direct investment (FDI), and development assistance in each country. We also considered whether a country participated in U.S. and Chinese trade agreements or was a negotiating partner in the Trans-Pacific Partnership, whether a country was the location of any regional institutions, whether it was an emerging partner, and whether it was a South China Sea claimant. The information on foreign law in this report is not a product of GAO's original analysis but is derived from interviews and secondary sources.
To describe U.S. and Chinese engagement with ASEAN countries, we analyzed data on U.S. and Chinese trade in goods, trade in services, and FDI. We also analyzed trade and contract data to determine the extent to which U.S. and Chinese firms compete. To assess the reliability of these data, where possible, we cross-checked the data with other sources, conducted checks on the data for internal consistency, and consulted with U.S. officials. Because of the limited availability of data and the context for different sets of data we report, the time period for each set of reported data varied. We determined that the data were sufficiently reliable for the purposes of our report and have noted caveats where appropriate to indicate limitations in the data.

To obtain data on U.S. and Chinese trade in goods from 1994 through 2014, we accessed the United Nations' (UN) Commodity Trade Statistics Database by means of the Department of Commerce's (Commerce) Trade Policy Information System. This database provides data for comparable categories of exports and imports of goods for the United States and China. In reporting the value of exports, we used data on total exports, which include re-exports—goods that are imported but then exported in substantially the same condition. China does not report export data to the UN Commodity Trade database that separate re-exports from total exports. Therefore, we used data on total exports to ensure the comparability of U.S. and Chinese data on goods exports. For imports, we used data on general imports, which include goods that clear customs and goods that enter into bonded warehouses or foreign trade zones. We determined that data on trade in goods for the United States and China were generally reliable for comparing trends over time and the composition of trade.

To categorize the U.S. and Chinese trade in goods into capital, intermediate, and consumer goods, we assigned each good from the UN Commodity Trade database to one of these three categories using the UN's Broad Economic Categories. For goods that the UN does not classify as capital, intermediate, or consumer goods, we created an unclassified category. For example, the UN does not classify passenger motor cars as capital or consumer goods.

We analyzed data from the Organisation for Economic Co-operation and Development (OECD) and the World Trade Organization (WTO) on trade in value-added goods and services to illustrate the importance of accounting for components of a country's exports that originate in other countries.

We analyzed data from the ASEANstats database for 2003, 2008, and 2013 to examine ASEAN countries' trade in goods with their trading partners over time. Because some of the ASEAN countries' trading partners do not report data to the UN Commodity Trade database, we used data from the ASEANstats database as a comprehensive set of data on trade in goods for all of ASEAN countries' trading partners. We compared trade data from the ASEANstats database and the UN Commodity Trade database and found some differences in values of bilateral trade between ASEAN countries and their trading partners. Reasons for the differences include differences in the valuation of goods and differences in data quality.

We calculated U.S. trade in services for 2011 and 2012 based on tabulations prepared for us by Commerce's Bureau of Economic Analysis (BEA) and other sources, including the U.S. Census Bureau. BEA's data on trade in services for several categories—travel and passenger fares, transportation, education, and other private services—are based on data from various sources.
According to BEA, its survey data are from mandatory surveys of primarily U.S. businesses with services trade that exceeds certain thresholds. BEA does not survey a random sample of U.S. businesses and therefore does not report the data with margins of error. Our estimates of U.S. trade in services represent broad estimates rather than precise values. We extrapolated values for certain services at the country level from broader data (e.g., travel service data are based on multiplying the number of travelers for a country by data on average expenditures for travelers and average passenger fees for the region). We calculated values for other services (e.g., business, professional, and technical services) from a range of estimates based on survey data. In instances where the volume of trade for a service was presented to us as a range, we used the midpoint value to estimate the volume of trade for that service. In instances where the volume of trade for a service was presented as a range and described by BEA as trending upward, we used the lowest value for the earlier years and the highest value for the later years and assumed that the growth was linear.
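The two range-handling rules just described can be expressed compactly. The sketch below uses invented numbers (a $4 billion to $6 billion range over four years) purely to demonstrate the midpoint rule and the linear trending-upward rule:

```python
def midpoint_estimate(low, high):
    """Rule for a value reported only as a range: use the midpoint."""
    return (low + high) / 2

def trending_estimate(low, high, years):
    """Rule for a range described as trending upward: assign the low value
    to the first year and the high value to the last, interpolating linearly."""
    step = (high - low) / (len(years) - 1)
    return {year: low + i * step for i, year in enumerate(years)}

print(midpoint_estimate(4.0, 6.0))                    # 5.0
print(trending_estimate(4.0, 6.0, [2009, 2010, 2011, 2012]))
# {2009: 4.0, 2010: 4.666..., 2011: 5.333..., 2012: 6.0}
```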
For China's trade in services in 2011, we used estimates from the UN Conference on Trade and Development, the WTO, and the International Trade Centre, downloaded from the International Trade Centre's Trade Map database. The estimates for 2009 from the Trade Map database are the same as data on China's trade in services from a report by China's Ministry of Commerce.

For data on U.S. firms' investments from 2007 through 2012, we used data that we obtained directly from BEA. For Chinese firms' investments, we used data from the UN Conference on Trade and Development as reported by China's Ministry of Commerce. To identify patterns in, and to compare, U.S. and Chinese FDI, we report U.S. and Chinese data on FDI while noting their limitations. First, as we have previously reported, both U.S. and Chinese FDI may be underreported, and experts have expressed particular concern regarding China's data. U.S. and Chinese firms set up subsidiaries in places such as the Netherlands and the British Virgin Islands, which can be used to make investments that are not captured by U.S. and Chinese data on FDI. Experts state that this could be a significant source of underreporting for China's data. For U.S. data, according to BEA, U.S. data on FDI are based on quarterly, annual, and benchmark surveys. BEA's benchmark survey is the most comprehensive survey of such investment and covers the universe of U.S. FDI. According to BEA, quarterly and annual surveys cover samples of businesses with FDI that exceeds certain thresholds. BEA does not survey a random sample of businesses and therefore does not report the data with margins of error; accordingly, we have not reported margins of error. Second, China does not report its definition of FDI when reporting its data. However, the types of data included by China in its FDI (e.g., equity investment data and reinvested earnings data) appear similar to data reported for U.S. FDI, which the United States defines on the basis of the OECD definition of FDI. Despite these limitations, various reports, including those published by international organizations such as the IMF, government agencies, academic experts, and other research institutions, use China's reported investment data to describe China's FDI activities. In addition, despite some potential underreporting of FDI data, we determined that the U.S. FDI data were reliable for reporting general patterns, when limitations are noted.

Given challenges in determining appropriate deflators for some data, we used nominal rather than inflation-adjusted values for U.S. and Chinese trade and for investments in ASEAN countries. However, we tested the impact of deflating these data and found that it made a limited difference in describing the overall trends. For example, if the goods trade values that we report were adjusted using the U.S. gross domestic product (GDP) deflator, total Chinese trade in goods would surpass total U.S. trade in goods in 2007—similar to the trend we found in nominal trade values. U.S. total trade in goods increased by a factor of 2.4 from 1994 through 2013 if not adjusted for inflation and by a factor of 1.7 if adjusted for inflation. Over the same period, Chinese total trade in goods increased by a factor of 30.9 if not adjusted for inflation and by a factor of 21.4 if adjusted for inflation.

To determine the extent to which U.S. and Chinese firms compete in ASEAN countries, we interviewed U.S. agency and private sector representatives and analyzed available data. To assess the extent to which exporters from the United States, China, and other countries compete, we calculated an export similarity index to compare U.S., Chinese, and other countries' exports to ASEAN countries from 2001 through 2014. The export similarity index is a measure of the similarity of exports from two countries to a third country. For example, to calculate the index for U.S. and Chinese exports to ASEAN countries, for each type of good that the United States and China export, we first calculate the share of that good in the United States' and China's total exports to ASEAN countries. We then identify the minimum of the United States' and China's shares. The index is the sum of the minimum shares for all types of goods that the United States and China export to ASEAN countries. We used data on goods exports from the UN Commodity Trade database at the four-digit level and calculated each country's export of a particular good as a share of that country's total exports to ASEAN countries.

We also analyzed data from Commerce's Advocacy Center on host-government contracts and data for contracts funded by the Asian Development Bank (ADB) and the World Bank. Although these data represent a small share of activity in the region, they provide insights into the degree of competition between U.S. and Chinese firms for the projects represented. Commerce's Advocacy Center data were for a limited number of cases (184) where U.S. firms requested the agency's assistance in bidding for host-government contracts in ASEAN countries in 2009 through 2014. Because these data included the nationality of other firms bidding on a host-government contract, we used this information to determine the extent to which Chinese firms or firms of other nations were competing with U.S. firms for these contracts. We counted the numbers of contracts and summed the value of contracts in the Advocacy Center data for which each foreign country's firms competed against U.S. firms. We excluded 12 contracts for which the nationality of competitors was not identified, and in cases where the U.S. firm(s) competed against a consortium of firms from different countries, we counted the whole value of the contract in each country's total.
We also used the Advocacy Center’s classification of contracts by sector to determine the sectors in which Chinese firms competed for the highest proportion of contracts. To determine the reliability of these data, we manually checked the data for missing values and reviewed information about how the data were collected. In addition, we interviewed Advocacy Center staff about the data. Advocacy Center staff told us that data from before 2010 may be less complete, because the center switched databases at that time, and some contracts that had been closed may not have been transferred. Overall, we found the data to be reliable for reporting on competition between U.S. and other firms, including Chinese firms, in ASEAN countries. The World Bank publishes data on the value, sector, and suppliers of its contracts in ASEAN countries. We used the World Bank’s classification of contracts into procurement categories (goods, civil works, consultant services, and nonconsultant services) to compare the value and types of contracts that U.S. and Chinese firms were awarded from 2001 through 2014. However, we combined the consultant services and nonconsultant services categories into one category that we titled “consultant and other services”. The data include contracts (generally large-value) that World Bank staff reviewed before the contracts were awarded. We analyzed all contracts in individual ASEAN countries as well as Mekong and ASEAN regional contracts. To determine the reliability of these data, we electronically checked the data for missing values and possible errors. We also contacted World Bank personnel to determine how the data were collected and any limitations of the data. We found that the data for contracts funded by the World Bank were generally reliable for the purpose of demonstrating U.S. and Chinese competition in ASEAN countries over time. To compare the value and types of contracts obtained by U.S. and Chinese firms, we used ADB’s published data on the value, sector, and recipient of its contracts for consulting services, goods, and civil works provided as technical assistance or funded by loans and grants to ASEAN countries in 2013 and 2014. We also included regional contracts for Southeast Asia or the Greater Mekong Subregion in our analysis. ADB publishes data only for consulting contracts over $0.1 million in value and other contracts over $1.0 million, so our analysis of ADB contracts does not include some smaller ADB contracts. In addition, a portion of the ADB data did not have the contracts classified according to the nature of the contract (construction, consulting services, goods, turnkey, and others). Therefore, we classified contracts obtained by U.S. and Chinese firms that were missing these categories according to those used in the rest of the data. To determine the reliability of these data, we checked the data for missing values and other types of discrepancies. We found that the ADB data were generally reliable for our purpose of reporting on U.S. and Chinese competition in ASEAN countries in 2013 and 2014. To examine the actions that the U.S. and Chinese governments have taken to further economic engagement, we reviewed regional and country studies and U.S., Chinese, and ASEAN agency documents and interviewed U.S. and third-country officials, officials from private sector business associations, and experts from think tanks. 
We tried to arrange visits with Chinese government officials in the ASEAN countries we visited and in Washington, D.C.; however, they were unable to accommodate our requests for a meeting. U.S. agencies included in the scope of our study are the U.S. Departments of Agriculture (USDA), Commerce, State (State), and the Treasury; the Office of the U.S. Trade Representative (USTR); the Millennium Challenge Corporation; the U.S. Agency for International Development (USAID); the Export-Import Bank of the United States (Ex-Im); the Overseas Private Investment Corporation (OPIC); and the U.S. Trade and Development Agency. To obtain information about U.S. and Chinese trade agreements with ASEAN countries, we reviewed the trade agreements; U.S., Chinese, and ASEAN documents; academic and government studies; prior GAO reports; and documents from multilateral organizations, such as the WTO. We also interviewed U.S. officials in Indonesia and Vietnam, officials from private sector business associations, and experts from think tanks. To calculate examples of tariff reductions from these trade agreements, we used data on trade in goods from the UN Commodity Trade database to identify top traded goods and data from the WTO and the U.S. International Trade Commission on U.S., Chinese, and ASEAN countries’ tariffs. To calculate the percentage of world goods trade for the participants in the Trans-Pacific Partnership negotiations, the Regional Comprehensive Economic Partnership negotiations, and the North American Free Trade Agreement, we used data on trade in goods from the UN Commodity Trade Database. As of July 2015, some countries, such as Malaysia and Italy, had not reported data on trade in goods for 2013, so we used the average of those countries’ available data from 2010 through 2012 as an estimate of their 2013 trade in goods. In addition, Laos did not report data for 2010 through 2013, so we excluded it from the calculations. To calculate the total population and percentage of world GDP for these participants, we used data on population and GDP from the World Bank’s World Development Indicators. To obtain information about U.S. financing, we compiled Ex-Im and OPIC data from these agencies’ annual reports and congressional budget justifications and interviewed agency officials to provide additional context and to clarify elements of the data. Where relevant, we note that additional Ex-Im insurance may include ASEAN countries, but we did not include these amounts in our totals. To determine the reliability of these data, we interviewed agency officials and checked their published annual reports against agency-provided summary data to determine any limitations of the data or discrepancies in the data. We determined that data from Ex-Im and OPIC were generally reliable to present trends and aggregate amounts by year. To document U.S. efforts to provide export promotion services in ASEAN countries, we reviewed State’s Foreign Affairs Manual and information about Commerce and USDA’s export promotion policies and trade missions. We also interviewed State, Commerce, and USDA officials in Washington, D.C., and in Vietnam and Indonesia. We obtained data from State, Commerce, and USDA on the agencies’ staffing in ASEAN countries. To determine the reliability of the data, we obtained information about how the data were collected and tabulated. We determined that the data were sufficiently reliable to show staffing trends over time (see app. IV).
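The estimate for countries with missing 2013 data, described above, is a simple average of the three prior years; where no recent data exist at all (as with Laos), the country is dropped. A minimal sketch, with placeholder figures:

    def estimate_2013(trade_by_year):
        # Average the country's reported 2010-2012 values as a stand-in
        # for its unreported 2013 trade; return None if nothing to average.
        prior = [trade_by_year[y] for y in (2010, 2011, 2012)
                 if trade_by_year.get(y) is not None]
        if not prior:
            return None  # no recent data: exclude from the calculation
        return sum(prior) / len(prior)

    print(estimate_2013({2010: 420.0, 2011: 457.0, 2012: 443.0}))  # 440.0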
To document Chinese financing and support for firms, we used publicly available information from the websites of the Export-Import Bank of China, the China Development Bank, and the China Export & Credit Insurance Corporation. We converted financing numbers from Chinese yuan to U.S. dollars using exchange rates reported by the Federal Reserve. We supplemented this information with estimates from U.S. Ex-Im on Chinese export finance in 2013. We also used information reported by China’s Ministry of Commerce about Chinese investment in special economic zones and from Xinhua, China’s state press agency, about currency swap agreements between China and other countries’ central banks. To obtain data on U.S. official development assistance, we used data from the OECD’s Development Assistance Committee. To obtain data on China’s grants and aid, we used data from China’s 2014 white paper on foreign aid, which describes China’s foreign assistance activities from 2010 through 2012. To determine the reliability of U.S. development assistance data, we interviewed a knowledgeable USAID official about the definitions and collection of the data. We determined that U.S. development assistance data were generally reliable for showing the trends and composition of aid to ASEAN countries over time. China’s white paper does not break out the data that it provides by country; thus, we were unable to provide data on China’s provision of aid to ASEAN countries. To document U.S. support for economic development and integration in ASEAN countries, we used the USAID trade capacity building (TCB) database to capture U.S. development assistance efforts related to trade in ASEAN countries and at the ASEAN Secretariat. USAID collects data to identify and quantify the U.S. government’s TCB activities in developing countries through an annual survey of agencies on behalf of USTR. We also reviewed agency project summaries and interviewed agency officials in Washington, D.C., Indonesia, and Vietnam. Where relevant, we noted that funds provided in a larger region (for example, funds provided to the East Asia and Pacific region) may include ASEAN countries, but we did not include these regional funds in our totals. To determine the reliability of these data, we interviewed agency officials regarding their methods for compiling and reviewing the data. We determined that data from the TCB database were sufficiently reliable for our purposes. To describe China’s support for regional integration, we assessed publicly available information from China’s July 2014 white paper on foreign aid; from Chinese ministries, such as the Ministry of Foreign Affairs; and from Xinhua, China’s state news agency. We determined that the white paper and web publications from the Ministry of Foreign Affairs represented official Chinese statements. We relied on Xinhua for translation of statements by Chinese officials about Chinese initiatives and strategies and for other factual information, such as the location of infrastructure built in ASEAN countries with Chinese support. We conducted this performance audit from April 2014 to August 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Multiple U.S. 
and Chinese entities have roles in managing and conducting economic engagement with member countries of the Association of Southeast Asian Nations (ASEAN). Tables 6 and 7 provide more information about the roles and responsibilities of 10 U.S. entities and 6 Chinese entities that seek to promote trade and investment in, and provide aid to, ASEAN countries. Six Chinese entities manage China’s economic relationship with ASEAN countries. (See table 7.) Machinery is the largest component of both the United States’ and China’s total goods exports and imports with the Association of Southeast Asian Nations (ASEAN) countries, but machinery’s share of goods trade is falling. From 2007 through 2014, approximately 95 percent of U.S. and Chinese traded machinery was for industrial use, to produce other goods. For example, integrated circuits, a top machinery good, are used for producing computers and other electronic goods. The significant role of trade in machinery indicates that ASEAN countries are integrated into the U.S. and Chinese supply chains. Imports. From 2000 through 2014, U.S. imports of machinery from ASEAN countries declined from 61 percent to 40 percent of U.S. imports from ASEAN countries. As of 2014, the next largest categories of U.S. imports from ASEAN countries consisted of “other” (23 percent); textiles (15 percent); chemicals, plastic, and rubber (11 percent); and animals, plants, and food (10 percent). Machinery as a percentage of Chinese imports from ASEAN countries ranged from a low of 41 percent in 2000 to a high of more than 60 percent in 2006 before falling to 43 percent in 2014. Other leading Chinese imports in 2014 were chemicals, plastic, and rubber (16 percent) and mineral products (15 percent). Exports. From 2000 through 2014, machinery declined from 64 percent to 34 percent of U.S. exports to ASEAN countries, as the export share of U.S. transportation products grew from a low of 5 percent in 2000 to a high of 15 percent in 2013. In 2014, the other large categories of U.S. exports to ASEAN countries were “other” (17 percent); animals, plants, and food (14 percent); chemicals, plastic, and rubber (12 percent); and mineral products (6 percent). Machinery as a percentage of Chinese exports to ASEAN countries fell from a high of 46 percent in 2004 to 33 percent in 2014. Other leading Chinese exports in 2014 were “other” (30 percent); metals (13 percent); chemicals, plastic, and rubber (10 percent); and transportation (5 percent). Figure 15 shows the United States’ and China’s total goods imports from ASEAN countries, by type of goods, in 2000 through 2014. Figure 16 shows the United States’ and China’s total goods exports to ASEAN countries, by type of goods, from 2000 through 2014. Tables 8 through 11 show the top 10 exports by the United States, China, the European Union (EU), and Japan to ASEAN countries in 2014. In 2014, electronic integrated circuits and microassemblies was the top export to ASEAN countries for both the United States and China; otherwise, U.S. and Chinese top 10 exports to ASEAN countries did not overlap. Electronic integrated circuits and microassemblies was also the only top 10 U.S. export to ASEAN that was among the top 10 EU exports to ASEAN. Three of the top 10 U.S. 
exports to ASEAN were also among the top 10 Japanese exports: (1) electronic integrated circuits and microassemblies; (2) diodes, transistors and similar semiconductor devices and photosensitive semiconductor devices, light emitting diodes; and (3) gold, nonmonetary (excluding gold ores and concentrates). Overseas staffing in the member countries of the Association of Southeast Asian Nations (ASEAN) by the Departments of State (State) and Commerce (Commerce) generally increased in recent years, while the Department of Agriculture’s (USDA) overseas staffing remained relatively constant. State’s economic foreign service officer (FSO) positions in ASEAN countries increased from 2008 through 2014, with the largest increase in positions from 2011 through 2012. However, State attributed the increase in positions in 2011 and 2012 to a worldwide reclassification of some positions from generalist interfunctional to economic. State economic FSO positions in ASEAN countries as a percentage of economic FSO positions worldwide remained between 6 and 7 percent from 2008 to 2014 (see table 12). Commerce’s FSO positions in ASEAN countries increased in fiscal years 2012 through 2014, while its locally employed staff (LES) positions remained about the same (see table 13). USDA’s FSO and LES positions in ASEAN countries did not change significantly in fiscal years 2009 through 2014. USDA positions in ASEAN countries as a percentage of its global presence gradually increased for FSO positions during these years but decreased somewhat for LES positions from fiscal years 2009 through 2014 (see table 14). U.S. agencies have identified certain official development assistance to member countries of the Association of Southeast Asian Nations (ASEAN) and the ASEAN Secretariat as trade capacity building (TCB) assistance. TCB assistance addresses areas including the regulatory environment for business, trade, and investment; constraints such as low capacity for production and entrepreneurship; and inadequate physical infrastructure, for example, poor transport and storage facilities. Table 15 shows U.S. TCB assistance to ASEAN countries, including the ASEAN Secretariat, in fiscal years 2009 through 2013. As table 15 shows, in fiscal years 2009 through 2011, U.S. TCB assistance to ASEAN countries and the ASEAN Secretariat remained relatively constant at approximately $45 million annually. In fiscal year 2012, the Millennium Challenge Corporation (MCC) signed a $600 million compact with Indonesia. MCC categorized its Indonesia compact’s $332.5 million Green Prosperity Project as TCB assistance, resulting in an increase to $365.5 million in identified TCB assistance that year. In fiscal year 2013, U.S. TCB assistance fell to approximately $34 million. Indonesia was the largest recipient of TCB assistance among ASEAN countries in recent years. Indonesia received almost 70 percent of the TCB funding in fiscal years 2009 to 2013, primarily because of funding from the MCC compact. The Philippines, Vietnam, and Cambodia were the next largest recipients of TCB assistance, with the Philippines receiving about 9 percent of total TCB assistance to ASEAN countries and the Secretariat in fiscal years 2009 through 2013, followed by Vietnam with 6 percent and Cambodia with about 5 percent. Singapore and Brunei did not receive TCB assistance during this period. U.S.
official development assistance (ODA) to member countries of the Association of Southeast Asian Nations (ASEAN) has increased in recent years due to Millennium Challenge Corporation (MCC) commitments and has focused on social infrastructure and services. China does not report ODA by country and does not use the definitions used by the United States and other members of the Organisation for Economic Co-operation and Development’s Development Assistance Committee (OECD-DAC). In calendar years 2005 through 2013, the United States provided approximately $7.2 billion in ODA to ASEAN countries, approximately 2.8 percent of U.S. ODA worldwide for that period. In calendar year 2013, U.S. ODA to ASEAN countries was 4.5 percent of total U.S. ODA. More than half of U.S.-provided ODA to ASEAN countries was for “social infrastructure and services,” which includes categories such as education, health, and assistance to government and civil society. In accordance with the 2005 Paris Declaration on Aid Effectiveness, the U.S. government generally does not condition its aid on, or tie it to, the recipient country’s use of the aid to procure goods or services from the United States. From 2005 through 2013, the three highest U.S. ODA commitment levels to ASEAN countries resulted from large one-time commitments. In 2005, the United States committed $374 million in humanitarian aid to Indonesia. The 2011 and 2013 peaks reflect the entry-into-force of MCC compacts with the Philippines and Indonesia, respectively (see fig. 17). MCC’s 5-year, $434 million Philippines compact consists of rehabilitation of a 222-kilometer road on Samar to improve access to markets and social services ($214.4 million); funding for projects selected by communities, such as water systems, clinics, and schools ($120 million); efforts to improve tax administration ($54.3 million); and $45.1 million for program administration, monitoring, and evaluation. MCC’s 5-year, $600 million Indonesia compact consists of the Green Prosperity Project ($332.5 million) to provide technical and financial assistance for locally identified projects in renewable energy and natural resource management; the Community-Based Health and Nutrition to Reduce Stunting Project ($131.5 million) to improve child health and nutrition; the Procurement Modernization Project ($50 million) to increase institutional capacity and employee knowledge of good procurement practices; and $86 million for program administration, monitoring, and evaluation. Supported by the large commitments by MCC, Indonesia and the Philippines, the first- and second-largest ASEAN countries by population, received the largest percentages of U.S. ODA to ASEAN countries in 2005 through 2013. Vietnam, the third largest, received the third-largest percentage (see table 16). China is not a member of the OECD-DAC and does not provide data according to OECD-DAC definitions and categories. In recent years, however, China has published some information about its foreign assistance. In 2011 and again in July 2014, China released a white paper on its foreign aid. The white paper stated that China appropriated a total of $14.4 billion for global foreign assistance as grants (36 percent of the total), interest-free loans (8 percent), and concessional loans (56 percent) in 2010 through 2012. The white paper also stated that 31 percent ($4.4 billion) of China’s aid was provided to Asia, but did not break out this aid by country.
According to the white paper, 45 percent of China’s total aid was for economic infrastructure and 28 percent for social and public infrastructure; the paper also stated that from 2010 to 2012 China emphasized assistance for infrastructure construction. Unlike OECD-DAC countries, China has not agreed to eliminate tying aid to the use of its own goods and services. In addition to the contact named above, Emil Friberg (Assistant Director), Charles Culverwell, Fang He, Kira Self, Michael Simon, and Eddie W. Uyekawa made key contributions to this report. Benjamin A. Bolitzer, Lynn A. Cothern, Mark B. Dowling, Justin Fisher, Michael E. Hoffman, Reid Lowe, J. Daniel Paulk, and Oziel A. Trevino provided technical assistance.
Both the United States and China seek to deepen their economic engagement with the 10 ASEAN members: Brunei Darussalam, Burma, Cambodia, Indonesia, Laos, Malaysia, the Philippines, Singapore, Thailand, and Vietnam. ASEAN countries are seeking to further integrate their economies and create an economic community by the end of 2015. According to International Monetary Fund data, if ASEAN countries were a single nation, their collective 2014 GDP would represent the seventh largest economy in the world. In 2011, the President announced a renewed focus—known as the rebalance—on the Asia-Pacific region. The U.S. Department of State and U.S. Agency for International Development prepared a 5-year strategy for the rebalance. GAO was asked to examine the United States' and China's economic engagement in the region. This report examines (1) what available data indicate about U.S. and Chinese trade and investment with ASEAN countries and (2) what actions the U.S. and Chinese governments have taken to further economic engagement with these countries. GAO analyzed publicly available economic data and Chinese government documents and reviewed documentation from 10 U.S. agencies. GAO also interviewed U.S. officials and private sector representatives. Technical comments on a draft of this report from several agencies were incorporated by GAO where appropriate. GAO is not making any recommendations in this report. China has surpassed the United States in goods trade with Association of Southeast Asian Nations (ASEAN) countries and trades a similar amount of services, but U.S. investment exceeds reported Chinese investment. China surpassed the United States in goods trade with ASEAN countries in 2007. In 2014, China's total goods trade of $480 billion was more than twice the U.S. total goods trade of $220 billion. Although China is their largest outside trading partner, ASEAN countries trade more with each other. Limited available data indicate that in 2011, the United States and China each traded about $37 billion in services with ASEAN countries. From 2007 through 2012, U.S. foreign direct investment flows to ASEAN countries of $96 billion exceeded China's reported $23 billion. The United States and China are furthering economic engagement with ASEAN countries in several ways. Trade agreements. The United States has a free trade agreement (FTA) with one ASEAN country, Singapore, while China has an FTA with all 10 ASEAN countries. The United States and China are each party to separate regional trade agreement negotiations—the United States through the Trans-Pacific Partnership and China through the Regional Comprehensive Economic Partnership. China's existing FTAs do not address aspects of trade addressed in the U.S.-Singapore FTA, such as intellectual property, the environment, and labor rights. Support for firms. From 2009 through 2014, U.S. agencies provided approximately $6 billion in financing for U.S. firms in ASEAN countries. China reports billions of dollars more in financing than the United States worldwide, but data on China's financing in Southeast Asia are unavailable. Support for regional integration. In fiscal years 2009 through 2013, U.S. agencies provided $536 million in trade capacity building assistance to ASEAN countries. China has promised tens of billions of dollars for infrastructure development through new funds and multilateral institutions like the Asian Infrastructure Investment Bank, expected to begin operations in 2015.
Countries provide food aid through either in-kind donations or cash donations. In-kind food aid is food procured and delivered to vulnerable populations, while cash donations are given to implementing organizations to purchase food in local, regional, or global markets. U.S. food aid programs are all in-kind, and no cash donations are allowed under current legislation. However, the administration has recently proposed legislation to allow up to 25 percent of appropriated food aid funds to be used to purchase commodities closer to where they are needed. Other food aid donors have also recently moved from providing primarily in-kind aid to more or all cash donations for local procurement. Despite ongoing debates as to which form of assistance is more effective and efficient, the largest international food aid organization, the United Nations (UN) World Food Program (WFP), continues to accept both. The United States is both the largest overall and in-kind provider of food aid to WFP, supplying about 43 percent of WFP’s total contributions in 2006 and 70 percent of WFP’s in-kind contributions in 2005. Other major donors of in-kind food aid in 2005 included China, the Republic of Korea, Japan, and Canada. In fiscal year 2006, the United States delivered food aid through its largest program to over 50 countries, with about 80 percent of its funding allocations for in-kind food donations going to Africa, 12 percent to Asia and the Near East, 7 percent to Latin America, and 1 percent to Eurasia. Of the 80 percent of the food aid funding going to Africa, 30 percent went to Sudan, 27 percent to the Horn of Africa, 18 percent to southern Africa, 14 percent to West Africa, and 11 percent to Central Africa. Over the last several years, funding for nonemergency U.S. food aid programs has declined. For example, in fiscal year 2001, the United States directed approximately $1.2 billion of funding for international food aid programs to nonemergencies. In contrast, in fiscal year 2006, the United States directed approximately $698 million for international food aid programs to nonemergencies. U.S. food aid is funded under four program authorities and delivered through six programs administered by USAID and USDA; these programs serve a range of objectives, including humanitarian goals, economic assistance, foreign policy, market development, and international trade. (For a summary of the six programs, see app. I.) The largest program, P.L. 480 Title II, is managed by USAID and represents approximately 74 percent of total in-kind food aid allocations over the past 4 years, mostly to fund emergency programs. The Bill Emerson Humanitarian Trust, a reserve of up to 4 million metric tons of grain, can be used to fulfill P.L. 480 food aid commitments to meet unanticipated emergency needs in developing countries or when U.S. domestic supplies are short. U.S. food aid programs also have multiple legislative and regulatory mandates that affect their operations. One mandate that governs U.S. food aid transportation is cargo preference, which is designed to support a U.S.-flag commercial fleet for national defense purposes. Cargo preference requires that 75 percent of the gross tonnage of all government-generated cargo be transported on U.S.-flag vessels. A second transportation mandate, known as the Great Lakes Set-Aside, requires that up to 25 percent of Title II bagged food aid tonnage be allocated to Great Lakes ports each month. Multiple challenges in logistics hinder the efficiency of U.S.
food aid programs by reducing the amount, timeliness, and quality of food provided. While in some cases agencies have tried to expedite food aid delivery, most food aid program expenditures are for logistics, and the delivery of food from vendor to village is generally too time-consuming to be responsive in emergencies. Factors that increase logistical costs and lengthen time frames include uncertain funding processes and inadequate planning, ocean transportation contracting practices, legal requirements, and inadequate coordination in tracking and responding to food delivery problems. While U.S. agencies are pursuing initiatives to improve food aid logistics, such as prepositioning food commodities and using a new transportation bid process, their long-term cost-effectiveness has not yet been measured. In addition, the current practice of selling commodities to generate cash resources for development projects—monetization—is an inherently inefficient yet expanding use of food aid. Monetization entails not only the costs of procuring, shipping, and handling food, but also the costs of marketing and selling it in recipient countries. Furthermore, the time and expertise needed to market and sell food abroad requires NGOs to divert resources from their core missions. However, the permissible uses of revenues generated from this practice have expanded, and the minimum level of monetization required by law has increased. The monetization rate for Title II nonemergency food aid has far exceeded the minimum requirement of 15 percent, reaching close to 70 percent in 2001 but declining to about 50 percent in 2005. Despite these inefficiencies, U.S. agencies do not collect or maintain data electronically on monetization revenues, and the lack of such data impedes the agencies’ ability to fully monitor the degree to which revenues can cover the costs related to monetization. USAID used to require that monetization revenues cover at least 80 percent of costs associated with delivering food to recipient countries, but this requirement no longer exists. Neither USDA nor USAID was able to provide us with data on the revenues generated through monetization. These agencies told us that the information should be in the results reports, which are in individual hard copies and not available in any electronic database. Various challenges to implementation, improving nutritional quality, and monitoring reduce the effectiveness of food aid programs in alleviating hunger. Since U.S. food aid assists only about 11 percent of the estimated hungry population worldwide, it is critical that donors and implementers use it effectively by ensuring that it reaches the most vulnerable populations and does not cause negative market impact. However, challenging operating environments and resource constraints limit implementation efforts in terms of developing reliable estimates of food needs and responding to crises in a timely manner with sufficient food and complementary assistance. Furthermore, some impediments to improving the nutritional quality of U.S. food aid, including lack of interagency coordination in updating food aid products and specifications, may prevent the most nutritious or appropriate food from reaching intended recipients.
Despite these concerns, USAID and USDA do not sufficiently monitor food aid programs, particularly in recipient countries, as they have limited staff and competing priorities and face legal restrictions on the use of food aid resources. Some impediments to improving nutritional quality further reduce the effectiveness of food aid. Although U.S. agencies have made efforts to improve the nutritional quality of food aid, the appropriate nutritional value of the food and the readiness of U.S. agencies to address nutrition-related quality issues remain uncertain. Further, existing interagency food aid working groups have not resolved coordination problems on nutrition issues. Moreover, USAID and USDA do not have a central interagency mechanism to update food aid products and their specifications. As a result, vulnerable populations may not be receiving the most nutritious or appropriate food from the agencies, and disputes may occur when either agency attempts to update the products. Although USAID and USDA require implementing organizations to regularly monitor and report on the use of food aid, these agencies have undertaken limited field-level monitoring of food aid programs. Agency inspectors general have reported that monitoring has not been regular and systematic, that in some cases intended recipients have not received food aid, or that the number of recipients could not be verified. Our audit work also indicates that monitoring has been insufficient due to various factors, including limited staff, competing priorities, and legal restrictions on the use of food aid resources. In fiscal year 2006, although USAID had some non-Title II-funded staff assigned to monitoring, it had only 23 Title II-funded USAID staff assigned to missions and regional offices in 10 countries to monitor programs costing about $1.7 billion in 55 countries. USDA administers a smaller proportion of food aid programs than USAID, and its field-level monitoring of food aid programs is more limited. Without adequate monitoring from U.S. agencies, food aid programs may not effectively direct limited food aid resources to those populations most in need. As a result, agencies may not be accomplishing their goal of getting the right food to the right people at the right time. U.S. international food aid programs have helped hundreds of millions of people around the world survive and recover from crises since the Agricultural Trade Development and Assistance Act (P.L. 480) was signed into law in 1954. Nevertheless, in an environment of increasing emergencies, tight budget constraints, and rising transportation and business costs, U.S. agencies must explore ways to optimize the delivery and use of food aid. U.S. agencies have taken some measures to enhance their ability to respond to emergencies and streamline the myriad processes involved in delivering food aid. However, opportunities for further improvement remain to ensure that limited resources for U.S. food aid are not vulnerable to waste, are put to their most effective use, and reach the most vulnerable populations on a timely basis. To improve the efficiency of U.S.
food aid—in terms of its amount, timeliness, and quality—we recommended in our previous report that the Administrator of USAID and the Secretaries of Agriculture and Transportation (1) improve food aid logistical planning through cost-benefit analysis of supply-management options; (2) work together and with stakeholders to modernize ocean transportation and contracting practices; (3) seek to minimize the cost impact of cargo preference regulations on food aid transportation expenditures by updating implementation and reimbursement methodologies to account for new supply practices; (4) establish a coordinated system for tracking and resolving food quality complaints; and (5) develop an information collection system to track monetization transactions. To improve the effective use of food aid, we recommended that the Administrator of USAID and the Secretary of Agriculture (1) enhance the reliability and use of needs assessments for new and existing food aid programs through better coordination among implementing organizations, make assessments a priority in informing funding decisions, and more effectively build on lessons from past targeting experiences; (2) determine ways to provide adequate nonfood resources in situations where there is sufficient evidence that such assistance will enhance the effectiveness of food aid; (3) develop a coordinated interagency mechanism to update food aid specifications and products to improve food quality and nutritional standards; and (4) improve monitoring of food aid programs to ensure proper management and implementation. DOT, USAID, and USDA—the three U.S. agencies to which we directed our recommendations—have submitted written statements to congressional committees, as required by law, to report actions they have taken or begun to take to address our recommendations. In May 2007, these agencies established an interagency Executive Working Group to identify ways to respond to several of our recommendations. DOT stated that it strongly supported the transportation-related initiatives we recommended, noting that they offer the potential to help U.S. agencies achieve efficiencies and reduce ocean transportation costs while supporting the U.S. merchant fleet. USAID outlined actions it is considering, has initiated, or intends to take to address each of our nine recommendations. USDA stated that in general it found our recommendations to be helpful and cited some of its ongoing efforts to improve its food aid programs. However, USDA questioned some of our conclusions that it believed were the result of weaknesses in our methodology. For example, USDA does not agree that the current practice of monetization as a means to generate cash for development projects is an inherently inefficient use of resources. We maintain that it is an inherently inefficient use of resources because it requires food to be procured, shipped, and eventually sold, and the revenues from monetization may not recover shipping, handling, and other costs. Furthermore, U.S. agencies do not electronically collect data on monetization revenues, without which their ability to adequately monitor the degree to which revenues cover costs is impeded. We stand by our conclusions and recommendations, which are based on a rigorous and systematic review of multiple sources of evidence, including procurement and budget data, site visits, previous audits, agency studies, economic literature, and testimonial evidence collected in both structured and unstructured formats.
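The cost-recovery logic behind this disagreement can be made concrete with a small worked example, in the spirit of USAID's former benchmark that monetization revenues cover at least 80 percent of delivery costs. All figures below are hypothetical.

    # Cost-recovery rate for a hypothetical monetization transaction:
    # sale proceeds divided by the costs of procuring, shipping, and
    # handling the commodities. Figures are illustrative only.
    procurement = 1_000_000   # commodity purchase cost, in dollars
    shipping = 450_000        # ocean freight and inland transport
    handling = 150_000        # storage, marketing, and sale costs
    revenue = 1_100_000       # proceeds from sale in recipient country

    recovery_rate = revenue / (procurement + shipping + handling)
    print(f"{recovery_rate:.0%}")  # 69%, below the former 80% benchmark

Without electronic data on monetization revenues, this ratio cannot be computed systematically across transactions, which is the monitoring gap described above.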
Madam Chair and Members of the Subcommittee, this concludes my prepared statement. I would be pleased to answer any questions that you may have. Should you have any questions about this testimony, please contact Thomas Melito, Director, at (202) 512-9601 or MelitoT@gao.gov. Other major contributors to this testimony were Phillip Thomas (Assistant Director), Carol Bray, Ming Chen, Debbie Chung, Martin De Alteriis, Leah DeWolf, Mark Dowling, Etana Finkler, Kristy Kennedy, Joy Labez, Kendall Schaefer, and Mona Sehgal. The United States has principally employed six programs to deliver food aid: Public Law (P.L.) 480 Titles I, II, and III; Food for Progress; the McGovern-Dole Food for Education and Child Nutrition Program; and Section 416(b). Table 1 provides a summary of these food aid programs.
The United States is the largest global food aid donor, accounting for over half of all food aid supplies to alleviate hunger and support development. Since 2002, Congress has appropriated an average of $2 billion per year for U.S. food aid programs, which delivered an average of 4 million metric tons of food commodities per year. Despite growing demand for food aid, rising business and transportation costs have contributed to a 52 percent decline in average tonnage delivered between 2001 and 2006. These costs represent 65 percent of total emergency food aid expenditures, highlighting the need to maximize the efficiency and effectiveness of such aid. This testimony is based on a recent GAO report that examined some key challenges to the (1) efficiency of U.S. food aid programs and (2) effective use of U.S. food aid. Multiple challenges hinder the efficiency of U.S. food aid programs by reducing the amount, timeliness, and quality of food provided. Factors that cause inefficiencies include (1) insufficiently planned food and transportation procurement, reflecting uncertain funding processes, that increases delivery costs and time frames; (2) ocean transportation and contracting practices that create high levels of risk for ocean carriers, resulting in increased rates; (3) legal requirements that result in awarding of food aid contracts to more expensive service providers; and (4) inadequate coordination between U.S. agencies and food aid stakeholders in tracking and responding to food and delivery problems. U.S. agencies have taken some steps to address timeliness concerns. USAID has been stocking or prepositioning food domestically and abroad, and USDA has implemented a new transportation bid process, but the long-term cost-effectiveness of these initiatives has not yet been measured. The current practice of using food aid to generate cash for development projects--monetization--is also inherently inefficient. Furthermore, since U.S. agencies do not collect monetization revenue data electronically, they are unable to adequately monitor the degree to which revenues cover costs. Numerous challenges limit the effective use of U.S. food aid. Factors contributing to limitations in targeting the most vulnerable populations include (1) challenging operating environments in recipient countries; (2) insufficient coordination among key stakeholders, resulting in disparate estimates of food needs; (3) difficulties in identifying vulnerable groups and causes of their food insecurity; and (4) resource constraints that adversely affect the timing and quality of assessments, as well as the quantity of food and other assistance. Furthermore, some impediments to improving the nutritional quality of U.S. food aid may reduce its benefits to recipients. Finally, U.S. agencies do not adequately monitor food aid programs due to limited staff, competing priorities, and restrictions on the use of food aid resources. As a result, these programs are vulnerable to not getting the right food to the right people at the right time.
The SSI program provides eligible aged, blind, or disabled persons with monthly cash payments to meet basic needs for food, clothing, and shelter. State Disability Determination Services determine whether SSI applicants are medically disabled, and SSA field office staff determine whether applicants meet the program’s nonmedical (age and financial) eligibility requirements. To be eligible for SSI in 2002, persons may not have income greater than $545 per month ($817 for a couple) or resources worth more than $2,000 ($3,000 for a couple). When applying for SSI, persons must report information about their income, financial resources, and living arrangements that affect their eligibility. Similarly, once approved, recipients must report changes to these factors in a timely manner. To a significant extent, SSA depends on program applicants and recipients to report changes in their medical or financial circumstances that may affect eligibility. To verify this information, SSA generally uses computer matching to compare SSI payment records with similar information contained in other federal and state government agencies’ records. To determine whether recipients remain financially eligible for SSI benefits, SSA also conducts periodic redetermination reviews to verify eligibility factors such as income, resources, and living arrangements. Recipients are reviewed at least every 6 years, but reviews may be more frequent if SSA determines that changes in eligibility are likely. In general, the SSI program is difficult and costly to administer because even small changes in monthly income, available resources, or living arrangements can affect benefit amounts and eligibility. Complicated policies and procedures determine how to treat various types of income, resources, and support that a recipient may receive. SSA must constantly monitor these situations to ensure benefit payments are accurate. After reviewing work spanning more than a decade, we designated SSI a high-risk program in 1997 and initiated work to document the underlying causes of long-standing problems and their impact on program integrity. In 1998, we reported on a variety of management issues related to the deterrence, detection, and recovery of SSI overpayments. Over the last several years, we also issued a number of reports and testimonies documenting SSA’s progress in addressing these issues. SSA has since demonstrated a stronger management commitment to SSI program integrity issues, and today SSA has a much greater capability to verify program eligibility and detect payment errors than it did several years ago. However, weaknesses remain. SSA has made limited progress toward simplifying complex program rules that contribute to payment errors and is not fully utilizing several overpayment prevention tools, such as penalties and the suspension of benefits for recipients who fail to report eligibility information as required. SSA issued a report in 1998 outlining its strategy for addressing SSI program integrity problems and submitted proposals to Congress requesting new authorities and tools to implement its strategy. The Foster Care Independence Act of 1999 gave SSA new authority to deter fraudulent or abusive actions, better detect changes in recipient income and financial resources, and improve its ability to recover overpayments.
Of particular note is a provision in the act that strengthened SSA’s authority to obtain applicant resource information from banks and other financial institutions, since unreported financial resources are the second largest source of SSI overpayments. SSA also sought and received legislative authority to impose a period of benefit ineligibility ranging from 6 to 24 months for individuals who knowingly misrepresent facts. In addition to seeking and obtaining new legislative authority, SSA also began requiring its field offices to complete 99 percent of their assigned financial redetermination reviews and other cases where computer matching identified a potential overpayment situation caused by unreported wages, changes in living arrangements, or other factors. To further increase staff attention to program integrity issues, SSA also revised its work measurement system—used for estimating resource needs, gauging productivity, and justifying staffing levels—to include staff time spent developing information for referrals of potentially fraudulent cases to its Office of Inspector General (OIG). Consistent with this new emphasis, the OIG also increased the level of resources and staff devoted to investigating SSI fraud and abuse, to detect and prevent overpayments earlier in the disability determination process. The OIG reported that its investigative teams saved almost $53 million in fiscal year 2001 in improper benefit payments by providing information that led to denial of a claim or the cessation of benefits. Further, in a June 2002 SSI corrective action plan, SSA reaffirmed its commitment to taking actions to facilitate the removal of the SSI program from our high-risk list. To ensure effective implementation of this plan, SSA has assigned senior managers responsibility for overseeing additional planned initiatives, which include piloting new quality assurance systems, testing whether touchtone telephone technology can improve the reporting of wages, and using credit bureau data and public databases to better detect underreported income and unreported resources (automobiles and real property). To assist field staff in verifying the identity of recipients, SSA is also exploring the feasibility of requiring new SSI claimants to be photographed as a condition of receiving benefits. SSA has made several automation improvements over the last several years to help field managers and staff control overpayments. Last year, the agency distributed software nationwide that automatically scans multiple internal and external databases containing recipient financial and employment information and identifies potential changes in income and resources. This examination of financial data occurs automatically whenever a recipient’s Social Security number (SSN) is entered into the system. SSA also made systems enhancements to better identify newly entitled recipients with unresolved overpayments from a prior SSI coverage period. Now, the process of detecting overpayments from a prior eligibility period and updating recipient records occurs automatically. Thus, a substantial amount of outstanding overpayments that SSA might not have detected under prior processes is now subject to collection action. In fact, the monthly amount of outstanding overpayments transferred to current records has increased on average by nearly 200 percent, from $12.9 million a month in 1999 to more than $36 million per month in 2002.
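A simplified sketch of the kind of SSN-keyed matching described above follows. The record layouts are hypothetical, and the $545 figure is the 2002 individual monthly income limit cited earlier; an actual match would involve many more data sources and business rules.

    # Compare SSI payment records against an external wage file keyed on
    # Social Security number and flag recipients whose reported income
    # disagrees with the external source or exceeds the income limit.
    ssi_records = {"123-45-6789": {"reported_monthly_income": 200}}
    wage_records = {"123-45-6789": {"monthly_wages": 700}}

    def flag_discrepancies(ssi, wages, income_limit=545):
        flagged = []
        for ssn, record in ssi.items():
            external = wages.get(ssn)
            if external is None:
                continue  # no external record to compare against
            gap = external["monthly_wages"] - record["reported_monthly_income"]
            if gap > 0 or external["monthly_wages"] > income_limit:
                flagged.append((ssn, gap))  # route to staff for review
        return flagged

    print(flag_discrepancies(ssi_records, wage_records))  # [('123-45-6789', 500)]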
In addition to systems and software upgrades, SSA now uses more timely and comprehensive data to identify information that can affect SSI eligibility and benefit amounts. Consistent with a recommendation in our prior report, SSA obtained access to the Office of Child Support Enforcement’s National Directory of New Hires (NDNH), which is a comprehensive source of unemployment insurance and wage and new hires data for the nation. In January 2001, SSA field staff received access to NDNH for use in verifying applicant eligibility during the initial claims process. Recently, SSA also began requiring staff to use NDNH as a post-eligibility tool for verifying current recipients’ continuing eligibility. With NDNH, SSA field staff now have access to more comprehensive and timely employment and wage information essential to verifying factors affecting SSI eligibility. SSA has estimated that using NDNH will result in about $200 million in overpayment preventions and recoveries per year. SSA has also enhanced existing computer data matches to better verify continuing financial eligibility. For example, SSA now matches SSI recipient SSNs against its master earnings record semiannually. In 2001, SSA flagged over 206,000 cases for investigation of unreported earnings, a three-fold increase over 1997 levels. To better identify individuals receiving income from unemployment insurance benefits, quarterly data matches have also replaced annual matches. Accordingly, the number of unemployment insurance detections has increased from 10,400 in 1997 to 19,000 last year. Further, SSA’s ability to detect nursing home admissions, which can affect SSI benefits, has improved. SSA now conducts monthly matches with all states, and the number of overpayment detections related to nursing home admissions has increased substantially, from 2,700 in 1997 to more than 75,000 in 2001. SSA’s ability to detect recipients residing in prisons has also improved. Over the past several years, SSA has established agreements with prisons that house 99 percent of the inmate population, and last year it reported suspending benefits to 54,000 prisoners. Lastly, SSA has increased the frequency with which it matches recipient SSNs against tax records and other data essential to identifying any unreported interest, income, dividends, and pension income individuals may be receiving. These matching efforts have also resulted in thousands of additional overpayment detections over the last few years. To obtain more current information on the income and resources of SSI recipients, SSA has also increased its use of on-line access to various state program data, such as unemployment insurance and workers’ compensation. As a tool for verifying SSI eligibility, direct on-line connections are generally more effective than periodic computer matches, because the information is more timely. Thus, SSA staff can quickly identify potential disqualifying income or resources at the time of application and before overpayments occur. In many instances, this allows the agency to avoid the difficult and often unsuccessful task of recovering overpaid SSI benefits. Field staff can directly query various state records to quickly identify workers’ compensation, unemployment insurance, or other state benefits individuals may be receiving. As of January 2002, SSA had on-line access to data from 73 agencies in 42 states, compared with 43 agencies in 26 states in 1998.
Finally, to further strengthen program integrity, SSA took steps to improve its SSI financial redetermination review process. It increased the number of annual reviews from 1.8 million in fiscal year 1997 to 2.4 million in fiscal year 2001 and substantially increased the number of reviews conducted through personal contact with recipients, from 237,000 in 1997 to almost 700,000 in fiscal year 2002. SSA also refined its profiling methodology in 1998 to better target recipients who are most likely to have payment errors. SSA’s data show that the estimated overpayment benefits from these reviews—amounts detected and future amounts prevented—increased by $99 million over the prior year. Agency officials indicated that limited resources would affect SSA’s ability to do more reviews and still meet other agency priorities. In June 2002, SSA informed us that the Commissioner of SSA had recently decided to make an additional $21 million available to increase the number of redeterminations this year. Despite its increased emphasis on overpayment detection and deterrence, SSA is not meeting its payment accuracy goals. In 1998, SSA pledged to increase its SSI overpayment accuracy rate from 93.5 percent to 96 percent by fiscal year 2002; however, the latest payment accuracy rate is 93.6 percent, and SSA does not anticipate achieving the 96 percent target until 2005. Various factors may account for SSA’s inability to achieve its SSI accuracy goals, including the fact that key initiatives that might improve SSI overpayment accuracy have only recently begun. For example, field offices started to access NDNH wage data in 2001. This could eventually help address the number one source of overpayments—unreported wages, which in fiscal year 2000 accounted for $477 million in overpayments, or about 22 percent of overpayment errors. Further, SSA’s data show that unreported financial resources, such as bank accounts, are the second largest source of SSI overpayments. Last year, overpayments attributable to this category totaled about $394 million, or 18 percent of all overpayments detected. SSA now has enhanced authority to obtain applicant resource information from financial institutions and plans to implement a pilot program later this year. Thus, when fully implemented, this tool may also help improve the SSI payment accuracy rate. SSA has made only limited progress toward addressing excessively complex rules for assessing recipients’ living arrangements, which have been a significant and long-standing source of payment errors. SSA staff must apply a complex set of policies to document an individual’s living arrangements and the value of in-kind support and maintenance (ISM) being received, which are essential to determining benefit amounts. Details such as usable cooking and food storage facilities with separate temperature controls, availability of bathing services, and whether a shelter is publicly operated can affect benefits. These benefit determination policies depend heavily on recipients to accurately report whether they live alone or with others; the relationships involved; the extent to which rent, food, utilities, and other household expenses are shared; and exactly what portion of those expenses an individual pays. Over the life of the SSI program, these policies have become increasingly complex as a result of new legislation, court decisions, and SSA’s own efforts to achieve benefit equity for all recipients.
The complexity of SSI program rules pertaining to living arrangements, ISM, and other areas of benefit determination is reflected in the program’s administrative costs. In fiscal year 2001, SSI benefit payments represented about 6 percent of benefits paid under all SSA-administered programs, but the SSI program accounted for 31 percent of the agency’s administrative expenses. Although SSA has examined various options for simplifying rules concerning living arrangements and ISM over the last several years, it has yet to take action to implement a cost-effective strategy for change. During our recent fieldwork, staff and managers continued to cite program complexity as a problem leading to payment errors, program abuse, and excessive administrative burdens. In addition, overpayments associated with living arrangements and ISM remain among the leading causes of overpayments, after unreported wages and resources. SSA’s lack of progress in addressing program simplification issues may limit its overall effectiveness at reducing payment errors and achieving its long-range 96 percent payment accuracy goal. SSA’s fiscal year 2000 payment accuracy report noted that it would be difficult to achieve SSI accuracy goals without some policy simplification initiatives. In its recently issued SSI Corrective Action Plan, SSA stated that within the next several years it plans to conduct analyses of alternative program simplification options beyond those already assessed. Our work shows that administrative penalties and sanctions remain underutilized in the SSI program. Under the law, SSA may impose administrative penalties on recipients who do not file timely reports about factors or events that can lead to reductions in benefits—changes in wages, resources, living arrangements, and other support being received. Penalty amounts are $25 for a first occurrence, $50 for a second occurrence, and $100 for the third and subsequent occurrences. The penalties are meant to encourage recipients to file accurate and timely reports of information so that SSA can adjust its records to correctly pay benefits. The Foster Care Independence Act also gave SSA authority to impose benefit sanctions on persons who make representations of material facts that they knew, or should have known, were false or misleading. In such circumstances, SSA may suspend benefits for 6 months for the initial violation, 12 months for the second violation, and 24 months for subsequent violations. SSA issued interim regulations to implement these sanction provisions in July 2000. Currently, however, staff rarely use penalties to encourage recipient compliance with reporting policies. SSA data show that, over the last several years, the failure of recipients to report key information accounted for 71 to 76 percent of overpayment errors and that these errors involved about 1 million recipients annually. Based on SSA records, we estimate that at most about 3,500 recipients were penalized for reporting failures in fiscal year 2001. SSA staff we interviewed cited a number of obstacles or impediments to imposing penalties, as noted in our 1998 report, such as: (1) penalty amounts are too low to be effective; (2) imposition of penalties is too administratively burdensome; and (3) SSA management does not encourage the use of penalties. Although SSA has issued guidance to field office staff emphasizing the importance of assessing penalties, this action alone does not sufficiently address the obstacles cited by SSA staff.
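The escalating schedules just described are simple enough to encode directly. The sketch below captures only what the text above states (penalties of $25, $50, then $100 per occurrence; suspensions of 6, 12, then 24 months); the function form itself is illustrative.

    def reporting_penalty(occurrence):
        # $25 for a first failure to report, $50 for a second,
        # $100 for the third and all subsequent occurrences.
        return 25 if occurrence == 1 else 50 if occurrence == 2 else 100

    def sanction_months(violation):
        # Benefit suspension: 6 months for the initial violation,
        # 12 for the second, 24 for subsequent violations.
        return 6 if violation == 1 else 12 if violation == 2 else 24

    print([reporting_penalty(n) for n in (1, 2, 3, 4)])  # [25, 50, 100, 100]
    print([sanction_months(n) for n in (1, 2, 3)])       # [6, 12, 24]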
SSA’s administrative sanction authority also remains rarely used. SSA data indicate that, between June 2000 and February 2002, SSA field office staff referred about 3,000 SSI cases to the OIG because of concerns about fraudulent activity. In most instances, the OIG returned the referred cases to the field office because they did not meet prosecutorial requirements, such as high amounts of benefits erroneously paid. Despite the large number of cases where staff believed fraud and abuse might be occurring, as of January 2002, field staff had actually imposed sanctions in only 21 SSI cases. Our interviews with field staff identified insufficient awareness of the new sanction authority and some confusion about when to impose sanctions. In one region, for example, staff and managers told us that they often referred cases to the OIG when fraud was suspected, but that it had not occurred to them that these cases could be considered for benefit sanctions if the OIG did not pursue investigation and prosecution. In our prior work, we reported that SSA had historically placed insufficient emphasis on recovering SSI overpayments. Over the past several years, SSA has been working to implement new legislative provisions to improve the recovery of overpayments. However, a number of key initiatives are still in the early planning or implementation stages, and it is too soon to gauge what effect they will have on SSI collections. Moreover, we are also concerned that SSA’s current waiver policies and practices may be preventing the collection of millions of dollars in outstanding debt. In 1998, SSA began seizing the tax refunds of former SSI recipients with outstanding overpayments. SSA reported that, as of the end of calendar year 2001, this initiative had yielded $221 million in additional overpayment recoveries. In 2002, SSA also began recovering SSI overpayments by reducing the Social Security retirement and disability benefits of former recipients without first obtaining their consent. SSA expects that this initiative will produce about $115 million in additional overpayment collections over the next several years. SSA also recently began reporting former recipients with outstanding debts to credit bureaus and to the Department of the Treasury. Credit bureau referrals are intended to encourage individuals to voluntarily begin repaying their outstanding debts. The referrals to Treasury will provide SSA with an opportunity to seize other federal benefit payments individuals may be receiving. While overpayment recovery practices have been strengthened, SSA has not yet implemented some key recovery initiatives that have been available to the agency for several years. Although regulations have been drafted, SSA has not yet implemented administrative wage garnishment, which was authorized in the Debt Collection Improvement Act of 1996. In addition, SSA has not implemented several provisions in the Foster Care Independence Act of 1999. These provisions allow SSA to offset federal salaries of former recipients, use collection agencies to recover overpayments, and levy interest on outstanding debt. According to SSA, draft regulations for several of these initiatives are being reviewed internally. SSA officials said that they could not estimate when these additional recovery tools will be fully operational.
Our work showed that SSI overpayment waivers have increased significantly over the last decade and that current waiver policies and practices may cause SSA to unnecessarily forgo millions of dollars in additional overpayment recoveries annually. Waivers are requests by current and former SSI recipients for relief from the obligation to repay SSI benefits to which they were not entitled. Under the law, SSA field staff may waive an SSI overpayment when the recipient is without fault and the collection of the overpayment defeats the purpose of the program, is against equity and good conscience, or impedes effective and efficient administration of the program. To be deemed without fault, and thus eligible for a waiver, recipients are expected to have exercised good faith in reporting information to prevent overpayments. If SSA determines a person is without fault in causing the overpayment, it then must determine whether one of the three other conditions also exists before granting a waiver. Specifically, SSA staff must determine whether denying a waiver request and recovering the overpayment would defeat the purpose of the program because the affected individual needs all of his or her current income to meet ordinary and necessary living expenses. To determine whether a waiver denial would be against equity and good conscience, SSA staff must decide whether an individual incurred additional expenses in reliance on the benefit, such that requiring repayment would worsen his or her economic condition. Finally, SSA may grant a waiver when recovery of an overpayment may impede the effective or efficient administration of the program—for example, when the overpayment amount is equal to or less than the average administrative cost of recovering an overpayment, which SSA currently estimates to be $500. Thus, the field staff we interviewed generally waive overpayments of $500 or less automatically.
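The waiver criteria and the $500 threshold reduce to a short decision rule. A minimal sketch follows, reflecting the field practice described above (small overpayments waived automatically under the impedes-administration test); the function name and boolean inputs are hypothetical, not SSA's actual adjudication logic.

AUTO_WAIVER_THRESHOLD = 500  # SSA's estimated average cost of recovering an overpayment

def may_waive(amount, without_fault, defeats_purpose, against_equity):
    # Impedes-administration test: recovery would cost more than the debt.
    # Field staff told us they generally waive such amounts automatically.
    if amount <= AUTO_WAIVER_THRESHOLD:
        return True
    # Otherwise: without fault AND at least one of the remaining tests.
    return without_fault and (defeats_purpose or against_equity)

print(may_waive(450, without_fault=False, defeats_purpose=False, against_equity=False))  # True
print(may_waive(1_200, without_fault=True, defeats_purpose=True, against_equity=False))  # True
print(may_waive(1_200, without_fault=False, defeats_purpose=True, against_equity=True))  # False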
In December 1993, SSA markedly increased the threshold for automatic SSI overpayment waivers from $100 to $500. Officials told us that this change was based on an internal study of administrative costs related to investigating and processing waiver requests under SSA’s Title II disability and retirement programs, not on SSI waivers directly. They were unable to locate the study for our review and evaluation. While staff and managers had varying opinions regarding the time and administrative costs associated with denying waiver requests, they acknowledged that numerous recent automation upgrades may be cause for reexamining the current $500 waiver threshold.

Our analysis of waiver data indicated that since the automatic waiver threshold was changed, the amount of SSI overpayments waived increased 400 percent, from $32 million in fiscal year 1993 to $161 million in fiscal year 2001. This increase has significantly outpaced the growth in both the number of SSI recipients served and total annual benefits paid, which increased by 12 and 35 percent, respectively, during this same period. Furthermore, the ratio of waived overpayments to total SSI collections has also increased. In fiscal year 1993, the overpayments SSA waived were equivalent to about 13 percent of its SSI collections. By 1995, waiver amounts had more than doubled, to $66 million, and were equivalent to about 20 percent of SSI collections for that year. By fiscal year 2001, SSI waivers represented nearly 23 percent of SSI collections.

While not conclusive, the data indicate that liberalization of the SSI waiver threshold may be a factor in the increase in waived overpayments. SSA has not studied the impact of the increased threshold. However, officials believe that the trend in waived SSI overpayments is more likely due to annual increases in the number of periodic reviews of recipients' medical eligibility. These reviews have resulted in an increase in benefit terminations and subsequent recipient appeals. During the appeals process, recipients have the right to request that their benefits be continued. Those who lose their appeal can then request a waiver of any overpayments that occurred during the appeal period. SSA will usually grant these requests under its current waiver policies.

Another factor affecting trends in waivers may be staff application of waiver policies and procedures. Although SSA has developed guidance to assist field staff in deciding whether to deny or grant waivers, we found that field staff have considerable leeway to grant waivers based on an individual’s claim that he or she reported information to SSA that would have prevented an overpayment. In addition, waivers granted for amounts of less than $2,000 are not subject to second-party review, while another employee in the office—not necessarily a supervisor—must review those above $2,000. During our field visits, we also identified variation among staff in their understanding of how waiver decisions should be processed, including the extent to which they receive supervisory review and approval. In some offices, review was often minimal or nonexistent regardless of the waiver amount, while other offices required stricter peer or supervisory review.

In 1999, SSA’s OIG reported that the complex and subjective nature of SSA’s Title II waiver process, as well as clerical errors and misapplication of policies by staff, resulted in SSA’s incorrectly waiving overpayments in 9 percent of the 26,000 cases it reviewed. The report also noted that 50 percent of the waivers reviewed were unsupported and that the OIG could not make a judgment as to the appropriateness of those decisions. While the OIG examined only waivers under the Title II programs and for amounts over $500, the criteria for granting SSI waivers are generally the same. Thus, we are concerned that similar problems with the application of waiver policies could be occurring in the SSI program.

Mr. Chairman, this concludes my prepared statement. I will be happy to respond to any questions you or other Members of the Subcommittee may have. For information regarding this testimony, please contact Robert E. Robertson, Director, or Dan Bertoni, Assistant Director, Education, Workforce, and Income Security, at (202) 512-7215. Individuals making contributions to this testimony include Barbara Alsip, Gerard Grant, William Staab, Vanessa Taylor, and Mark Trapani.
As the nation's largest cash assistance program for the poor, the Supplemental Security Income (SSI) program provided $33 billion in benefits to 6.8 million aged, blind, and disabled persons last year. In 2001, outstanding SSI debt and newly detected overpayments totaled $4.7 billion. To deter and detect overpayments, SSA obtained legislative authority to use additional tools to verify recipients' financial eligibility for benefits, enhanced its processes for monitoring and holding staff accountable for completing assigned SSI workloads, and improved its use of automation to strengthen its overpayment detection capabilities. However, because a number of initiatives are still in the planning or early implementation stages, it is too soon to assess their ultimate impact on SSI payment accuracy. In addition to improving its overpayment deterrence and detection capabilities, SSA has made recovery of overpaid benefits a high priority.
Head Start is administered by HHS’ Administration for Children and Families (ACF). Services are provided at the local level by public and private nonprofit agencies that receive their funding directly from HHS. These agencies include public and private school systems, community action agencies, government agencies, and Indian tribes. Grantees may contract with one or more other public or private nonprofit organizations—commonly referred to as delegate agencies—in the community to run all or part of their local Head Start programs. Grantees may choose to provide center-based programs, home-based programs, or a combination of both.

Once approved for funding as a result of a competitive application process, Head Start grantees do not compete for funding in succeeding years. However, they are required to submit applications for continuation awards (hereafter called awards) to support their programs beyond the initial grantee budget year. After Head Start receives its annual appropriation from the Congress, the respective HHS regional offices make awards to grantees in their administrative service areas at the beginning of each grantee’s budget year, as shown in table 1.

Grantees use their awards, among other purposes, to purchase or rent a facility if providing a center-based program; hire qualified teachers, aides, and support staff; coordinate or contract with public health agencies and local health providers to deliver medical and dental services; buy or lease vehicles to transport children to Head Start centers; purchase utilities, services, and supplies needed to operate a center and administer the program; and comply with program standards and local building and health codes that ensure quality and safety.

During a grantee budget year, grantees may also receive supplemental awards for specific purposes (such as expanding enrollment) or to cover normal, though sometimes unexpected, expenses such as repairing a roof or purchasing a new heating system. In addition, grantee accounts may be adjusted as the result of a routine financial audit or a Head Start regional office review of grantee files. These activities sometimes identify unspent funds that the grantee did not report due to an error or oversight. HHS requires grantees to get their Head Start accounts audited every 2 years, though many grantees hire accountants to perform an audit every year.

As shown in figure 1, grantees, as expected, may not necessarily spend all of their award by the end of their budget year. HHS permits grantees to carry over unspent funds into the next grantee budget year to complete any program objectives that remain unmet from the previous year. HHS regional offices generally handle carryover funds in two ways:

1. Carryover balances from a previous year or years are added to an award that a grantee receives in a subsequent year. This procedure is known as “reprogramming” funds, and the amount of carryover funds added to a grantee’s award is called total obligating authority (TOA).

2. Carryover balances from a previous year or years offset or reduce the award that a grantee receives in a subsequent year. This procedure is known as “offsetting” funds, and the amount of carryover deducted from the award is called new obligating authority (NOA).

The growth in Head Start funding since 1990 (see fig. 2) reflects the federal government’s commitment to expanding the number of children in the program and to ensuring program quality.
Overall program funding increased from about $1.5 billion in fiscal year 1990 to about $3.5 billion in fiscal year 1995. Twice in fiscal year 1990 and once each in fiscal years 1991, 1992, and 1993, the Congress appropriated additional funding for Head Start to, among other things, increase local enrollments; strengthen the program’s social, health, and parent involvement components; improve services for disabled children; initiate and improve literacy programs; and enhance salaries, benefits, training, and technical assistance for program staff. ACF allocated these expansion funds on the basis of a formula, as required by statute.

Despite this dramatic growth in Head Start appropriations, HHS awarded virtually all program funding to eligible grantees. Head Start’s program obligation rates for each of these years stayed at or above 99 percent, while the total number of grantees increased from 1,321 in fiscal year 1990 to about 1,400 in fiscal year 1994. Overall program outlay rates (that is, the ratio of outlays to budget authority) during this period indicate that outlays remained stable as grantees received infusions of Head Start expansion or quality improvement funding. However, at the grantee level, this funding growth increased grantee awards and unspent balances for the grantees in our universe during the grantee budget years we examined.

We found that total awards for the 1,197 Head Start grantees covered by our review increased from $1.4 billion to $2.3 billion from grantee budget years 1992 through 1994, while mean awards rose from $1.2 million to $1.9 million in these same years. (See table 2.) During grantee budget years 1992, 1993, and 1994—a period of intense growth—about two-thirds of the 1,197 grantees had unspent balances at the end of each budget year. Almost 40 percent of these 1,197 grantees had unspent balances every year. As shown in table 2, these balances totaled approximately $54 million, $101 million, and $130 million in grantee budget years 1992, 1993, and 1994, respectively, and varied greatly by grantee. However, these unspent balances were a small part of grantees’ total awards. On the basis of our analysis, unspent balances represented from about 5 to 8 percent of the award for those grantees with unspent balances and from 4 to 6 percent of total awards for all grantees in the aggregate. (See app. II for the reported unspent balances of the 108 grantees included in our sample.)

Unspent balances resulted from (1) small differences between the amount of a grantee’s annual award and its actual expenditures at the end of its grantee budget year, (2) situations that delayed a grantee’s expenditure of funds or hampered a grantee’s ability to spend funds before the year’s end, and (3) a combination of these and other reasons. We found that almost two-thirds of grantees in grantee budget year 1992 and about half in grantee budget years 1993 and 1994 had small differences between their total award approved at the beginning of a grantee budget year and the amount spent at year’s end. We considered these spending variances small if the amount of unspent funds was 5 percent or less of a grantee’s award in a given year. These small budget variances could have occurred because, for example, (1) grantees’ projected budgets—upon which grant awards are based—did not equal their actual expenditures or (2) grantees did not purchase an item or service as originally planned.
For example, a grantee in Ohio had ordered two buses and playground equipment for its Head Start center. However, these items were neither delivered nor paid for before the grantee’s budget year ended, resulting in an unspent balance of $84,762.

We found that from 10 to 24 percent of grantees with unspent balances in grantee budget years 1992 through 1994 (1) had problems renovating or building a center, which delayed planned expenditures until subsequent years, or (2) received additional funding late in a grantee budget year, making it difficult to spend all of their funds before year’s end. For example, a Head Start grantee in Colorado received funding to increase its program enrollment in early September 1991—about 2 months before the grantee’s budget year was to end on October 30. Due to the short time remaining, the grantee could not spend $89,980 of the amount awarded for expanding program enrollment. This same grantee had agreed verbally with a private company to prepare a site so that the grantee could place a modular unit on it to serve as a Head Start center. Site preparation would have involved establishing water, sewer, gas, and electrical hookups at the site. Before any work began, however, new owners took over the company and did not honor the verbal agreement between the grantee and the previous owner. It took the grantee 2 years to find another site suitable for the center, and that facility required extensive renovations.

HHS’ Office of Inspector General reported in 1991 and 1993 that acquiring adequate, affordable space was a major problem for Head Start grantees attempting to expand program enrollments. Grantees told the Inspector General’s office that it can take up to a year to find suitable space, which then may have to be renovated. Strict construction licensing requirements and delays in license approval could also slow spending for center construction or renovation. The Inspector General reported that space problems were most prevalent among grantees funded to increase enrollment by more than 200 children. The grantees believed that being notified at least 6 months in advance of funding disbursements would help to alleviate this problem. Head Start grantees interviewed by the Inspector General’s staff also said that receiving expansion funding late in the budget year results in carryover fund balances. After expansion, more than twice as many grantees interviewed had carryover balances of over $50,000. Many grantees believe that, even with adequate lead time, large expansions should not occur annually.

According to the grantee files we reviewed, unspent balances sometimes occurred for reasons other than small budget variances or timing issues. On the basis of information included in grantee files and discussions with regional office program officials, we found, for example, that unspent balances occurred because grantees experienced accounting or management problems during 1 or more years; depended on large government bureaucracies, such as New York City’s, to provide certain goods and services, which often slowed program expenditures; or assumed the program operations and accounts of a former grantee. Unspent balances may also have occurred for a combination of the reasons described above. In other cases, we could not determine the reason for grantees’ unspent balances on the basis of file information or discussions with Head Start regional office officials.
Unspent balances occur when a grantee’s total award differs from the amount the grantee spent during its budget year. As previously stated, these unspent funds may be carried over into a subsequent grantee budget year. For our analysis, we defined carryover funds as any unspent funds used to either offset or add to a grantee’s award during a subsequent budget year. Carryover funds are not always added to or offset in the year immediately following the year in which the unspent funds occurred. For example, a grantee in Florida with $45,913 in unspent funds in grantee budget year 1992 did not have this amount entirely added to or offset as carryover funds in grantee budget year 1993. In fact, $45,759 was added to its budget year 1993 award, and the remaining $154 was used to offset the grantee’s budget year 1994 award. A grantee in Minnesota, on the other hand, had $3,840 from grantee budget year 1993 added to its budget year 1995 award. Yet a Michigan grantee had its entire grantee budget year 1992 unspent balance of $1,568 offset as carryover funds in 1993.

On the basis of our analysis of grantee files, we found that in grantee budget year 1993 HHS added about half of all carryover funds to grantees’ awards as TOA and offset the remaining proportion as NOA. Of the grantees in our sample with TOA in grantee budget year 1993, the unspent funds added to grantee awards ranged from $10,900 to $533,500 and averaged approximately $96,000. If we had included the grantee representing New York City in our calculation, the upper end of this range would have been about $4.2 million. NOA for the same period ranged from $59 to $664,700 and averaged about $39,000. In grantee budget year 1994, we found that about three-fourths of carryover funding was added to awards as TOA, and the remainder was offset as NOA. Of the grantees in our sample with TOA in grantee budget year 1994, the amount of unspent funds added to grantee awards ranged from $3,200 to $2.4 million and averaged about $197,400. NOA for the same period ranged from $17 to $621,000 and averaged approximately $58,600. This trend appears to continue in grantee budget year 1995, though data for this year were incomplete when we performed our final calculations in October 1995.

We found that HHS generally adds to or offsets grantee carryover funds within 2 grantee budget years after an unspent balance occurs. For example, for both grantee budget years 1993 and 1994, we found that about 90 percent of the carryover funds added to grantee awards were 1 year old and the remainder were from 2 to 3 years old, and that from about 70 to 90 percent of the carryover funds offsetting grantee awards were from 1 to 2 years old and the remainder were 3 or more years old.
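The carryover examples above reduce to simple bookkeeping. The short sketch below restates the Florida and Minnesota figures and the age calculation described in appendix I of this report; the helper function is ours, for illustration only.

# The Florida example as arithmetic: a single unspent balance can be applied
# across more than one later budget year and by both methods. TOA adds
# carryover on top of a later award; NOA reduces the new dollars awarded.
unspent_1992 = 45_913          # Florida grantee's 1992 unspent balance
toa_1993 = 45_759              # portion added to the 1993 award (TOA)
noa_1994 = unspent_1992 - toa_1993
print(noa_1994)                # 154 -- offset against the 1994 award (NOA)

# Age of carryover funds, computed as appendix I describes: the source year
# subtracted from the service year in which the funds are applied.
def carryover_age(applied_year, source_year):
    return applied_year - source_year

print(carryover_age(1993, 1992))   # 1 year old
print(carryover_age(1995, 1993))   # 2 years old (the Minnesota example)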
Because Head Start carryover funds are generally spent within 2 grantee budget years but are available for up to 5 fiscal years following the fiscal year in which they are initially awarded (31 U.S.C., sec. 1552(a)), we asked Head Start regional office officials why certain carryover balances were reprogrammed or offset as long as 3 or more years after an unspent balance occurred. Regional office officials gave the following administrative and grantee-specific reasons:

Regional office staff may not process grantee files in a timely manner due to grantee or staff errors, delays in data entry, staff turnover, large workloads, and differences in staff competence.

Final forms documenting carryover balances are not due from grantees until 90 days after the budget year’s end.

Incorrect carryover balances may not be caught immediately because independent auditors may take up to 13 months to complete an audit of a grantee’s program accounts for a given year.

Actions, such as reprogramming or offsetting carryover balances, could be suspended if a grantee appeals an HHS decision to disallow funding.

A grantee’s bankruptcy proceedings delayed a regional office from offsetting certain carryover funds.

For grantee budget years 1993 and 1994 combined, we estimated that carryover funds totaled $139 million. Of this amount, carryover funds added to grantee awards (TOA) totaled $97 million, and those offsetting grantee awards (NOA) totaled $42 million. We focused our analysis of intended use on the TOA portion because NOA has no identifiable intended purpose. On the basis of our review of Head Start grantee files, a large proportion of Head Start carryover funds from grantee budget years 1993 and 1994 combined was intended for expanding program enrollments and renovating or buying facilities. Of the $97 million in TOA carryover funds, 40 percent was intended for expansion and 37 percent for facilities. Data from the files indicated that about 23 percent of the total TOA for these years was reportedly to be used for capital equipment, supplies, and other purposes such as staff training and moving expenses. Data were incomplete for grantee budget year 1995. Among grantees in our sample, TOA intended for facilities in grantee budget years 1993 and 1994 combined ranged from $901 to $611,000 and averaged approximately $116,000. TOA intended for expansion ranged from $4,200 to $2.4 million and averaged about $296,000.

In summary, although overall program outlay rates remained stable during a period of intense program growth (fiscal years 1990-95), Head Start grantees accrued increasingly larger average unspent balances in grantee budget years 1992 through 1994. Depending on the size of grantees’ awards, their reported unspent balances in those years ranged from as little as $2 to about $2 million. On the basis of Head Start files, we determined in most cases that these unspent balances resulted from (1) small differences between grantees’ budget estimates and actual expenditures; (2) grantee problems renovating or constructing facilities, which delayed planned expenditures; and (3) the receipt of supplemental funding by grantees late in their budget year, which made it difficult for grantees to spend their funds before year’s end. Of the unspent funds added to grantee awards in budget years 1993 and 1994 combined, we found that grantees planned to use these dollars for increasing local program enrollments and buying or improving program facilities—activities that grantees often do not complete in a single year.

As arranged with your office, we will make copies available to the Secretary of Health and Human Services and other interested parties. We will also make copies available to others on request. Please contact Fred E. Yohey, Assistant Director, at (202) 512-7218, or Karen A. Whiten, Evaluator-in-Charge, if you or your staff have any questions. Other GAO contributors to this report are listed in appendix III.

We designed our study to collect information about the extent and nature of Head Start carryover funds. To do so, we visited a sample of Head Start regional offices and examined key documents in selected grantee files.
Results are generalizable to Head Start grantees that (1) were at least 3 years old in 1994, (2) had at least some but less than $60 million in new funding in 1994, and (3) were located in 10 of the 12 Head Start regions. Our work was performed between June and October 1995 in accordance with generally accepted government auditing standards.

We reviewed grantee files for a nationally representative sample of Head Start grantees. We focused our efforts on grantee budget years that ended in 1992 through 1995, examining file documents at selected Head Start regional offices. To generate national estimates, we employed a two-stage cluster sampling strategy.

The Head Start regions constituted the first stage of the sample. Of the 12 Head Start regions, 2 are operated from the Department of Health and Human Services headquarters in Washington, D.C.—1 for Native Americans and the other for migrant workers. Because these regional offices share a unique relationship with headquarters, they were not included in the regions to be sampled. We organized the 10 remaining regions by the amount of grantee new funding received in federal fiscal year 1994, separating them into three groups or strata: regions with new funding of $500 million or more, regions with new funding of $200 to $499 million, and regions with new funding of less than $200 million. Table I.1 shows our population of regions by total fiscal year 1994 new funding (dollars in millions). We then selected a sample of regions in each stratum using a random number generator program. Table I.2 shows the regions selected in our sample, also by total fiscal year 1994 new funding (dollars in millions).

Stage two of the sample consisted of individual Head Start grantees. Head Start had 1,270 grantees in the 10 regions in fiscal year 1994. Because we were reviewing 2 to 3 years of data, we excluded any grantee not in existence at least 3 years. We also excluded all grantees with no new funding in fiscal year 1994. This reduced the number of grantees in our population to 1,201. We organized grantees in our sample regions by fiscal year 1994 new funding and put them into four strata: those with fiscal year 1994 new funding of less than $1 million, those with $1 million or more but less than $3 million, those with $3 million or more but less than $5 million, and those with $5 million or more. We then selected a random sample of grantees in each stratum. Table I.3 shows the distribution of grantees by stratum for our population and sample.

Once the fieldwork was completed and the records evaluated, we determined that one very large grantee with fiscal year 1994 new funding of $60 million or more was, because of its complexity, unique and required special handling. Therefore, we set aside this one grantee—The City of New York Human Resources Administration, Agency for Child Development. We did not include data collected from this site in our overall estimates but used the data as a case study of a very large grantee. By eliminating the very large grantees, we reduced our population further by 4 grantees to 1,197, thereby reducing our sample from 108 to 107 grantees. Our findings, therefore, are representative of grantees in the 10 Head Start regions that are at least 3 years old with at least some but less than $60 million in fiscal year 1994 new funding.
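A minimal sketch of the first-stage selection described above follows, using only the Python standard library. The strata cut-points are those in the report; the region names, funding amounts, and regions-per-stratum counts are placeholders, not the actual 1994 figures.

import random

random.seed(1994)  # fixed seed so the sketch is reproducible

# Hypothetical regions and their fiscal year 1994 new funding, $ millions.
regions = {"A": 620, "B": 510, "C": 340, "D": 250, "E": 230, "F": 150, "G": 90}

def stratum(funding_m):
    if funding_m >= 500:
        return "large"   # $500 million or more
    if funding_m >= 200:
        return "medium"  # $200 to $499 million
    return "small"       # less than $200 million

# Group regions into strata.
strata = {}
for name, funding in regions.items():
    strata.setdefault(stratum(funding), []).append(name)

# Stage 1: randomly sample regions within each stratum (counts are placeholders).
stage1 = {s: random.sample(members, k=min(2, len(members)))
          for s, members in strata.items()}
print(stage1)

# Stage 2 (not shown) repeats the idea within each sampled region, stratifying
# grantees by FY 1994 new funding (<$1M, $1-3M, $3-5M, $5M or more).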
We provided the list of sample grantees to each selected regional office, which collected records for our review. We examined key documents from the files and summarized the information using a data collection instrument. Data elements we collected included the number of service years for a selected grantee; total federal funds authorized for specific funding periods; the unspent balance of federal funds for specific funding periods and its intended usage; and the amount of carryover funds added to or offsetting grantee awards in grantee budget years 1993, 1994, and 1995, by type and source year. To link source year with carryover funds, we gathered information from the Financial Assistance Award form, which identifies the grantee service year in which the unspent funds occurred. Once data collection was complete, we compiled and merged the data. Data elements were verified and traced to documents maintained in the grantee files for 91 percent of the cases. We then computed weights to produce national estimates from our sample and calculated analytic variables. To calculate the age of carryover funds, we subtracted the source year from the grantee’s current service year.

The Head Start grantee funding process presented unique data collection challenges. We made no attempt to capture funding by federal fiscal year; rather, we used each grantee’s budget year ending date to guide our compilation of financial data.

Because our analysis is based on data from a sample of grantees, each reported estimate has an associated sampling error. The size of the sampling error reflects the estimate’s precision; the smaller the error, the more precise the estimate. The magnitude of the sampling error depends largely on the size of the obtained sample and the amount of data variability. Our sampling errors for the estimates were calculated at the 95-percent confidence level. This means that in 95 out of 100 instances, the sampling procedure we used would produce a confidence interval containing the population value we are estimating. Some sampling errors for our dollar estimates are relatively high because dollar amounts vary substantially. Sampling errors also tend to be higher for those estimates based on a subset of sample cases. For example, estimates of the mean and total amounts of grantee unspent balances are based on fewer than the 107 grantees in our sample and have large sampling errors. Therefore, these estimates must be used with extreme caution. For a complete list of sampling errors for dollar estimates and proportions in this report, see tables I.4 and I.5, respectively.

[Tables I.4 and I.5, which list each estimate’s sampling error and the number of sample grantees contributing to it, are not reproduced here.]
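Because the sampling errors in tables I.4 and I.5 are already stated at the 95-percent confidence level, an interval is simply the estimate plus or minus its error. The pairing of numbers below is illustrative only; the actual estimate-error pairings appear in the tables.

def confidence_interval(estimate, sampling_error):
    # Errors are pre-scaled to the 95-percent level, so no multiplier is needed.
    return estimate - sampling_error, estimate + sampling_error

# Hypothetical pairing: a $96,000 estimate with a +/-$82,904 sampling error.
low, high = confidence_interval(96_000, 82_904)
print(f"${low:,.0f} to ${high:,.0f}")   # $13,096 to $178,904 -- a wide interval,
                                        # which is why such estimates need caution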
Because we wanted to obtain general information about the extent and frequency of Head Start carryover funds, we limited our investigation to reviewing grantee records maintained at HHS’ Atlanta, Chicago, Dallas, Denver, and New York regional offices. We gave officials at these regional offices an opportunity to review the accuracy of the data we collected and subsequently used to develop our estimates. We did not contact individual grantees to verify records, nor did we visit grantee sites. We did not follow the flow of funds to determine if program abuses had occurred, nor did we make any attempt to determine whether program grantees actually used the funds for the purposes intended.

[Appendix II, which lists the locations of the sample grantees (from North Wilkesboro, N.C., to Brooklyn, N.Y.) and their reported unspent balances, is not reproduced here.]

The following individuals made important contributions to this report: Robert Rogers and Karen Barry planned this review, and Karen managed the data collection. David Porter and Lawrence Kubiak collected much of the data from the HHS regional offices. Patricia Bundy also helped to collect data, conducted follow-up discussions with HHS headquarters and regional office officials, and assisted with report processing. Dianne Murphy drew the sample and performed the analysis. Steve Machlin calculated sampling errors. Harry Conley and Michael Curro provided technical assistance, and Demaris Delgado-Vega provided legal advice.
Pursuant to a congressional request, GAO reviewed: (1) the amount of Head Start funding unspent by program grantees at the end of budget years 1992 to 1994 and the reasons for these unspent funds; (2) the proportion of carryover funds that were added to grantee awards and that were 1 or more budget years old; and (3) grantees' intended use of carryover funds. GAO found that: (1) about two-thirds of the grantees reviewed had unspent balances of $69,000 to $177,000 during budget years 1992 through 1994; (2) most of the unspent balances resulted from small differences between grantees' budget estimates and actual expenditures, problems related to building Head Start centers, and grantees' inability to spend their awards because of Department of Health and Human Services (HHS) disbursement problems; (3) one-half of all the carryover funds in budget year 1993 and about three-fourths of the carryover funds in budget year 1994 were added to grantee awards in subsequent budget years; (4) about one-half and one-fourth of carryover funds in grantee budget years 1993 and 1994, respectively, offset grantee awards; (5) Head Start offset 70 to 90 percent of its grantee awards with carryover funds within 2 budget years of an unspent balance; and (6) carryover funds added to grantee awards were used to expand Head Start enrollments, build new facilities, purchase capital equipment, and train staff.
OSD has issued guidance to the military departments for reporting public-private workload allocations required by 10 U.S.C. 2466. The guidance is consistent with 10 U.S.C. 2460, which defines depot maintenance and repair. The guidance requires the comprehensive reporting of all work associated with the overhaul, upgrade, or rebuilding of parts, assemblies, and subassemblies and the testing and reclamation of equipment, regardless of the source of funds or the location at which maintenance is performed. It also requires the reporting of software maintenance, interim contractor support, and contractor logistics support, to the extent work performed in these categories is depot maintenance.

In recent years, the Department of Defense has implemented acquisition and logistics policy initiatives that have shifted depot maintenance workloads from the public to the private sector. We recently reported that between 1987 and 2000, the public sector’s share of depot maintenance work declined by 6 percent, while the private sector’s share increased by 90 percent. As the military departments move closer to the 50-percent ceiling for private sector work, with the Air Force exceeding the ceiling, the accuracy of the collection and aggregation of 50-50 data becomes increasingly important. The data in the prior-years report are important because they provide the best indicators the military departments have of the current public-private sector allocations. While we have said that the future-years data provide a rough estimate, the data are the Department’s only predictor indicating that management attention may be needed to avoid potential compliance problems.

Table 1 provides a consolidated summary of DOD’s two reports to the Congress on depot maintenance public and private sector workload allocations, dated February 1, 2001 (prior-years) and April 1, 2001 (future-years). The amounts shown are actual obligations incurred for depot maintenance work in fiscal years 1999 and 2000 and projected obligations for fiscal years 2001-2005 based on the defense budget and service funding baselines. The percentages show the relative allocations between the public and private sectors.
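The 50-50 ratio itself is simple arithmetic on obligations; as the rest of this section shows, the hard part is assembling complete, accurate inputs. A minimal sketch with hypothetical amounts, not the table 1 figures:

def shares(public, private):
    """Each sector's share of total depot maintenance obligations."""
    total = public + private
    return public / total, private / total

pub, priv = shares(public=4_300.0, private=4_100.0)   # obligations, $ millions
print(f"public {pub:.0%} / private {priv:.0%}")        # public 51% / private 49%
print("exceeds 50-percent ceiling" if priv > 0.50 else "within ceiling")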
The Department had mixed results complying with the 50-50 prior-years requirement for fiscal years 1999 and 2000. Based on our review, the Army met the requirement for both 1999 and 2000, while the Air Force met the requirement in 1999 but exceeded it in 2000, issuing a national security waiver as provided for in the statute. Although we identified some errors in the Army and Air Force prior-years data, improved guidance and data collection efforts and validation and reviews by their audit agencies indicate that the Army and Air Force processes provide a sufficient basis for evaluating compliance. Further, in the Army’s case, the errors were not material in the context of the 50-50 requirement because they would not be sufficient to cause it to exceed the limitation. With respect to the Air Force, the cumulative errors further support the need for the Air Force waiver. However, because of data reliability issues and concerns about management controls and data validation, there is insufficient support for us to determine the Navy’s compliance.

Based on our review, the Army met the 50-50 requirement in both fiscal years 1999 and 2000, with about a 53 to 47 percent public-private sector split in each year. Our review of the Army’s prior-years 50-50 report, internal reviews by auditors, and improved guidance and direction of the data collection effort indicate that the Army process provides a sufficient basis for determining that it met the 50-50 requirement during this 2-year period. While we identified errors that would increase private sector costs, about 1 percent in fiscal year 2000 above that projected by the Army, this would not be material to the 50-percent requirement since the Army would still be about 3 percent below the ceiling. Table 2 shows our adjustments to the Army data to correct for the errors we identified and the resulting impacts on the public-private sector allocations. The errors and their impacts are as follows:

One reporting activity did not report about $24 million in government-furnished material costs in fiscal year 2000. According to OSD and Army guidance, the costs of government-furnished material supporting work performed by contractors are to be reported as private sector costs. Other Army activities we reviewed properly reported government-furnished material costs.

While we determined that the reliability of the Army data was enhanced by the work of the Army Audit Agency, the Army had not incorporated all the adjustments recommended by the Army Audit Agency before the 50-50 data were submitted to OSD and subsequently to the Congress. Auditors reviewed more than one-half of the reported dollars submitted to Army headquarters for fiscal year 2000, identifying errors in about 5 percent of the items. Army officials revised the 50-50 data to correct for about $70 million of the errors identified, but one activity—citing time constraints—did not make another $21.9 million in adjustments for errors before the Army submitted its report to OSD. Adjusting for these errors adds $20.4 million to the private sector total and $1.5 million to the public sector total.

We identified several other small errors that in total would add $5.8 million to the public sector in fiscal year 1999 and subtract $4.6 million from the private sector in fiscal year 2000. These were attributable to officials using budgeted requirements instead of actual obligations, double counting, and other mistakes.

As previously noted, we recognize there are some systemic problems with DOD’s financial data. However, based on the work of the Army Audit Agency and our own work, we believe the Army process provides a sufficient basis for determining that the Army met the 50-50 requirement during the prior-years period of fiscal years 1999 and 2000.

Based on our review, the Air Force met the 50-50 requirement in fiscal year 1999, with a 54 to 46 percent public-private split. However, the 48 to 52 percent public-private sector split in fiscal year 2000 required that the Secretary of the Air Force issue a national security waiver and notify Congress. We identified some reporting weaknesses that increased the amount by which the Air Force exceeded the ceiling in fiscal year 2000 from the reported 1 percent to about 2 percent, and correcting for these weaknesses increased the private sector share in both fiscal years 1999 and 2000. Consequently, the weaknesses were not material in the context of meeting the 50-50 requirement, since correcting for them did not cause the Air Force to exceed the limitation in fiscal year 1999, and the Air Force had already reported exceeding the limitation in fiscal year 2000.
While recognizing some problems in the data, the Air Force data review process added to the reliability and credibility of the reported data and, together with our work, serves as a basis for determining compliance with the 50-50 requirement. The 50-50 data reported to Congress included adjustments made as a result of reviews by the Air Force Audit Agency, Air Force Materiel Command, and Air Force Headquarters. The auditors identified adjustments amounting to 4.3 percent of the total workload. This rate is higher than last year’s and follows several years of gradually declining adjustment rates. The higher rate was mainly attributed to rather large errors in one acquisition program, late posting of a cash transfer into the working capital fund, and high turnover in the staff assigned to collect data. Table 3 identifies the changes to the Air Force’s reported data after adjusting to correct the weaknesses we identified. The reporting weaknesses and their impacts are as follows:

As in past years, Air Force officials continue to adjust the 50-50 data for general and administrative expenses associated with managing depot maintenance contracts. These amounts are for overhead and salary costs incurred by government personnel charged with administering depot maintenance contracts funded through the working capital fund. Air Force headquarters and Materiel Command officials subtract these amounts ($57.9 million in fiscal year 2000) from the private sector costs—where they are accounted for within the working capital fund—and add them to the public sector costs for 50-50 purposes. Air Force officials told us that they believe these costs should be reported as part of the public sector since government employees incur them. Although this type of cost is not specifically addressed, OSD 50-50 guidance requires that the costs for all factors of production—labor, material, parts, indirect, and overhead—associated with a particular repair workload be counted in the sector accomplishing the actual maintenance. For example, contract maintenance on depot plant equipment used by government employees to repair items should be counted as public sector costs because those costs are incurred by the government in producing the repair. Thus, consistent with our prior assessments, we continue to believe that it is appropriate to count the general and administrative costs associated with administering depot maintenance contracts as part of the private sector costs of doing business. Accordingly, in table 3 we reversed the Air Force adjustments to again report these amounts as private sector costs.
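Restated as arithmetic, reversing the general and administrative adjustment is a transfer between sectors: the total is unchanged, but the split moves. In the sketch below, the $57.9 million is the fiscal year 2000 amount cited above, while the starting sector totals are hypothetical.

public, private = 4_650.0, 4_450.0      # $ millions, hypothetical sector totals
reclass = 57.9                          # contract administration G&A costs
public, private = public - reclass, private + reclass   # move cost to private
total = public + private
print(f"public {public / total:.1%} / private {private / total:.1%}")
# public 50.5% / private 49.5% -- small reclassifications matter near the ceiling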
We also identified unreported contractor logistics support costs totaling about $2.3 million in both fiscal years 1999 and 2000. The Joint Stars program office reported most of its depot maintenance dollars appropriately but did not report technical data support costs. Additionally, one office in the Special Operations Directorate did not report contract maintenance costs that should have been reported under Air Force 50-50 guidance. As previously noted, we recognize there are some systemic problems with DOD’s financial data, but the Air Force has made improvements in its 50-50 data reporting processes since the requirement to submit the report to Congress was imposed.

Based on the past improvement efforts and the more recent work of the Air Force Audit Agency and our own, we believe the Air Force provided a sufficient basis for determining that it met the 50-50 requirement during fiscal year 1999 and exceeded it in fiscal year 2000, submitting a waiver as provided by statute. As authorized under section 2466, on January 11, 2000, the Air Force notified the Congress that the Secretary was waiving the applicability of the 50-percent limitation on private sector contracting for fiscal year 2000 for reasons of national security. The Air Force explanation for the waiver was based primarily on the need to use temporary contracts to support transitioning workloads from closing depots. However, we reported previously that the temporary contracts represented only a minor share of the Air Force contract workload. We noted that the more significant factors were previous Air Force actions, such as the increase in long-term depot maintenance contracts from $600 million in 1996 to $1.1 billion in 2000 and the transfer of about half of the workload from two closing depots to the private sector, that had increased the private sector share from 36 percent in fiscal year 1991 to the 50-percent ceiling in fiscal year 2000. These actions left Air Force officials little flexibility to use emergency contracts without exceeding the ceiling for contract depot maintenance work. We also pointed out that while the Air Force projected that it would exercise management changes to remain within the 50-percent limitation in fiscal year 2000 and beyond, it was uncertain whether it would be successful in these efforts.

Because of weaknesses in the Navy’s reporting processes and data, we were unable to determine whether the Navy complied with the 50-percent requirement for the 1999 and 2000 prior-years report. The Navy’s report, as presented, indicates that the Navy complied with the requirement in both years, with a 57 to 43 percent public-private mix in fiscal year 1999 and a 55 to 45 percent mix in 2000. However, our review of the Navy 50-50 data for fiscal years 1999 and 2000 identified concerns about management controls and data validation processes. Also, Navy leadership chose not to use the Naval Audit Service as part of this year’s process, and the absence of this review added to our concern about the reliability of the Navy data. The specific process problems, which are discussed in more detail later in our overall analyses of the military services’ data reporting processes, include (1) a decentralized and tiered reporting process that consolidated 50-50 numbers into summary reports with little evidence that the data were checked and validated while passing through the reporting layers and (2) the inability to track and document the estimating methodologies. While we have identified these same issues in the past, Navy leadership has not placed enough emphasis on the 50-50 reporting process to make the improvements required to ensure that the reported data provide a sufficient basis for developing the Navy’s 50-50 information in the prior-years report.

For example, the Naval Sea Systems Command, which reported about one-fifth of the total Navy 50-50 amount for this time period, had one coordinating official who received summary data from 37 reporting units. Some units had in turn rolled up data received from as many as 10 subactivities.
Individuals generally accepted the data, rolled up totals, and transmitted them up the reporting chain without conducting in-depth critical checks of the data. In most cases, there was no audit trail to track individual subactivity 50-50 submissions that were subsequently rolled up into a single program or project unit figure.

In addition to our concerns about the Navy’s process, we identified one major reporting inconsistency that would affect the public-private sector allocations, as shown in table 4. We determined that two activities within the Naval Sea Systems Command reported inactivation activities inconsistently. One project office reported $650 million in nuclear ship inactivation costs performed mainly at the public shipyards, but another project office did not report about $113.6 million in conventional ship inactivation costs performed mainly in the private sector ($81.8 million in fiscal year 1999 and $31.8 million in fiscal year 2000). Officials from the non-reporting office said they did not know these types of costs were to be reported for 50-50 purposes. It is uncertain how many other activities have similar unreported costs. DOD’s financial management regulation includes inactivation activities as reportable depot maintenance workloads. The internal Navy guidance specified reporting nuclear ship inactivations but did not mention conventional ships. In commenting on a draft of this report, Navy officials said that the relatively complex process for nuclear ship inactivations is considered to be equivalent to depot maintenance and repair, while the less-complex process for conventional ship inactivations is not generally considered equivalent to depot-level work. However, based on follow-up reviews with Navy program officials, we believe that some portion of the conventional ship workload should be reported as depot maintenance. OSD is developing additional guidance to clarify this reporting category for future 50-50 reports.

The projections of the Army, Air Force, and Navy in DOD’s future-years report for fiscal years 2001 through 2005 are not reasonably accurate estimates of the future allocations of public and private sector workloads. The services’ management placed much less emphasis on the future-years data and reports. The reported projections are based in part on incorrect data, questionable assumptions, and some inconsistencies with existing budgets and management plans. Further, our review identified errors, inconsistencies, and other shortcomings. As a result, DOD’s future-years report should be viewed with caution because it does not provide the best data available to DOD decisionmakers and congressional overseers, and the reported data are misleading with regard to how future workloads are likely to be allocated between the public and private sectors. Making the future-years 50-50 report a useful management tool would require improved management oversight and direction.

While the reported Army future-years workload allocations show an increasing public sector share, after adjusting the reported numbers to correct for errors and omissions, the net effect is an increase in the projected private sector share for each year. In last year’s report, we noted that the Army faced long-term challenges remaining within the contract ceiling. Army officials said that they had taken action to increase the public sector’s share, not only to deal with the contract workload ceiling but also to make more cost-effective use of underutilized Army depot infrastructure.
The reported Army numbers suggested that these actions were effective. However, we identified significant problems and omissions in the Army’s reporting of its future-years 50-50 data, the net effect of which significantly increases the projected private sector share each year. Whereas the Army reported that public sector workloads were projected to increase in the future years, in actuality, after correcting for errors and omissions, the data show that the Army is substantially closer to the private sector limitation and that the expected future public-private sector allocations are relatively constant. Table 5 summarizes the adjustments we made to the reported future-years Army numbers. The errors and their impacts are as follows:

One reporting activity made a series of transcription errors that reported thousands of dollars as millions of dollars. These errors overstated public sector workloads by a total of $683.5 million in fiscal years 2003-2005. This same activity did not fully report contractor costs for contractor logistics support, which understated private sector workloads by a total of about $227.1 million in fiscal years 2001-2005.

Two major commands did not report depot costs associated with the Army’s Integrated Sustainment Management program. Officials said they did not report these costs because the Army Materiel Command will eventually be responsible for the reporting. However, a Materiel Command official told us the command was not ready to assume this reporting and did not include the costs of these activities in the 50-50 report. These additional projected costs are estimated to add $24 million per year to the public sector and $56 million per year to the private sector. As we reported in the past, the Army’s move to consolidate maintenance activities under the National Maintenance Program and to perform depot-level maintenance at field locations continues to pose reporting challenges and could result in an underreporting of both public and private sector costs.

As discussed in the section on the prior-years report, one Army reporting activity had not reported the costs of government-furnished material. This material, when supplied to a contractor for use in the maintenance process, should be counted as private sector work. Assuming annual material costs stay constant at the fiscal year 2000 level ($24 million), this activity underreported private sector costs by about $120 million over the 5-year reporting period.

Another activity double-counted $19.9 million of its public sector workload and $36 million of its private sector workload for fiscal year 2004. Estimates of maintenance costs for 7 systems were erroneously entered twice for that year.

There are additional non-quantifiable factors that are expected to have major impacts on the Army’s future depot maintenance program, and these factors further support our concerns (1) about the reasonableness of the Army’s future-years projections and (2) that the Army will be challenged in the future to manage its depot maintenance program within the 50-50 ceiling. First, the quality of the future-years data is questionable because some funding priorities and plans have substantially changed since the budget figures supporting the 50-50 projections were prepared. Revised plans for extensive repairs of the Patriot Missile and the Apache Helicopter will likely alter depot workload projections and public-private sector percentage allocations.
A second factor of potentially greater impact involves depot maintenance requirements associated with the Army’s recapitalization program. Funding requirements and implementation strategies for the recapitalization program are not fully known at this time and continue to evolve. Consequently, these requirements and the expected public-private sector allocations were not fully reflected in the 50-50 outyear projections. At the time of our review, Army records showed that it planned to spend about $15.5 billion on recapitalized systems between fiscal years 2002 and 2007. Army officials said, however, that additional funding of at least $7.6 billion is needed over this period but has not yet been budgeted. Procedures for managing and coordinating the recapitalization program were finalized in April 2001, and detailed implementation strategies are currently being developed for each of the 21 weapon systems to be recapitalized. While it appears that a significant portion of the program expenditures will be considered depot maintenance-type work, the plans supporting the distribution of workloads between the public and private sectors have not yet been finalized.

Although the reported data indicate that the Air Force would breach the 50-percent ceiling in fiscal year 2001 but not in fiscal years 2002-2005, we identified significant problems and questionable assumptions in the Air Force’s reporting of its future-years 50-50 data, which indicate that the future contract work is understated. After adjusting the reported numbers to correct the problems we could quantify, the net effect is an increase in the private sector share projected for each year. In addition, other significant factors cannot be quantified but indicate that further growth in the private sector is likely. Taken together, the Air Force will likely continue to exceed the 50-percent private sector limitation, leading to more waivers such as the second one recently issued for fiscal year 2001. Table 6 shows the reported data, our quantifiable adjustments, and the resulting impacts on allocations. The revised allocations show the Air Force just under the 50-percent ceiling for the future years. Significantly, the reported and revised amounts are both over the 48-percent target for private sector allocation that Air Force officials established for management purposes to allow for some flexibility under the ceiling if estimates and circumstances change. The reporting weaknesses, errors, and their impacts are as follows:

As discussed in our analysis of the Air Force’s prior-years numbers, the Air Force’s adjustments for general and administrative expenses associated with contracted workload shift more than $100 million annually, adding about $50 million to the public sector and subtracting the same amount from the private sector. We think it more appropriate to count these expenses as part of the private sector amounts. They are overhead costs required to manage contract workloads, and the OSD reporting guidance says that the costs for all factors of production (labor, material, parts, indirect, and overhead) associated with a particular repair workload should be counted in the sector accomplishing the actual maintenance. In table 6 we reverse the contract administration adjustment each year. This adds about $263.1 million to projected private sector work for the reporting period and decreases the public sector by the same amount.
- The Air Force data do not fully reflect depot officials' estimates of the continuing need for temporary contracts and contractor augmentees resulting from work transfers stemming from base closures and contract competitions. At the time the future-years report was submitted, these would have added about $21.1 million to private sector costs for the fiscal year 2001-2004 reporting period and decreased public sector costs by the same amount.

- The Air Force data do not include the estimated repair costs of new systems and upgrades being reviewed to establish the source of repair. Most of these items (48 of 66, representing about 90 percent of the repair cost estimates for all systems undergoing the review) are currently recommended for private sector repair. Examples include the C-5 AMP repair, which is expected to add about $57 million in private sector repair costs, and the KC-10 reverser fan modification, which is expected to add $41 million in private sector repair costs between fiscal years 2001 and 2005. Although some of these source-of-repair decisions could change, as they stand now, the net effect of the additional workloads would add about another $175 million to private sector costs for fiscal years 2002 through 2005. In commenting on a draft of this report, Air Force officials pointed out that it is difficult to accurately project the outcome of the source-of-repair process and that the annual dollar projections are very rough budget estimates. However, all of the data projected in the future-years report are point-in-time estimates subject to change. Incorporating estimated costs for systems in the source-of-repair process would provide a more comprehensive and useful projection of expected future public-private sector allocations.

- We also identified underreported contractor logistics support costs and contractor costs to install modifications totaling about $8.2 million. Program officials at one activity were unaware of the 50-50 reporting requirement, while other offices reported some, but not all, of the depot-related contract costs. Officials in the Towed Decoy program office said they did not know that the costs of installing modifications should be reported as depot maintenance costs. The Joint Stars program office did not report technical data support costs, and the Special Operations office did not report contract maintenance costs. As we have reported in the past, it is difficult to determine the extent to which costs that should be reported are actually being reported. However, based on our review this year, we noted improvements in the reporting of these kinds of contract costs.
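To make the effect of such reclassifications concrete, here is a minimal worked sketch; the dollar totals are hypothetical round numbers, not the Air Force's actual reported amounts. If a service reports public sector work P and private sector work R, the private sector share subject to the 50-percent limitation is

\[
\text{private share} = \frac{R}{P + R}.
\]

Reclassifying an amount a from the public to the private sector leaves the total unchanged but raises the share to (R + a)/(P + R). With hypothetical totals of $4.00 billion public and $3.80 billion private, the private share is 3.80/7.80, or about 48.7 percent; reclassifying $0.05 billion (roughly the $50 million annual contract administration adjustment) raises it to 3.85/7.80, or about 49.4 percent. The same arithmetic explains why repricing one sector's workload, such as a rate increase applied only to public workloads, changes the reported shares even when no work actually moves between sectors.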
In addition to the problems we could quantify, other significant factors, while not quantifiable, are likely to have major impacts on future-year workloads as the Air Force moves closer to execution. Both public and private sector amounts can be affected. However, given past experience, the Air Force's practice of placing more depot repair work for new and upgraded systems in the private sector, and the likelihood that the in-house depots will not receive the amounts of work currently projected, the Air Force will probably continue to need waivers of the 50-percent limitation absent corrective action.

One of the non-quantifiable factors is the relative accuracy of cost and budget data reported by the depot maintenance activity group, a part of the Air Force working capital fund. From 70 to 80 percent of the total depot maintenance amounts reported annually by the Air Force are financed through the depot maintenance activity group. Consequently, the relative accuracy of budget projections and accounting records for the activity group significantly affects the quality and completeness of the Air Force's 50-50 data. However, our review last year of the depot maintenance activity group identified poor budget estimates, inaccurate pricing, and overly optimistic assumptions about worker productivity and process improvement savings. In addition, the rates charged for maintenance work and the workload assumptions used in estimating fiscal years 2003-2005 requirements did not incorporate price increases for both public and private workloads, nor did they fully reflect expected surcharges and other workload changes. In recent years, operating losses in the depot maintenance activity group have necessitated large cash inflows and surcharges applied to maintenance rates to balance the accounts. For example, a reimbursement of $483 million ($417 million for the public sector and $66 million for the private sector) was spread over fiscal years 2001 and 2002 to make up for accumulated losses in depot maintenance activity group operations for fiscal year 2000 and prior years. Performance indicators to date show that financial problems are continuing in fiscal year 2001, which could require changes in future rates and surcharges.

Also, the apparent improvement in the 50-50 ratio for fiscal year 2002 shown in DOD's report and in table 1 of this report is due largely to a 17-percent rate increase on public workloads. However, it is questionable whether the projected public workload program can be funded at the higher rates. Depot officials noted that a price increase of that magnitude would likely result in a reduction in the amount of maintenance the operating commands would be able to fund. They said that the budget estimates used to support the projections assumed that 100 percent of the anticipated workload would be accomplished. In reality, actual performance generally does not approach 100 percent, resulting in an overstatement of the reported 50-50 data. Officials at the three Air Force depots said that, historically, in the year of program execution there are reductions in the depot maintenance program performed in the public depots and an increase in the amount of contracted workload over what had been projected. An official at one depot said that only about 94 percent of the current program would likely be accomplished in fiscal year 2001 due to operating inefficiencies, parts shortages, budget reductions, and other constraints. Air Force Materiel Command headquarters officials said this lower execution rate can affect both public and private workloads, but it generally has a greater impact in reducing the actual work accomplished in the public depots. Changes in the amount of workload actually accomplished will have obvious impacts on the dollars and public-private sector allocations reported in the future.

The Air Force future-years report shows the Department exceeding the 50-percent ceiling in fiscal year 2001. On July 31, 2001, the Air Force notified the Congress that the Secretary of the Air Force had waived the 50-percent requirement for fiscal year 2001.
The waiver determination was justified as necessary for national security because "the Air Force concluded that no significant workload could be moved into the public depots in the near term without increased cost and an adverse effect on readiness." While we did not analyze the basis for the Air Force's determination, we agree that transitioning workloads, whether new or existing, to a military depot would require increased funding. The waiver determination also said that, to address future compliance, the Air Force is preparing a long-term strategic plan that will address current capacity shortfalls as well as new technologies and the associated infrastructure. This long-term depot strategy is supposed to be designed to ensure compliance with the 50-percent limitation. However, the Air Force promised such a plan last year and has not succeeded in developing it. After announcing on January 11, 2000, that it would exceed the 50-percent limitation, the Air Force told interested congressional members and the Subcommittee on Readiness, Senate Committee on Armed Services, that it was developing a short- and long-term strategy for resolving the 50-50 dilemma. According to Air Force officials, they were unsuccessful in identifying workloads that could be moved into the Air Force depots in the short term and equally unsuccessful in identifying workloads to transfer into the depots over the long term. Program offices said they had entered into long-term contracts with private contractors and had not budgeted for the technical data or depot plant equipment and facilities that might be required to establish an in-house capability at one of the three remaining Air Force depots. Further, DOD and Air Force acquisition strategy continues to express a preference for long-term contractor logistics support, including maintenance, supply, and other logistics functions. This year's waiver determination did not specify a completion date for the Air Force's latest effort to develop a long-term depot strategy that would resolve the current 50-50 imbalance. As of November 2001, the Air Force had not put such a plan in place. Without an approved strategy for increasing the Air Force depots' workloads and funding for the resources required to establish new capability, the Air Force will not be able to resolve its 50-50 problem. According to officials commenting on our draft report, the Air Force's long-term depot strategy is now expected to be completed by the end of calendar year 2001.

We identified two major problems in the Navy's reporting that, together with our reservations about the reliability of the Navy's data, lead us to conclude that the Navy's future-years projections, like those of the other services, are not accurate. The resulting impacts on public-private sector allocations after adjusting for these problems are displayed in table 7. The problems and their impacts on the allocations are as follows. First, as discussed in the earlier section on the prior-years 50-50 report, the Navy did not report inactivation costs for conventional ships as depot maintenance costs, although it reported similar work for nuclear ships. The Navy projects that up to $357 million over the 5-year period covered in the future-years report will be spent on conventional ship inactivation activities, mostly in the private sector. The Navy projections did include about $1.2 billion in nuclear inactivation workloads at public shipyards for this period.
As discussed earlier, the Navy believes that conventional ship inactivation workload is generally not equivalent in complexity to depot-level maintenance. A Navy official said the Navy would use the additional clarifying guidance being developed by OSD in reporting future 50-50 workload allocations. Second, we determined that the Navy did not include the costs of repairs to the USS Cole, the target of last year's terrorist attack. Late in calendar year 2000, the Congress appropriated $150 million in fiscal year 2001 funds for these repairs, which are to be accomplished at a private shipyard. Officials said this supplemental appropriation came after the 50-50 report was developed. Officials estimate that about another $93 million will be required to complete the repairs; this amount was also not reported in the 50-50 data. For display purposes, table 7 shows this additional amount in fiscal year 2002.

Several other issues affect the Navy's report but cannot be quantified. For example, the Navy's plans call for a substantial increase in submarine depot maintenance workloads associated with a major refueling program during this reporting period. Most of the work is expected to be accomplished at the public shipyards and was reported that way for 50-50 purposes. However, Naval Sea Systems Command officials told us that the plans and depot requirements are not yet firm and that extensive use of contract employees to augment the civilian workforce at the shipyards is anticipated. These contract requirements have not been fully identified and were not included in the 50-50 report. In commenting on a draft of this report, Navy officials stated that as soon as the contract requirements for shipyard augmentation are determined, the amounts will be included in the 50-50 reports.

As we reported last year, the Navy is moving to a regional maintenance approach, which has made it difficult to identify and report depot-level work. Initially implemented at Pearl Harbor, this approach combines depot-level maintenance with lower, non-depot levels of maintenance, changes funding sources, and consolidates financial systems. The Navy has not yet developed a system to discretely track and account for work meeting the definition of depot maintenance. In its absence, officials used estimates to report 50-50 data. While this is a reasonable approach, actual data will likely cause future estimates to be revised if this program is implemented as planned throughout the Navy.

The Marine Corps' projections for fiscal years 2001-2005 were based on a combination of budget formulation figures and straight-line projections. Headquarters Materiel Command officials agreed that the reported data do not fully reflect the planned decrease in total revenues for this period, the impact of new systems going to the private sector for support, or the anticipated decrease in the public depot workforce. They did not provide an estimate reflecting these changes but said actions are underway to improve future reports. We also observed that the Navy and Marine Corps are now projecting a substantial shift to more private sector workload over the 50-50 reporting period compared to last year. Comparing the 5 years that the current and last year's 50-50 reports have in common (fiscal years 2000-2004), the Navy is now projecting an additional $2.9 billion in private sector work and a decrease of $1.1 billion in public sector work.
Whereas last year's reports projected private sector allocations in the 35- to 42-percent range, this year's reports project a significantly higher range of 43 to 46 percent. Navy officials responsible for coordinating and reporting the 50-50 data attributed the increase in private sector amounts to (1) a sharp increase in private sector wage rates, (2) shifts of some ship maintenance from the public to the private sector to make room for the extensive submarine refueling effort to be accomplished mainly in the public sector, (3) contracts with private shipyards, and (4) changes in cost models and estimating baselines.

While DOD has greatly improved the 50-50 reporting guidance and its implementation, opportunities for improvement still exist. We have noted improvements in the process each year, particularly the Air Force's and Army's use of internal auditors to review data, the Navy's development of internal guidance, and OSD's revisions to its reporting guidance in response to our recommendations. At the same time, some problems and concerns persist, including incomplete and inconsistent recordkeeping by the services and the Navy's inadequate data validation. While the 50-50 process and the resulting data will never be perfect, there are still opportunities for DOD to improve the validity of the process and the reliability of the data. Such actions could make the 50-50 data a more reasonable input for managing the depot maintenance program toward future compliance with the 50-50 requirement.

For this year's 50-50 data collection, Army officials added to the already extensive and detailed internal instructions used to supplement the OSD guidance. Army officials cited our report findings and their auditors' findings in improving guidance in several areas, including warranties and contractor logistics support. Also, the Deputy Chief of Staff for Logistics held two workshops to prepare for the 50-50 data call and to address our prior-year findings, OSD's reporting requirements, and the changes in the Army's supplemental instructions. The individuals responsible for responding to the data call and coordinating reporting efforts within their respective commands attended the workshops. Nonetheless, we noted that guidance and reporting requirements were not always clearly communicated to or understood by potential reporting activities, resulting in some incomplete and missed reporting. The Army's task in this regard is challenging in that 14 major commands, along with numerous reporting levels within each command, need to be involved. Still, improved communication of the reporting guidance to activities that may not initially recognize that they have reportable maintenance activities should mitigate the problem of incomplete and missed reporting in the future.

The Army Audit Agency reviewed the data collection process at the command level as it was evolving, and problems the auditors identified in the report for fiscal year 2000 were for the most part corrected before the activity reports were sent to Army headquarters. The error rate identified by the auditors was about one-half the rate found last year: 7.1 percent for fiscal year 2000 versus 15 percent for fiscal year 1999. For fiscal year 2000, Army auditors reviewed about $13 billion and identified adjustments of about $92 million. The auditors attributed the improvement to better guidance, the planning workshops, and overall management efforts.
However, Army auditors placed little emphasis on reviewing the future-years projections. This year's review of those projections addressed only the process and how reporting organizations were determining projections. The Army auditors did not review or spot-check the amounts projected for individual items or weapon systems, deciding that a detailed audit was not necessary because they had done a more thorough analysis of the future-years process and numbers last year without identifying significant errors. Nonetheless, we found significant errors that the Army auditors would likely have identified had they spot-checked the projected amounts.

The Air Force supplemented the OSD guidance by adding details on contractor and interim contractor logistics support contracts, partnering, and software maintenance to its internal instructions. For the fourth consecutive year, the Air Force Audit Agency assisted Air Force headquarters and Materiel Command officials in verifying data and validating collection processes, significantly improving the quality and completeness of the data before submission to OSD and the Congress. However, as in the other services, some Air Force offices did not maintain adequate documentation for reviewing and supporting the data. Some offices we visited could not readily reconstruct estimating methodologies or provide source documents for their reported data. For example, F-117 contractor logistics support costs and F-15/F-16 trainer and simulator costs were omitted. Documentation is valuable not only for audit purposes and management review but also as a historical record that can be used in subsequent years. This is especially important given the high turnover of staff performing these functions.

In response to our prior findings and those of the Naval Audit Service, Navy headquarters compiled and distributed a handbook with guidance to supplement the OSD reporting requirements. This handbook improved the Navy's process by including more detailed data collection procedures, a responsibility matrix, and a standard reporting format. Some commands also prepared additional instructions for reporting units. In addition, the headquarters official responsible for coordinating the Navy's reports conducted on-site reviews at several commands and identified some errors that were corrected before the data were reported to OSD. However, the Navy did not hold a planning meeting to assemble the key staff involved in the 50-50 reporting process from the major commands to discuss and critically analyze the procedural guidance. We have found such meetings useful in the Air Force and the Army for surfacing problems and concerns and for helping ensure a more consistent approach to data collection.

Furthermore, the Naval Audit Service was not asked to review processes and validate this year's 50-50 data. Last year, auditors found that Navy guidance, data validation, and documentation lacked the detail needed to identify, collect, support, report, and document depot-level maintenance workloads between the public and private sectors. As a result, the Naval Audit Service concluded that the quality of the data reviewed was inadequate to determine the accuracy and completeness of the prior submission for fiscal year 1999. According to audit officials, without a follow-on review, the accuracy and completeness of the Navy's 50-50 data remain suspect. We encountered similar problems during this review.
The Navy's decentralized, tiered reporting process rolls up data from numerous subactivities, consolidating 50-50 numbers into summary reports with little evidence that the data were checked and validated while passing through the reporting layers. In many cases, there was no audit trail sufficient to track and document the estimating methodologies and the data used to develop the individual subactivity 50-50 submissions that were subsequently rolled up into a single program or major activity amount. Although the Air Force and Army also have multilayered reporting chains, we found their processes for collecting and verifying information to be generally better, especially their use of audit agencies to provide an effective third-party review of the data collection process and to correct errors and validate data before submission to OSD and the Congress.

Our review this year, as in the past, determined that each of the services could better maintain auditable records documenting data collection methodology, estimating techniques, and final reported results. While some central records are maintained, information and reporting rationales at program offices and maintenance activities are sometimes lacking, and it is difficult for a third party to understand and reconstruct the methodology and verify the results. For example, some Air Force reporting activities had not kept consolidated records documenting their data collection procedures, estimating methodologies and assumptions, and data sources. In some other instances, key reporting staff had been transferred and, without adequate documentation, new staff could not readily explain or replicate the results. Also, a Navy major command was in the process of realigning its units for budget purposes, and neither we nor the officials responsible for the 50-50 effort could always determine which project unit actually provided the data, the type of depot maintenance being performed, or the class of ship involved. We also noted that the command had developed special budget codes to help identify and track depot funding but that the codes were not extensively or consistently used. Good records, documentation of the processes followed, and identification of the data sources used are important not only for audit and management oversight but also as a historical record that newly assigned staff and programs reporting for the first time can follow in collecting data.

Expanded guidance and the efforts of the service audit agencies have improved the prior-years 50-50 report overall, though more so in the Army and the Air Force. Nonetheless, problems still exist, particularly in the Navy, where inadequate management oversight has resulted in continuing weaknesses in reporting control processes and data validation procedures. These weaknesses make it difficult to verify the reliability of the reported data. Correcting reporting accuracy problems in all the services is necessary to provide the Congress and DOD managers assurance that the requirements of 10 U.S.C. 2466 are being met. Inaccurate data hinder Defense managers in taking timely actions to meet the statutory requirements and leave the Congress uncertain as to whether legislative requirements are being met. The future-years report is not accurate or reasonable, and it is not currently a useful tool for guiding DOD actions or informing the Congress about likely future compliance with section 2466 requirements.
The management of the military services placed much less emphasis on ensuring the accuracy and reasonableness of the future-years data. Accurate and reasonable projections are particularly important for the Air Force, which has now twice waived the private sector limitation and is likely to exceed the 50-percent ceiling in future years. The Army also faces increasing challenges in managing its depot maintenance work within the 50-percent ceiling, but problems with the reliability of the data, and the lack of an effective review of the future-years data and process by the Army Audit Agency, concealed the extent of the problem. Admittedly, projecting the future public-private sector mix is much more difficult and much less precise than quantifying the results of what has already occurred. Depot plans and strategies are still evolving, with uncertain impacts on depot workloads. Similarly, repair plans for new and upgraded systems and other logistics programs and initiatives affecting the amount and location of depot maintenance services are not fully known. Yet future-years projections must use the best information available to make the most reasonable estimates. Such information should include the latest budget estimates, with reasonable adjustments made as needed.

Although DOD and service guidance for both the prior-years and future-years reports has been improved over the years, we continue to identify errors, omissions, and inconsistencies in several reporting categories. These were caused in large part by insufficient direction and clarification in the reporting guidance and by inadequate management attention. The problem areas include inactivation activities, contractor logistics support, depot maintenance at non-depot locations, government-furnished material, contract general and administrative expenses, incorporation of future repair costs for new systems, and adjustments for expected execution of programmed workload. In addition, recordkeeping weaknesses hinder audit and management oversight efforts and fail to provide a sound historical record to facilitate future data collection and reporting. With improved management oversight and direction and the implementation of the required corrective actions, the 50-50 report could become a more useful management tool for DOD and the Congress in managing the Department's depot maintenance program to attain future compliance with the 50-50 requirement.
To improve the 50-50 data collection, validation, and reporting processes for both prior-years and future-years data, and thus the reliability and reasonableness of the reported data, and to improve management direction and oversight, we recommend that the Secretary of Defense require that:

- the Secretary of the Army (1) identify depot maintenance requirements associated with the recapitalization program, (2) require the Army Audit Agency to review both prior-years and future-years 50-50 data, (3) communicate the reporting requirements to all organizational levels responsible for reporting data, and (4) finalize and issue guidance concerning the reporting of depot maintenance at non-depot locations;

- the Secretary of the Navy (1) review the management priority accorded the 50-50 reporting process throughout the command structure, (2) implement improved management controls and oversight of the processes used by the individual reporting commands to collect, verify, and report 50-50 data, (3) finalize procedures for accurately identifying and reporting depot maintenance costs at regional and other non-depot locations, (4) before issuing the data call for the 50-50 reports due in fiscal year 2002, hold a planning meeting of key officials representing all reporting commands to discuss and agree upon 50-50 data collection processes and guidance, and (5) direct the Naval Audit Service to review 50-50 processes and data to validate the data collection processes and results for both the prior-years and future-years reports;

- the Assistant Deputy Under Secretary of Defense for Maintenance Policy, Programs and Resources expand and clarify OSD guidance to (1) specify whether contract general and administrative expenses incurred by government employees and similar types of costs should be counted as part of the public or the private sector, and (2) allow revisions to budgetary estimates to better reflect known and anticipated changes in workloads, workforce, priorities, and performance execution rates, in order to achieve more reasonable projections of depot requirements where historical data indicate that budget data are unrealistic;

- the Assistant Deputy Under Secretary of Defense for Maintenance Policy, Programs and Resources, in conjunction with the secretaries of the military departments, improve and clarify 50-50 reporting guidance in the problem areas noted in this report, including inactivation activities, contractor logistics support, incorporation of future repair costs for new and upgraded systems in 50-50 projections, depot-level maintenance performed at non-depot locations, and inclusion of government-furnished material in contract repair costs; and

- the secretaries of the military departments reemphasize and expand procedures for maintaining adequate records documenting data collection processes, data sources, and estimating methodologies, in order to facilitate management oversight and audits and to provide a historical record that staff newly assigned to the 50-50 process can readily use to replicate sound, efficient, and consistent data collection efforts each year.

DOD generally concurred with our recommendations. However, it did not concur with certain parts of two recommendations. The Department's specific comments and our evaluation of them are discussed below. Service officials also offered some technical comments that we incorporated in this report where appropriate. DOD's comments are included as appendix I to this report.
The Department did not concur with two parts of our recommendation addressing Navy 50-50 issues. First, regarding our recommendation that the Navy hold a 50-50 planning meeting, the Navy's response said that its new handbook, frequent contacts between the 50-50 manager and key reporting officials, and meetings held internally by reporting organizations accomplish the same purpose. While we agree that these efforts are important and should be continued, an initial planning meeting of key representatives of the reporting commands has proven useful in both the Army and the Air Force. For example, service officials have said that these meetings surfaced reporting issues up front and helped ensure more consistent reporting processes and results. Thus, we continue to believe the Navy should hold an initial planning meeting. Second, the Navy disagreed with our recommendation that the Naval Audit Service review 50-50 processes and data. The Department's response noted that the Navy believes sufficient management attention has been given to this process and that it is confident in the integrity of its data. However, the comments also stated that the Naval Audit Service will be used should the Navy determine that an audit review is necessary. Because of the value added by the audit services in the Army and the Air Force (and by the Naval Audit Service last year), we continue to believe that the Naval Audit Service should be used to review 50-50 processes and data.

Finally, the Department did not concur with one part of our recommendation regarding clarification of OSD 50-50 guidance to the services. Specifically, the Department's response noted that the counting of contract general and administrative expenses incurred by government employees is unique to the Air Force and that additional departmental guidance is not necessary. We continue to believe that this issue should be clarified in the OSD guidance because a substantial amount of money is involved and the Air Force continues to make this adjustment every year.

To determine whether the military departments met the 50-50 requirement in the prior-years report, we analyzed each service's procedures and internal management controls for collecting and reporting depot maintenance information in response to the section 2466 requirement. We reviewed supporting details (summary records, accounting reports, budget submissions, and contract documents) at departmental headquarters, major commands, and selected maintenance activities. We compared processes to determine consistency and compliance with legislative provisions, OSD guidance, and military service instructions. We selected certain programs and maintenance activities for more detailed review. We particularly examined reporting categories that DOD personnel and we had identified as problem areas in current and past reviews; these areas included interserviced workloads, contractor logistics support, warranties, software maintenance, and depot maintenance at non-depot locations. We evaluated the processes for collecting and aggregating data to ensure accurate and complete reporting and to identify errors, omissions, and inconsistencies. We coordinated our work with the Army and Air Force audit agencies, shared information, and obtained the results of their data validation efforts.
To determine whether the future-years projections were based on accurate data, valid assumptions, and existing plans and represented reasonable estimates, we followed the same general approach and methodology used to review the prior-years report discussed above. Although the future-years report is a budget-based projection of expenditures, the definitions, guidance, organization, and processes used to report future data are much the same as those for the prior-years report of actual obligations. We discussed with DOD officials the main differences between the two processes, the manner in which the data were derived from budgets and planning requirements, and the key assumptions made in the outyear data.

For reviews of both 50-50 reports, we performed checks and tests, including variance analyses, to judge the consistency of this information with data from prior years and with the future-years budgeting and programming data used in DOD's budget submissions and reports to the Congress. For example, we compared each service's 50-50 data reported in February and April 2001 for the period 1999 through 2004 with the data reported for the same years in the 50-50 reports submitted in 2000. We found repeated and significant changes, even though the estimates were prepared only about a year apart. This analysis helped us identify the Army's large transcription errors and unreported costs, which resulted in data reported to the Congress erroneously indicating an increase in the percentage of depot maintenance work assigned to the public sector. Instead, our corrected data show the Army's allocation percentages remaining rather constant during this period and closer to the 50-percent ceiling. This analysis also revealed a greater increase in the Navy's shift to private sector workload than had been projected last year. Variance analysis showed that congressional and DOD decisionmakers were given quite a different view of the public-private sector workload mix than that presented just last year. During this review, we also drew extensively on our prior and ongoing audits in such areas as sustainment planning, depot policies, financial systems and controls, and DOD pilots and initiatives for increasing contractor involvement in maintenance.

Several factors concerning data validity and completeness were considered in our methodology and approach to reviewing the prior-years and future-years reports. One key factor is the continuing deficiencies GAO has noted in DOD's financial systems and reports, which preclude a clean opinion on its financial statements and limit the accuracy of budget and cost information. Another factor is that documenting depot maintenance workload allocations between the public and private sectors is becoming more complicated because of the consolidation of maintenance activities and the performance of depot-level maintenance at field locations. This (1) makes it more difficult to identify work that meets the statutory definition of depot maintenance, (2) complicates workload reporting, and (3) results in underreporting of depot maintenance for both the public and private sectors. In addition, many contracts, especially the newer performance-based contracts, do not separately identify maintenance activities or account separately for their costs, which can result in under- or overreporting of depot maintenance work performed in the private sector.
To review DOD's efforts to improve the accuracy and completeness of the reports, we discussed with the officials managing and coordinating the reporting process their efforts to address known problem areas and to respond to recommendations by the audit agencies and us. We compared this year's sets of instructions with last year's to identify changes and additions. We reviewed efforts to identify reporting sources and to distribute guidance and taskings. We asked primary data collectors their opinions on how well the efforts were managed and the data verified, and we asked them to identify problem areas and ideas for improving reporting. We reviewed prior recommendations and service audit agency findings to determine whether known problem areas were being addressed and resolved.

We interviewed officials, examined documents, and obtained data at OSD and at Army, Navy, and Air Force headquarters in Washington, D.C.; the Army Materiel Command in Alexandria, Virginia; the Naval Sea Systems Command in Arlington, Virginia; the Naval Air Systems Command in Patuxent River, Maryland; the Air Force Materiel Command in Dayton, Ohio; the Army Audit Agency in Washington, D.C.; the Air Force Audit Agency in Dayton, Ohio; and several operating activities under the military departments' materiel commands. We conducted our review from February to July 2001 in accordance with generally accepted government auditing standards.

We are sending copies of this report to the Secretary of Defense, the Secretary of the Air Force, the Secretary of the Army, the Secretary of the Navy, the Commandant of the Marine Corps, the Director of the Office of Management and Budget, and interested congressional committees. We will make copies available to others upon request and will post the report on GAO's home page at www.gao.gov.

Key contributors to this report are listed in appendix II. In addition to the above contacts, John Brosnan, Raymond Cooksey, Bruce Fairbairn, Johnetta Gatlin-Brown, Jane Hunt, Steve Hunter, Glenn Knoepfle, Ron Leporati, Fred Naas, Andy Marek, and Bobby Worrell made contributions to this report.

Defense Maintenance: Sustaining Readiness Support Capabilities Requires a Comprehensive Plan (GAO-01-533T, Mar. 23, 2001).
Depot Maintenance: Key Financial Issues for Consolidations at Pearl Harbor and Elsewhere Are Still Unresolved (GAO-01-19, Jan. 22, 2001).
Depot Maintenance: Action Needed to Avoid Exceeding Ceiling on Contract Workloads (GAO/NSIAD-00-193, Aug. 24, 2000).
Air Force Depot Maintenance: Budgeting Difficulties and Operational Inefficiencies (GAO/AIMD/NSIAD-00-185, Aug. 15, 2000).
Depot Maintenance: Air Force Waiver to 10 U.S.C. 2466 (GAO/NSIAD-00-152R, May 22, 2000).
Depot Maintenance: Air Force Faces Challenges in Managing to 50-50 Ceiling (GAO/T-NSIAD-00-112, Mar. 3, 2000).
Depot Maintenance: Future Year Estimates of Public and Private Workloads Are Likely to Change (GAO/NSIAD-00-69, Mar. 1, 2000).
Depot Maintenance: Army Report Provides Incomplete Assessment of Depot-type Capabilities (GAO/NSIAD-00-20, Oct. 15, 1999).
Depot Maintenance: Status of the Navy's Pearl Harbor Project (GAO/NSIAD-99-199, Sep. 10, 1999).
Depot Maintenance: Workload Allocation Reporting Improved, but Lingering Problems Remain (GAO/NSIAD-99-154, July 13, 1999).
Navy Ship Maintenance: Allocation of Ship Maintenance Work in the Norfolk, Virginia, Area (GAO/NSIAD-99-54, Feb. 24, 1999).
Defense Depot Maintenance: Public and Private Sector Workload Distribution Reporting Can Be Further Improved (GAO/NSIAD-98-175, July 23, 1998).
Defense Depot Maintenance: DOD Shifting More Workload for New Weapon Systems to the Private Sector (GAO/NSIAD-98-8, Mar. 31, 1998).
Defense Depot Maintenance: Information on Public and Private Sector Workload Allocations (GAO/NSIAD-98-41, Jan. 20, 1998).
Outsourcing DOD Logistics: Savings Achievable But Defense Science Board's Projections Are Overstated (GAO/NSIAD-98-48, Dec. 8, 1997).
Navy Regional Maintenance: Substantial Opportunities Exist to Build on Infrastructure Streamlining Progress (GAO/NSIAD-98-4, Nov. 13, 1997).
Defense Depot Maintenance: Uncertainties and Challenges DOD Faces in Restructuring Its Depot Maintenance Program (GAO/T-NSIAD-97-112, May 1, 1997, and GAO/T-NSIAD-97-111, Mar. 18, 1997).
Defense Depot Maintenance: DOD's Policy Report Leaves Future Role of Depot System Uncertain (GAO/NSIAD-96-165, May 21, 1996).
Defense Depot Maintenance: More Comprehensive and Consistent Workload Data Needed for Decisionmakers (GAO/NSIAD-96-166, May 21, 1996).
Defense Depot Maintenance: Privatization and the Debate Over the Public-Private Mix (GAO/T-NSIAD-96-148, Apr. 17, 1996, and GAO/T-NSIAD-96-146, Apr. 16, 1996).
Depot Maintenance: Issues in Allocating Workload Between the Public and Private Sectors (GAO/T-NSIAD-94-161, Apr. 12, 1994).
Federal law states that not more than 50 percent of annual depot maintenance funding can be used for work performed by private sector contractors. In an earlier report, GAO could not determine whether the Department of Defense (DOD) had complied with the 50-percent limitation. More recent GAO testimony highlighted continuing and pervasive weaknesses in DOD's financial management systems, operations, and controls that impair its ability to accurately accumulate and report reliable budget execution and cost data. This report found that the military services had mixed results in complying with the 50-50 requirement for private sector workloads in fiscal years 1999 and 2000. The projections of the Army, Air Force, and Navy in DOD's report for fiscal years 2001 through 2005 are neither accurate nor reasonable estimates of the future allocations of public and private sector workloads. The services placed much less emphasis on the future-years data and reports. The reported projections use incorrect data and questionable assumptions and are inconsistent with existing budgets and management plans. DOD's report should be viewed with caution because it does not provide the best data available to DOD decisionmakers and congressional overseers, and the reported data are misleading about how future workloads are likely to be allocated between the public and private sectors. Although DOD has greatly improved the 50-50 reporting guidance and the implementation of the reporting process, further improvements could be made.
The federal government relies heavily on contractors to provide a range of goods and services. In fiscal year 2007, about 160,000 contractors provided support to federal agencies. A large portion of these contractors was concentrated in five agencies: DOD, DHS, DOE, NASA, and GSA. Among these five agencies, DOD accounted for 72 percent of all contract obligations, spread across about 77,000 contractors, in fiscal year 2007 (see table 1). These five agencies often rely on the same contractors. Table 2 shows the number and percentage of contractors that DHS, NASA, DOE, and GSA had in common with DOD in fiscal year 2007.

The FAR requires agencies to consider past performance information as an evaluation factor in certain negotiated competitive procurements, along with other evaluation factors such as price, management capability, and technical excellence. Contractor past performance information may include the contractor's record of conforming to contract requirements and to standards of good workmanship, record of forecasting and controlling costs, adherence to contract schedules, and history of reasonable and cooperative behavior and commitment to customer satisfaction. Although the FAR requires officials to consider past performance as an evaluation factor when selecting contractors in certain negotiated procurements, agencies have broad discretion in deciding its importance relative to other factors in the evaluation scheme. Agencies determine which of the contractor's past contracts are similar to the contract to be awarded in terms of size, scope, complexity, or contract type, as well as the relative importance of past performance. For procurements with clearly defined requirements and minimal risk of unsuccessful contract performance, cost or price may play a more important role than past performance in selecting contractors. For procurements with less clearly defined requirements and a higher risk of unsuccessful contract performance, it may be in the government's best interest to consider past performance, technical capability, and other factors as more important than cost or price. The FAR requires that solicitations disclose the evaluation factors that will be used in selecting a contractor and their relative importance. In evaluating past performance information, agencies must consider, among other things, its (1) currency and relevance, (2) source and context, and (3) general trends in the contractor's performance. The solicitation must also describe how offerors with no performance history will be evaluated.

Once a contract is awarded, the government should monitor the contractor's performance throughout the performance period. Surveillance includes oversight of the contractor's work to provide assurance that the contractor is delivering timely, quality goods or services and to help mitigate any contractor performance problems. An agency's monitoring of a contractor's performance may serve as a basis for past performance evaluations. The FAR requires agencies to prepare an evaluation of contractor performance, at the time the work is completed, for each contract that exceeds the simplified acquisition threshold, and it gives agencies discretion to prepare interim evaluations for contracts with a performance period exceeding one year. DOD generally has higher thresholds, based on business sectors. A number of systems across the government are used to capture contractor performance information, which is eventually passed on to PPIRS.
DOD maintains three systems for its military departments and agencies: the Architect-Engineer Contract Administration Support System (ACASS), the Construction Contractor Appraisal Support System (CCASS), and the Contractor Performance Assessment Reporting System (CPARS). NASA has its own system, the Past Performance Database (PPDB). DHS and DOE are transitioning to DOD's CPARS. Other civilian departments use the Contractor Performance System (CPS), managed by the National Institutes of Health. Effective July 1, 2002, all federal contractor past performance information captured through these disparate systems was to be made centrally available to federal agency contracting officials through PPIRS, a Web-enabled, governmentwide application for consolidating federal contractor performance information.

Since PPIRS's implementation, concerns have been raised about the completeness of its information. In February 2008, a DOD Inspector General report noted that the information in CPARS, which feeds into PPIRS, was incomplete and questioned whether acquisition officials had access to all the information they needed to make business decisions. Specifically, in reviewing performance assessment reports in CPARS for DOD contracts valued at more than $5 million, the Inspector General reported that 82 percent did not contain detailed narratives sufficient to establish that ratings were credible and justifiable, 68 percent had overdue performance reports, and 39 percent were registered more than a year late. In addition, the report identified material internal control weaknesses in Air Force, Army, and Navy procedures for documenting and reporting contractor performance information.

Agencies considered past performance information in evaluating contractors for the contract solicitations we reviewed, but many of the officials we spoke with noted that past performance rarely, if ever, was the deciding factor in their contract award decisions. Their reluctance to base award decisions on past performance was due, in part, to their skepticism about the comprehensiveness and reliability of past performance information and the difficulty of assessing its relevance to specific acquisitions. For the 62 contract solicitations we reviewed, the ranking of past performance as an evaluation factor relative to other non-cost factors varied. The company's technical approach was the non-cost factor considered most important in most solicitations; past performance was ranked first in importance in about 38 percent of the solicitations (appendix I provides more details on the methodology for selecting and reviewing contract solicitations). Contracting officials who viewed past performance as an important evaluation factor noted that basing contract award decisions, in part, on past performance encourages companies to achieve better acquisition outcomes over the long term. For example, according to officials at one Air Force location, an incumbent contractor was not awarded a follow-on contract worth over $1 billion primarily because of poor performance on the prior contract. As a result, the contractor implemented several management and procedural changes to improve its performance on future contracts. Nevertheless, although past performance was an evaluation factor in all the solicitations we reviewed, over 60 percent of the contracting officers we talked with stated that past performance is rarely or never a deciding factor in selecting a contractor.
Many contracting officers stated they preferred to rely on other, more objective factors such as technical approach or price. Officials cited several reasons for their reluctance to rely more on past performance in making award decisions, including the difficulty of obtaining objective and candid past performance information. For example, over half of the contracting managers we met with noted that officials assessing a contractor's performance have difficulty separating problems caused by the contractor from those caused by the government, such as changing or poorly defined government requirements. Fear of damaging contractor relations may also influence assessments of contractor performance, particularly in areas where only a limited number of contractors can provide a particular good or service. Some contracting officials told us there may also be a tendency to "water down" assessments if they perceive that a contractor may contest a negative rating. Contracting officials also cited other reasons for not relying more on past performance information, including (1) difficulty assessing relevance to the specific acquisition or evaluating offerors with no relevant past performance information, (2) a lack of documented examples of past performance, and (3) a lack of adequate time to identify, obtain, and analyze past performance information.

Contracting officials often rely on multiple sources of past performance information. Most officials told us that information from a prospective contractor's prior government or industry customer references, gathered through interviews or questionnaires, is the most useful source of past performance information. Moreover, several contracting officials noted that they use questionnaires to obtain past performance information on major subcontractors. Officials noted, however, that questionnaires are time-consuming and that the performance information collected through them is not shared governmentwide. Other sources of past performance information include informal contacts, such as other contracting officers who have dealt with the contractor in the past. Most contracting officials we spoke with also used PPIRS but cited the absence of information in PPIRS as one reason for typically relying on other sources, along with challenges in identifying information relevant to their specific acquisitions. Several contracting officials stated that a governmentwide system like PPIRS, if fully populated, could reduce the time and effort needed to collect past performance information for use in selecting contractors. Regardless of the source used, contracting officials agreed that for past performance information to be meaningful in contract award decisions, it must be documented, relevant, and reliable.

Our review of PPIRS data for fiscal years 2006 and 2007 found relatively little past performance information available for sharing and potential use in contract award decisions. One reason is that agencies are not documenting the contractor performance information that feeds into PPIRS, including, in some cases, contract actions involving task or delivery orders placed against GSA's Multiple Award Schedule (MAS). Other information that could provide key insights into a contractor's performance, such as information on contract terminations for default and on a prime contractor's management of subcontractors, was also not systematically documented. In addition, contracting managers lack tools and metrics to monitor the completeness of past performance data in the systems agencies use to record past performance information.
Further, the lack of standardized evaluation factors and rating scales in the systems that collect past performance information has limited the systems' usefulness in providing an aggregate-level picture of how contractors are performing. Finally, the lack of central oversight of PPIRS has undermined efforts to capture adequate past performance information.

The FAR requires agencies to prepare an evaluation of contractor performance for each contract that exceeds the simplified acquisition threshold ($100,000 in most cases) when the contract work is completed. While the FAR definition of a contract can be read to include orders placed against GSA's MAS, the FAR does not specifically state whether this requirement applies to contracts, or task or delivery order contracts, awarded by another agency. While DOD and many of the agencies we reviewed have issued supplemental guidance reiterating the FAR requirement to evaluate and document contractor performance, information that ultimately should be fed into PPIRS, the agencies generally did not comply with the requirement. We estimated that, for the agencies we reviewed, about 23,000 contracts required a performance assessment in fiscal year 2007. For the same period, we found about 7,000 assessments in PPIRS, covering about 31 percent of the contracts requiring an assessment (see table 3). About 75 percent of all past performance reports in PPIRS were from DOD, with the Air Force accounting for the highest percentage of completed assessments; however, there were relatively few for some military services, a finding consistent with the DOD Inspector General's February 2008 report. For the civilian agencies we reviewed, there were relatively few performance reports in PPIRS compared with the number we estimated. For example, for fiscal year 2007, an estimated 13 percent of the DHS contracts that would potentially require a performance assessment were documented in PPIRS.
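As a rough arithmetic check on these totals (a sketch using the rounded counts quoted above; the report's figure of about 31 percent reflects the underlying unrounded counts):

\[
\text{completion rate} \approx \frac{7{,}000 \text{ assessments}}{23{,}000 \text{ contracts requiring assessment}} \approx 0.30.
\]

Put differently, roughly 7 of every 10 contracts that required an assessment had no assessment recorded in PPIRS for other agencies to draw on.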
The recent awarding of contracts to defaulted contractors highlights the need for information on contract terminations when making contracting decisions. For example, a $280 million Army munitions contract was awarded to a contractor that had previously been terminated for default on several different contracts. The contracting officer told us that this information, if available, would have factored into the contract award decision. Subsequently, this same contractor defaulted under that contract. Similarly, an October 2008 report issued by the Office of the Special Inspector General for Iraq Reconstruction documented that at least eight contractors that had one or more of their projects terminated for default received new contracts and purchase orders. As part of this audit, the office examined whether the agencies had evaluated the contractors' prior performance before awarding contracts and whether they had considered suspending or debarring the poor-performing contractors. Although the report found that the awards to defaulted contractors were within the authority provided by the FAR, it raised questions about the degree to which the contractors' prior performance was considered. In June 2008, the FAR Council opened a case to address termination-for-default reporting. In addition, DOD issued policy in July 2008 on the need for departmentwide centralized knowledge of all contracts that have been terminated, regardless of dollar amount. At the subcontractor level, apart from evaluating a prime contractor's management of its subcontractors, the federal government historically has had limited visibility into subcontractor performance, despite the increased use of subcontractors. In January 2008, we reported that total subcontract awards from DOD contracts had increased by 27 percent over a 4-year period—from $86.5 billion in fiscal year 2002 to $109.5 billion in fiscal year 2006. As we reported, federal contractors must manage contract performance, including planning and administering subcontracts as necessary, to ensure the lowest overall cost and minimize technical risk to the government. The FAR provides that the agency's past performance evaluation should take into account past performance information regarding a prospective contractor's subcontractors that will perform major or critical aspects of a requirement when such information is relevant to an acquisition. Agency contracting officials informed us that they do not assess the performance of these subcontractors. Rather, if they collect any information, it is in their assessments of the prime contractor's subcontract management. However, not all collection systems used by agencies allow for systematically capturing subcontract management information when it is applicable to a procurement. DOD's CPARS system has a separate rating factor for subcontract management for systems contracts, whereas systems used by NASA and other civilian agencies do not have a separate factor. DOD guidance states that assessments must not be done on subcontractors, but CPARS allows the assessing official to address the prime contractor's ability to manage and coordinate subcontractor efforts. Beyond this, no additional information on subcontractors is routinely collected. In addition, the FAR was recently revised to explain that information on contractor ethics can be considered past performance information.
The FAR now states that a contractor's history of reasonable and cooperative behavior and commitment to customer satisfaction may be considered part of a contractor's past performance. This type of data is not currently being systematically captured and documented for use in contract award decisions. Several contracting officials acknowledged that documenting contractor performance was generally not a priority, and fewer than half of the contracting managers we talked with tracked performance assessment completeness. Some agency officials we spoke with said that a lack of readily accessible system tools and metrics on completeness has made it difficult to manage the assessment process. CPARS and CPS—assessment reporting systems used by DOD and DHS, respectively—do not have readily accessible system tools and metrics on completeness that managers could use to track compliance. According to officials who manage CPARS, a team is developing requirements for system tools and metrics but has been challenged to develop useful measures because of a lack of complete and reliable contract information from FPDS. OFPP officials similarly acknowledged there was a lack of tools and metrics for agency contracting officials to monitor and manage the process of documenting contractor performance. For example, managers currently do not have the ability to readily identify contracts that require an assessment, how many assessments are due and past due, and who is responsible for completing them. According to these officials, holding managers accountable for outcomes without adequate tools to manage the assessment process would be difficult. However, a few contracting managers we spoke with placed a high priority on documenting contractor performance, noting that doing so tended to improve communication with contractors and encourage good performance. One Air Force commander issued guidance reiterating that CPARS is a key component in selecting contractors; that commander personally oversees the performance reporting system, requiring a meeting with responsible officials when a CPARS report is overdue. DHS officials recognized that more emphasis is needed on documenting performance assessments and told us they have included a past performance review as part of their chief procurement officer oversight program for fiscal year 2009. Other indicators that some management officials placed a high priority on documenting performance include the following: Assigning past performance focal points—some activities assigned focal points, individuals with specific responsibilities that included providing training and oversight. At two Air Force locations, focal points also reviewed performance narratives for quality. Designating assessing officials—some activities designated managers as the official assessor of contractor performance rather than contracting officers or program office officials. Deciding who should be accountable is another challenge. OFPP generally views the completion of contractor performance assessments as a contracting officer function. However, many contracting officials we talked with stated they often do not have the information required to complete an assessment and have to rely on program officials to provide it. Some contracting offices delegated responsibility for completing assessments to the program office but acknowledged that program office officials have little incentive to complete assessments because they often do not see the value in them.
We previously reported in 2005 that conducting contractor surveillance at DOD, which includes documenting contractor performance, was not a high priority and that accountability for performing contractor surveillance was lacking. Differences in the number and type of rating factors and rating scales that agencies use to document contractor performance limit the usefulness of the information in PPIRS. NASA's PPDB system has four rating factors, and the CPS database, which is used by other civilian agencies, has five rating factors. In contrast, DOD's CPARS system has a total of 16 rating factors. Each system also uses a different rating scale. Table 4 highlights these differences. Officials from GSA's Integrated Acquisition Environment, which has oversight of governmentwide acquisition systems, acknowledged that the utility of PPIRS is currently limited by the differences in rating factors and scales. Because the ratings are brought into PPIRS as-is, aggregate ratings for contractors cannot be developed—the data are too disparate. As a result, contracting officials making contract award decisions may have to open and read through many ratings to piece together an overall picture of a contractor's performance. Ultimately, the lack of this information hinders the federal government's ability to readily assess a contractor's performance at an aggregate level or to determine how overall performance is trending over time. No one agency oversees, monitors, manages, or funds PPIRS to ensure that agency data fed into the system are adequate, complete, and useful for sharing governmentwide. While GSA is responsible for overseeing and consolidating governmentwide acquisition-related systems, which include PPIRS, OFPP is responsible for overall policy concerning past performance, and DOD funds and manages the technical support of the system. In May 2000, OFPP published discretionary guidance entitled "Best Practices for Collecting and Using Current and Past Performance Information." Consistent with the FAR, this guidance stated that agencies are required to assess contractor performance and emphasized the need for an automated means to document and share this information. Subsequently, OFPP issued a draft contractor performance guide in 2006 designed to help agencies understand their roles in documenting and using contractor performance information. However, the guide was not intended to, nor does it, establish governmentwide roles and responsibilities for managing and overseeing PPIRS data. Since 2005, several efforts have been initiated to improve PPIRS and provide pertinent and timely performance information, but little progress has been made. Several broad goals for system improvement, established in 2005 by an OFPP interagency group, have yet to be met. Likewise, a short-term goal of revising the FAR to mandate the use of PPIRS by all government agencies has yet to be achieved. OFPP acknowledges that PPIRS falls short of its goal to provide useful information to contracting officials making contracting decisions. When PPIRS was established in 2002, OFPP officials envisioned it would simplify the task of collecting past performance information by eliminating redundancies among the various systems. In 2005, the Chief Acquisition Officers Council, through an OFPP interagency work group, established several broad goals for documenting, sharing, and using past performance information, including the following: Standardize different contracting ratings used by various agencies.
Provide more meaningful past performance information, including terminations for default. Develop a centralized questionnaire system for sharing governmentwide. Possibly eliminate multiple systems that feed performance information into PPIRS. However, little progress has been made in addressing these goals. According to OFPP officials, funding needs to be dedicated to address these goals and realize long-term improvements to the current past performance system. GSA officials who oversee acquisition-related systems, including PPIRS, told us that as of February 27, 2009, these efforts remained unfunded and no further action had been taken to make needed improvements. The first step in securing funding, according to OFPP and GSA officials, is mandating the use of PPIRS. However, proposed changes to the FAR that would clarify past performance documentation requirements and require the use of PPIRS have stalled. The proposed rule provides clearer instruction to contracting officers by delineating the requirement to document contractor performance for orders that exceed the simplified acquisition threshold, including those placed against GSA MAS contracts or against contracts awarded by another agency. In proposing FAR changes, OFPP focused, in part, on accountability by requiring agencies to identify individuals responsible for preparing contractor performance assessments. While the comment period for the proposed changes closed in June 2008, the changes have not been finalized. An OFPP policy official stated that the final rule is expected to be published by June 2009. With the federal government relying on many of the same contractors to provide goods and services across agencies, the need to share information on contractors' past performance in making contract award decisions is critical. While the need for a centralized repository of reliable performance information on federal contractors was identified in 2002 when OFPP implemented PPIRS, we identified several underlying problems that limit the usefulness of information in PPIRS for governmentwide sharing. These problems include a lack of accountability or incentive at agencies to document assessments in the system, a lack of standard evaluation factors and rating scales across agencies, and a lack of central oversight to ensure the adequacy of information fed into the system. Any efforts to improve the sharing and use of contractor performance information must, at a minimum, address these deficiencies. Until then, PPIRS will likely remain an inadequate information source for contracting officers. More importantly, the government cannot be assured that it has the performance information needed to make sound contract award decisions and investments. To facilitate governmentwide sharing and use of past performance information, we recommend that the Administrator of OFPP, in conjunction with agency chief acquisition officers, take the following actions: Standardize evaluation factors and rating scales governmentwide for documenting contractor performance. Establish policy for documenting performance-related information that is currently not captured systematically across agencies, such as contract terminations for default and a prime contractor's management of its subcontractors. Specify that agencies are to establish procedures and management controls, to include accountability, for documenting past performance in PPIRS. Define governmentwide roles and responsibilities for managing and overseeing PPIRS data.
Develop system tools and metrics for agencies to use in monitoring and managing the documenting of contractor performance, such as identifying contracts requiring an evaluation and information on delinquent reports. Take appropriate action to finalize proposed changes to the FAR that clarify responsibilities and performance documentation requirements for contract actions that involve orders placed against GSA's Multiple Award Schedule. To improve management and accountability for timely documenting of contractor past performance information at the agency level, we recommend that the Departments of Defense, Energy, and Homeland Security and NASA establish management controls and appropriate management review of past performance evaluations, as required and in line with any OFPP policy changes. We provided a draft of this report to OFPP, the Departments of Defense, Energy, and Homeland Security, GSA, and NASA. We received e-mail comments from OFPP, in which OFPP concurred with the recommendations. We received written comments from the other five agencies, which are included as appendixes III through VII. In their written comments, the agencies agreed with the recommendation on improving management controls, and most agencies outlined specific actions planned or taken to address the recommendation. In written comments on the draft of this report, DHS did not agree with the figures contained in table 3 of the report regarding estimated contracts requiring an assessment and the number of assessments in PPIRS for selected agencies. DHS stated that our numbers significantly understate the percentage of DHS contracts for which assessments were performed and are possibly inaccurate or misleading in how DHS compared to other agencies. DHS presented its own data and requested that we revise ours. We applied the same methodology across all civilian agencies, including DHS, and found no basis for using the numbers or methodology provided by DHS. For example, while DHS indicated we should not include delivery orders, as we state in the note under table 3, our estimates did not include individual orders issued by agencies that exceeded the threshold. Therefore, we stand by our methodology and data, which, as we stated in the report, present a conservative estimate of the contracts that required an assessment. Also, we assessed the reliability of the data we used and found them to be sufficiently reliable for the purposes of our analyses. As a result, we are not revising the figures in table 3. As noted in our report, improvements are needed across agencies in the management and accountability of timely documenting contractor past performance information. In its response, DHS agreed that significant strides need to be made in this area. In written comments on the draft of this report, GSA stated that our recommendation should be changed to show that the FAR Council, rather than agency chief acquisition officers, would be involved in developing and disseminating governmentwide acquisition policy through the FAR. According to an OFPP policy official, while the FAR Council would be involved in evaluating policy and making changes to the FAR, OFPP is responsible for overall policy concerning past performance and can make policy changes without involving the FAR Council. In line with our recommendations, this would include standards for evaluating past performance and policies for collecting and maintaining the information.
As we state in the report, the Chief Acquisition Officers Council, through an OFPP interagency work group, has already established several broad goals for documenting, sharing, and using past performance information. Our recommendations to OFPP, in coordination with this Council, are in part aimed at actions necessary to address these goals. These recommendations could be implemented through an OFPP policy memorandum and could result in changes to the FAR, which we recognize would need to be coordinated through the FAR Council as appropriate. As a result, we are not making changes to the recommendation. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this report. We will then send copies of this report to interested congressional committees; the Director of the Office of Management and Budget; the Secretary of Defense; the Secretaries of the Army, Navy, and Air Force; the Secretary of Homeland Security; the Secretary of Energy; the Administrator of the National Aeronautics and Space Administration; and the Administrator of the General Services Administration. We will also make copies available at no charge on the GAO Web site at http://www.gao.gov. If you have questions about this report or need additional information, please contact me at (202) 512-4146 or LasowskiA@gao.gov. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. See appendix VIII for a list of key contributors to this report. To assess agencies' use of information on contractors' past performance in awarding contracts, we reviewed and analyzed the Federal Acquisition Regulation (FAR) and Office of Federal Procurement Policy (OFPP) guidance on the use of past performance. We also reviewed source selection guidance for the Department of Defense (DOD), the Department of Energy (DOE), the Department of Homeland Security (DHS), the National Aeronautics and Space Administration (NASA), and the General Services Administration (GSA)—agencies accounting for a large percentage of federal contractors. To obtain agency contracting officials' views on using past performance, we used data from the Federal Procurement Data System-Next Generation (FPDS-NG) to select 11 buying offices across the agencies to provide a cross-section of buying activities. At these locations, we interviewed 121 contracting officials, including supervisory contract personnel such as division/branch contracting managers, contracting officers, and contract specialists, to discuss 1) how past performance factored into the contract award decision, 2) the sources upon which they rely for the information, 3) completing contractor performance assessments, and 4) challenges in using and sharing past performance information. To identify the importance of past performance relative to other non-cost factors in specific solicitations, we used FPDS-NG data from fiscal year 2007 and the first eight months of fiscal year 2008 to identify 62 competitively awarded contracts—49 definitive contracts and 13 orders placed against indefinite delivery vehicle contracts. We selected these contracts to represent a range of contracts across different buying activities; though not generalizable to all contract actions within these agencies, they represented a range of products and services, types of contracts, and dollar values, as shown in appendix II.
We obtained contract documents to verify the fields used in FPDS-NG to select the contracts, including type of contract and product service code, and found the data reliable enough for the purpose of selecting the contracts. For these contracts, we obtained source selection documents, including section M of each request for proposals, which described the evaluation factors for award, and the source selection decision document, which described how past performance was evaluated for each offeror. We reviewed the evaluation factors for each solicitation to identify how past performance ranked in order of importance relative to other non-cost factors in the evaluation scheme and summarized the results. To assess the extent to which selected agencies in our review complied with requirements for documenting contractor performance, we analyzed FPDS-NG and PPIRS data and used information provided by the DOD CPARS program office. In estimating the number of contracts requiring an assessment for fiscal years 2006 and 2007 for civilian agencies in our review, we aggregated contract actions in FPDS-NG for each year to identify the number of contracts that exceeded the reporting thresholds of $550,000 for construction contracts (FAR § 36.201), $30,000 for architect and engineering contracts (FAR § 36.604), and generally $100,000 for most other contracts (FAR § 2.101). We excluded contracts that are exempt from performance assessments under FAR subpart 8.7—acquisitions from nonprofit agencies employing people who are blind or severely disabled. For indefinite delivery contracts, including GSA's Multiple Award Schedule, orders were accumulated against the base contract for each agency and counted as one contract if the cumulative orders exceeded the reporting thresholds (a simplified sketch of this counting approach appears below). This analysis provides a conservative estimate of the number of contracts that require an assessment because it does not include individual orders that may exceed the threshold or contract actions that span fiscal years. For this analysis, we used the contract number and dollar obligation fields from FPDS-NG and found them sufficiently reliable for this purpose. Because DOD uses different reporting thresholds based on business sectors—information that is not available in FPDS-NG—we obtained compliance reports from the CPARS program office for fiscal years 2006 and 2007, which included estimates of the number of performance assessments that would have been required for DOD components and the number of those contracts with completed assessments. To determine the number of fiscal year 2006 and 2007 contracts with performance assessments for civilian agencies, we obtained and analyzed data from the PPIRS program office on contracts with assessments, including the number of assessments against GSA MAS contracts, as of February 26, 2009. To assess the reliability of the data provided, we accessed the PPIRS system and compared the number of contracts with assessments with those provided by the CPARS and PPIRS program offices and found the data sufficiently reliable for the purpose of our analysis. To assess the usefulness of PPIRS for governmentwide sharing of past performance information, we compared information in each of the three systems used to document contractor performance, including rating factors and rating scales.
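The contract-counting approach referenced above lends itself to a short illustration. The following is a minimal sketch in Python, assuming a hypothetical FPDS-NG extract; the file name, column names, and category labels are illustrative stand-ins, not actual FPDS-NG field names, and the thresholds are those cited in this appendix.

    import pandas as pd

    # Hypothetical FPDS-NG extract: one row per contract action.
    # Column names are illustrative assumptions, not actual FPDS-NG fields.
    actions = pd.read_csv("fpds_ng_fy2007_actions.csv")

    # Reporting thresholds cited above (FAR 36.201, 36.604, and 2.101).
    thresholds = {
        "construction": 550_000,
        "architect_engineering": 30_000,
        "other": 100_000,
    }

    # Accumulate obligations against each base contract so that orders under
    # an indefinite delivery vehicle count as a single contract.
    totals = (
        actions.groupby(["agency", "contract_number", "category"], as_index=False)
        ["dollars_obligated"].sum()
    )

    # A contract is counted as requiring an assessment if its cumulative
    # obligations exceed the threshold for its category.
    totals["threshold"] = totals["category"].map(thresholds)
    requiring_assessment = totals[totals["dollars_obligated"] > totals["threshold"]]

    # Estimated number of contracts requiring an assessment, by agency.
    print(requiring_assessment.groupby("agency").size())

Because the sketch counts cumulative orders against a base contract as one contract and ignores actions that span fiscal years, it mirrors the conservative character of the estimate described above.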
In addition, we met with agency officials who have responsibilities for managing the various systems—including the Naval Sea Logistics Center Detachment, Portsmouth, which administers CPARS and PPIRS, and officials at NASA who administer the Past Performance Database. To identify challenges that may hinder the systematic governmentwide sharing of past performance information, we interviewed contracting officials from 11 buying offices regarding a number of issues, including 1) roles in the assessment process, 2) challenges in completing assessments, 3) performance information not currently captured that might be useful for selecting contractors, and 4) use of metrics for managing and monitoring compliance with reporting requirements. Finally, we met with OFPP, GSA, and DOD officials to discuss the extent of oversight of PPIRS data and roles and responsibilities as applicable. To assess efforts under way or planned to improve the sharing of information on contractor performance, we obtained and reviewed memorandums, plans, and other documents produced by OFPP, including proposed FAR changes and proposed past performance guidelines. We met with officials from these offices to discuss challenges already identified in sharing and using past performance information, goals they may have established for improving the system, and the status of efforts to address them. Our work was conducted at the following locations: OFPP, Washington, D.C.; GSA, Arlington, Va.; the Air Force Space and Missile Systems Center, El Segundo, Calif.; Hill Air Force Base, Ogden, Utah; the Army Communications and Electronics Command, Fort Monmouth, N.J.; the Army Sustainment Command, Rock Island, Ill.; the Army Contracting Command, Fort Belvoir, Va.; the Naval Air Systems Command, Patuxent River, Md.; the Naval Sea Systems Command, Washington, D.C.; the Defense Contract Management Agency, Arlington, Va.; DHS, including Customs and Border Protection, Washington, D.C., and the Transportation Security Administration, Arlington, Va.; NASA, including the Goddard Space Flight Center, Greenbelt, Md., and the Johnson Space Center, Houston, Tex.; and DOE, including the National Nuclear Security Administration Service Center, Albuquerque, N.M. We conducted this performance audit from February 2008 to February 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the individual named above, Ann Calvaresi Barr, Director; James Fuquay, Assistant Director; Usman Ahmad; Jeffrey Barron; Barry DeWeese; Julia Kennon; Flavio Martinez; Susan Neill; Karen Sloan; Sylvia Schatz; and Bradley Terry made key contributions to this report.
In fiscal year 2007, federal agencies worked with over 160,000 contractors, obligating over $456 billion, to help accomplish federal missions. This reliance on contractors makes it critical that agencies have the information necessary to properly evaluate a contractor's prior history of performance and better inform agencies' contract award decisions. While actions have been taken to improve the sharing and use of past performance information—including the development of the Past Performance Information Retrieval System (PPIRS)—concerns remain about this information. This report assesses agencies' use of past performance information in awarding contracts, identifies challenges that hinder systematic sharing of past performance information, and describes efforts to improve contractor performance information. In conducting this work, GAO analyzed 62 contract solicitations from fiscal years 2007 and 2008 and met with 121 contracting officials. While the solicitations represent a range of contracts and contractors, GAO's findings cannot be generalized to all federal contracts. Agencies considered past performance information in evaluating contractors for each of the 62 solicitations GAO reviewed. Generally, factors other than past performance, such as technical approach or cost, were the primary factors for contract award decisions. A majority of officials told GAO their reluctance to rely more on past performance was due, in part, to their skepticism about the reliability of the information and difficulty assessing its relevance to specific acquisitions. Contracting officials agreed that for past performance information to be useful for sharing, it must be documented, relevant, and reliable. However, GAO's review of PPIRS data for fiscal years 2006 and 2007 indicates that only a small percentage of contracts had a documented performance assessment; in particular, GAO found little contractor performance information for orders against the General Services Administration's (GSA) Multiple Award Schedule. Other performance information that could be useful in award decisions, such as contract terminations for default and subcontract management, was not systematically captured across agencies. Some officials noted that a lack of accountability and a lack of system tools and metrics made it difficult for managers to ensure timely performance reports. Variations in evaluation and rating factors have also limited the usefulness of past performance information. Finally, a lack of central oversight and management of PPIRS data has hindered efforts to address these and other shortcomings. Several efforts have been initiated to improve PPIRS, but little progress has been made. In 2005, an interagency work group established several broad goals for improving past performance information, including standardizing the performance ratings used by various agencies. However, these goals have yet to be met, and no funding has been dedicated for this purpose. In April 2008, changes to federal regulations were proposed that would clarify past performance documentation requirements and require the use of PPIRS. However, as of February 2009, the proposed changes had not been finalized.
The Bureau of Land Management (BLM), the Bureau of Safety and Environmental Enforcement (BSEE), and the Bureau of Ocean Energy Management (BOEM) are directly overseen by the Assistant Secretary for Land and Minerals Management, who is responsible for guiding the Department of the Interior's (Interior) management and use of federal lands and waters and their associated mineral and nonmineral resources. In addition, human capital programs at the bureaus and elsewhere in the department are overseen by Interior's Assistant Secretary of the Office of Policy, Management and Budget, which is broadly responsible for employee training and development; part of the office's mission is providing high-quality, innovative, efficient, and effective training. The Office of Policy, Management and Budget comprises multiple offices, including the Office of Human Resources, which has primary responsibility for evaluating the effectiveness of Interior's personnel management program, and the Office of Strategic Employee and Organization Development, which is responsible for delivering efficient and effective training across the department. In fiscal year 2014, BLM, BSEE, and BOEM employed over 900 key oil and gas staff who oversee onshore and offshore oil and gas activities. Onshore land use planning is handled by BLM's petroleum engineers, natural resource specialists, geologists, and other scientists. Offshore resource planning is handled by BOEM's petroleum engineers, geoscientists, and other specialists. Operators that are awarded leases for oil and gas development can then submit to BLM (onshore) or BSEE (offshore) an application for a permit to drill. Petroleum engineers, inspectors, natural resource specialists, geologists, and other scientists review and approve applications for permits to drill. The application for a permit to drill contains a detailed set of forms and documents that specify requirements that the operator must follow when drilling. Once operators' oil and gas operations commence, BLM and BSEE inspectors, petroleum engineers, and natural resource specialists carry out a variety of oil and gas inspections. For example, BLM's inspectors conduct production inspections, drilling inspections, and environmental compliance inspections. Similarly, BSEE inspectors conduct drilling and production inspections to ensure that operators comply with all regulatory requirements. However, Interior and others have stated that offshore inspections in a marine environment are generally more complex and difficult than onshore inspections and require helicopters or boats to reach inspection sites, making the planning and performance of duties more difficult and hazardous. Further, offshore facilities have large amounts of equipment and personnel in relatively confined spaces; more sophisticated safety systems and requirements; high production volumes, pressures, and temperatures; and more limited access to some equipment and piping, especially in deepwater areas that are far from shore. In addition to GAO, Interior's Inspector General and the Outer Continental Shelf Safety Oversight Board have reported on Interior's challenges related to the hiring and retention of such key oil and gas staff. For example, Interior's Inspector General concluded in December 2010 that the Bureau of Ocean Energy Management, Regulation and Enforcement (BOEMRE)—which was replaced by BSEE and BOEM in 2011 and which oversaw offshore oil and gas activities—faced considerable hiring challenges in the Pacific Region because of increased hiring by the oil and gas industry in that area, given the industry's significant salary advantage over federal service.
In addition, the report found that engineers in BOEMRE's Gulf of Mexico Region had to work extra hours to keep up with increased workloads because of staffing shortages, resulting in their inability to attend training or take annual leave. It stated that continued shortages could lead to significant employee burnout and the possibility of less comprehensive reviews as employees attempted to keep pace with demands. In a second 2010 report, Interior's Inspector General reported that BLM risked losing its trained inspectors because oil and gas operators commonly recruit BLM inspectors by offering high salaries during successful business periods. In that report, the Inspector General recommended, among other things, that BLM consider developing and implementing a continued service agreement requiring newly certified inspectors to stay with the bureau for a specified period of time. Further, the Outer Continental Shelf Safety Oversight Board reported in 2010 that Interior did not have a formal program to train its inspectors. The board also noted in that report that almost half of the offshore inspectors it surveyed said they did not receive sufficient training. Further, BOEMRE did not have an inspection certification program that combined classroom and on-the-job experience, as well as a formal technical review or exam. By contrast, the report pointed out that BLM had a certification program that combined classroom instruction, on-the-job experience, and a formal technical review or exam. The board recommended, among other things, that Interior implement a bureau-wide certification or accreditation program for inspectors; consider partnering with BLM and its National Training Center to establish an Interior oil and gas inspection certification program, with training modules appropriate to the offshore environment as needed; develop a standardized training program similar to those of other Interior bureaus to ensure that inspectors are knowledgeable in all pertinent regulations, policies, and procedures; and ensure that annual training keeps inspectors up-to-date on new technology, policies, and procedures. Interior's Inspector General came to similar conclusions and made similar recommendations in 2010. To address hiring and retention challenges, the federal government has a variety of tools available. For example, to address staffing problems caused when nonfederal employers pay significantly higher salaries than the federal government pays, an agency may request special salary rates from the Office of Personnel Management (OPM) that establish higher minimum rates of basic pay for positions in one or more geographic areas. Agencies may also use incentive payments to recruit and retain employees. Incentive payments can come in the form of recruitment incentives, retention incentives, and relocation incentives. Recruitment incentives can be paid to new employees in certain difficult-to-fill positions; retention incentives can be paid to certain current employees holding high or unique qualifications; and relocation incentives can be paid to certain current employees who must relocate to accept a position in a different geographic area and whose position is difficult to fill. To receive an incentive payment, the employee must agree to complete a specified period of service with the agency.
In general, total incentive payments may not exceed 25 percent of the employee's original annual rate of basic pay multiplied by the number of years of service the employee agrees to complete. Through the Student Loan Repayment Program, agencies may also repay federally insured student loans in order to recruit or retain highly qualified candidates or employees. Under this program, agencies may make payments to the loan holder of up to $10,000 for an employee in a calendar year and not more than $60,000 in total for any one employee. Employees receiving this benefit must sign an agreement to remain in the service of the agency for at least 3 years. Federal agencies can use special salary rates, incentive payments, and student loan repayments in combination to increase an employee's overall compensation. Since 2012, Interior has taken steps to resolve its hiring and retention challenges for key oil and gas staff, but it has not evaluated the effectiveness of its efforts. In addition, Interior has missed opportunities to facilitate collaboration among the bureaus, and as a result, the bureaus have sometimes acted in a fragmented, overlapping, and potentially duplicative fashion to resolve similar hiring and retention challenges. Since 2012, Interior has taken steps to address two underlying factors—lower salaries and a lengthier hiring process compared with the oil and gas industry—that have impeded its ability to hire and retain key oil and gas staff, but it has not evaluated the effectiveness of its efforts. Interior has increased the compensation for certain key oil and gas staff through the use of special salary rates, incentive payments, and student loan repayments since fiscal year 2012, but the department has not evaluated the effectiveness of this compensation in resolving its hiring and retention challenges. During fiscal years 2012 through 2016, Interior had special salary rates, authorized by Congress in annual appropriations acts, that allowed it to pay certain staff up to 25 percent more than their basic pay. Interior stated that in 2013 the Office of Policy, Management and Budget met with officials from OPM, the U.S. Department of Agriculture, the Department of Defense, and the U.S. Army Corps of Engineers to discuss the impacts of expanding oil and gas extraction activities on their recruitment and retention efforts. Interior also stated that the Office of Policy, Management and Budget worked with officials from BLM, BSEE, and BOEM to (1) ensure that the three bureaus had the capacity to fund special salary rates through the budget process, (2) develop an integrated special salary rate request to OPM, and (3) issue guidance that would provide instruction to human resource officials and hiring managers on its use. Further, Interior stated that, beginning in fiscal year 2013, the Office of Policy, Management and Budget submitted applications to OPM requesting to increase the base salaries for staff in certain positions and geographic locations through a special salary rate. In fiscal years 2015 and 2016, OPM approved Interior's requests to provide key oil and gas staff in 11 states up to 35 percent more than their basic pay. In addition, some of the bureaus increased compensation through other tools, such as incentive payments and student loan repayments. For example, for fiscal years 2012 through 2014, BLM and BSEE substantially increased the number of staff receiving a retention incentive payment, from a total of 14 to a total of 346 employees.
During the same period, BSEE and BOEM increased the number of staff receiving a student loan repayment from 25 to 66 employees. (See fig. 3.) As noted earlier in this report, employees receiving incentive payments and student loan repayments must sign an agreement to remain working for the agency for a certain period of time. Service agreements, in addition to the actual monetary payment, may also play a role in retaining staff; a sketch of how the statutory limits on these payments translate into dollar amounts follows this discussion. Officials from the three bureaus said that these efforts to increase the compensation paid to key oil and gas staff, along with the industry downturn that reduced private sector hiring, had likely helped them fill vacancies. In May 2015, BLM officials said that, anecdotally, the incentive payments and special salary rates had proven somewhat effective and were particularly helpful in recruiting and retaining inspectors. Similarly, in May 2015, BSEE officials said that they had hired more staff in the first part of fiscal year 2015 than in fiscal year 2014, although they noted that they had the most difficulty recruiting petroleum engineers and inspectors in the Gulf of Mexico Region because the pool of prospective candidates was smaller than for other positions. BSEE officials also said that while they lost a fair number of staff in fiscal year 2014, many of those who left did so because of retirements. Senior BOEM officials also reported success in hiring staff and said that as of May 2015 the bureau was fully staffed; however, several months later, BOEM officials in the Gulf of Mexico Region did report some vacancies. Senior BOEM officials said they had the most difficulty recruiting petroleum engineers, geologists, and geophysicists. Outside of these anecdotal observations, Interior and the bureaus have not evaluated whether these efforts, and the specific tools they used, were effective in hiring and retaining staff. In prior work, we have found that strategic workforce planning requires evaluation of an agency's progress toward its human capital goals. In November 2014, Interior senior officials told us that they would implement a performance measure framework to evaluate the effectiveness of incentives on a quarterly basis beginning in April 2015. However, as of July 2016, a senior official from the Office of Policy, Management and Budget said these quarterly reviews had not yet begun. In September 2016, officials said they had developed initial performance metrics and gathered data for the first three quarters of fiscal year 2016 and would continue to track and monitor the data on a quarterly basis. However, the agency has not yet used these data to evaluate the effectiveness of incentives. In the absence of these evaluations, Interior cannot determine the extent to which the tools it is using are effective in meeting its goals of hiring and retaining key staff or whether it is expending funds on tools that are not the best use of its limited resources. In addition, without regular evaluations, Interior may not have the information it needs to determine if or how it should alter the tools it uses as the oil and gas market shifts, potentially increasing Interior's competition with industry for oil and gas staff. Bureau officials acknowledged that retaining newly hired staff may prove difficult when oil and gas market conditions change again and companies increase their hiring efforts.
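The statutory limits on these compensation tools lend themselves to simple arithmetic. The following is a minimal sketch, assuming a hypothetical employee; the salary figure and service period are illustrative, while the 25 percent incentive cap and the $10,000-per-year and $60,000-total loan repayment limits come from the rules summarized earlier in this report.

    # Compensation limits described earlier in this report. The example
    # employee (basic pay and service period) is hypothetical.

    def max_total_incentive(basic_pay, service_years):
        # Total incentive payments may not exceed 25 percent of the original
        # annual rate of basic pay multiplied by the years of service the
        # employee agrees to complete.
        return 0.25 * basic_pay * service_years

    def max_loan_repayment(service_years):
        # Loan repayments are capped at $10,000 per calendar year and
        # $60,000 in total for any one employee.
        return min(10_000 * service_years, 60_000)

    # A hypothetical engineer with $90,000 basic pay signing a 3-year agreement.
    print(max_total_incentive(90_000, 3))  # 67500.0
    print(max_loan_repayment(3))           # 30000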
In April 2016, BLM officials noted that while there have been some market-based changes that have proved advantageous to the bureaus' hiring efforts in some locations, a resurgence in private sector demand for qualified petroleum engineers and inspectors remains likely over the next 12 to 18 months. BLM further noted that since it takes 12 to 18 months to recruit, train, and certify entry-level petroleum engineers and inspectors, losing these staff after they are hired and trained could undermine much of the progress the bureau had made. Because of the importance of key staff for Interior's oversight of oil and gas development, we developed a statistical model to examine the main factors associated with the likelihood that federal employees in key positions—petroleum engineers, inspectors, geologists, geophysicists, natural resource specialists (or biologists), and environmental protection specialists—would leave those positions. While not definitive, the model illustrates the type of analysis that Interior could potentially perform itself—using more detailed and current data—to evaluate the effectiveness of specific tools in retaining key oil and gas staff. For our analysis, we used data mainly from OPM's Enterprise Human Resources Integration (EHRI) data set, which contains personnel data for civilian federal employees. We supplemented our analysis with data from BLM so that we could identify employees in key positions who were responsible for oil and gas oversight. We used data on approximately 29,000 federal employees throughout the federal government, all of whom were hired into one of the key oil and gas positions during fiscal years 2003 through 2014. Our model estimated the effect that differences in salaries and other compensation had on the likelihood that a federal employee would leave his or her position, while controlling for factors such as the employee's age, gender, geographic location, and length of time working in that position. We also examined the effect of the performance of the oil and gas market on employee retention. Our results showed that federal employees who received higher adjusted basic pay (which could include a special salary rate), retention payments, student loan repayments, and other additional compensation were less likely to leave than their counterparts working in the same positions who did not receive such compensation. We also found that when the oil and gas market was performing well, federal employees in these positions were more likely to leave their positions. Specifically, for federal employees working in key oil and gas oversight positions, we found the following: Higher adjusted basic pay was significantly associated with a lower likelihood of leaving, with each additional $1,000 reducing the relative odds of leaving by about 2.0 percent. All the categories of other compensation in our model—retention payments, student loan payments, cash awards, and time-off awards—were significantly associated with a reduced likelihood of leaving. Among these categories, the strongest effects were from retention and student loan payments. A higher growth rate of the oil and gas market was significantly associated with a higher likelihood of employees leaving their positions. Interior officials we interviewed said that they have difficulty retaining key employees when the oil and gas market is performing well, and our results support this assertion.
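Although our exact model specification is not reproduced here, the type of analysis described above can be sketched with standard statistical tools. The following is a minimal, hypothetical example of a discrete-time attrition model fit to person-year records; the file name, column names, and covariates are illustrative assumptions, not Interior's actual data or our precise specification.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical person-year records: one row per employee per fiscal year,
    # with left = 1 in the year the employee leaves the position.
    records = pd.read_csv("person_years.csv")

    # Pay is expressed in thousands of dollars so the coefficient reads as the
    # effect of each additional $1,000 of adjusted basic pay.
    model = smf.logit(
        "left ~ pay_thousands + retention_pay + loan_repayment + cash_award"
        " + market_growth + age + C(gender) + C(location) + tenure_years",
        data=records,
    ).fit()

    # Odds ratios: a value near 0.98 for pay_thousands would correspond to the
    # roughly 2 percent reduction in the relative odds of leaving per $1,000
    # reported in this section.
    print(np.exp(model.params))

With Interior's more current and detailed payment data substituted for the hypothetical file, the same approach could be rerun periodically to track how the effects of specific tools change as the oil and gas market shifts.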
Conversely, slower growth of the oil and gas market was associated with fewer employees leaving their positions. Our analysis also showed that natural resource specialists, biologists, and environmental protection specialists were more likely than inspectors to leave their positions. In addition, our analysis showed that BSEE and BLM employees were more likely to leave their positions than federal employees working in the same positions in other federal agencies and other Interior bureaus. This effect was stronger at BSEE than at BLM, with BSEE employees responsible for oil and gas oversight being 50 percent more likely to leave than their counterparts at BLM. However, our results are based on EHRI data from fiscal years 2003 through 2014, the most current EHRI data available to us at the time of our analysis. In comparison, Interior has other data available to it that are more current and detailed. For example, Interior has access to current fiscal year information, which is not yet available in EHRI, on the types and amounts of payments it has given its employees; this information would allow the department to conduct a more thorough and precise evaluation of the effect of these payments on the retention of key oil and gas staff. Each of the three bureaus has taken steps to begin to address its lengthy hiring process. For example, in 2015 the three bureaus adopted new human resources software that officials said will provide them with better data to track their hiring process. In June 2016, officials from the three bureaus said that they had started analyzing data extracted from this new system to identify steps in the hiring process that may be causing delays. Also in 2016, BSEE and BOEM issued new hiring process guidance to clarify steps in the hiring process for their managers. BSEE and BOEM also provided multiple training classes on the new guidance to ensure that managers understood the process. In addition, in a July 2015 memorandum, BOEM summarized the results of an analysis of its hiring process and identified some improvements that could be made. However, in reviewing the analysis, we identified problems with the data used, such as missing and inaccurately recorded dates. In June 2016, a senior official from Interior's Office of Policy, Management and Budget said that the office was aware of the bureaus' efforts to analyze their hiring process time. Officials from the three bureaus said that their hiring processes continue to exceed OPM's goal of 80 days. Some bureau officials also told us that their hiring process sometimes took as long as 190 days. As noted previously, we recommended in January 2014 that Interior systematically collect data on hiring times for key oil and gas positions, ensure the accuracy of the data, and analyze the data in order to identify the causes of delays and expedite the process. However, senior officials from the Office of Policy, Management and Budget did not indicate any plans to look across the bureaus' efforts in order to help address their shared challenge of a lengthy hiring process. In the absence of such action to address the bureaus' lengthy hiring processes, they may be losing qualified applicants who accept other jobs. We continue to believe that having accurate hiring data and finding ways to reduce the lengthy hiring process are important steps toward resolving Interior's hiring challenges and may prove especially important if the oil and gas market shifts.
Interior’s Office of Policy, Management and Budget has missed opportunities to facilitate collaboration across the three bureaus in addressing their shared challenges in hiring and retaining staff. For example, officials from this office said that they assembled the three bureaus’ requests to OPM for a special salary rate, but we found that they did not facilitate collaboration among the bureaus about which staff should receive a special salary rate. BOEM officials requested the 35 percent special salary rate for certain key oil and gas staff but did not request this special salary rate for the bureau’s biologists (also referred to as natural resource specialists). In contrast, BLM requested this 35 percent special salary rate for its natural resource specialists along with other positions. BOEM regional managers said that they were not aware that BLM was requesting the special salary rate for its natural resource specialists and did not know that they could request the special salary rate for these staff. BOEM managers said that they learned of this after OPM had already approved these requests. Some of these managers said that had they known BLM was going to request a special salary rate for its natural resource specialists, they probably would have done so too. Some officials said that the bureaus compete with each other for the same pool of applicants and staff. The fact that BLM can pay a natural resource specialist 35 percent more than BOEM may place BOEM at a disadvantage in its recruitment efforts and its ability to retain staff if its natural resource specialists leave to take a comparable position at BLM. In addition, BOEM may also be particularly vulnerable to losing its natural resource specialists to industry, based on the results of our statistical model and comments from BOEM managers, both of which indicated that these staff were more likely to leave their positions relative to other key oil and gas staff. Senior officials in Interior’s Office of Policy, Management and Budget did not identify any collaboration mechanisms that they used to bring the three bureaus together to discuss their shared human capital challenges. These officials said the bureaus’ senior managers interact through the meetings of the Deputies Operating Group and Principals Operating Group. However, in our review of the topics discussed by these groups in fiscal year 2015, we found that the bureaus’ hiring and retention challenges were not discussed. In prior work, we have found that collaborative efforts can enable organizations to produce more public value than they could produce acting alone. To facilitate collaboration, agencies can use a variety of mechanisms, such as interagency groups, communities of practice, and liaison positions. Further, as we have concluded in prior work, leadership is a necessary element for successful collaborative working relationships. Officials from the three bureaus said that they do not have a mechanism, such as a workgroup, in place to collaborate with each other on their shared hiring and retention challenges. In the absence of such a collaboration mechanism, the bureaus have sometimes acted in a fragmented, overlapping, and potentially duplicative fashion to resolve similar hiring and retention challenges.
For example, some members of the BSEE and BOEM recruitment teams told us that while they sought to hire staff with similar skills, they participated in recruitment events, such as job fairs, separately and did not give prospective applicants information about career opportunities available at the other bureaus. Officials also said the fact that the bureaus maintained separate recruitment tables was confusing to prospective applicants. Some officials noted that greater collaboration could be useful. For example, some BOEM officials said it would be beneficial if the bureaus had a single booth that could represent all the job opportunities at Interior because the broader range of opportunities and locations might generate more interest among prospective applicants. However, without further leadership from the Office of Policy, Management and Budget to create or use an existing mechanism to facilitate collaboration in addressing hiring and retention, the bureaus may continue to address their shared challenges through fragmented and potentially duplicative efforts, which can waste resources. Interior and its bureaus have trained key oil and gas staff without fully evaluating the bureaus’ staff training needs or the training’s effectiveness, according to officials, and Interior has provided limited leadership in facilitating the bureaus’ sharing of training resources. Specifically, Interior has not evaluated training needs or effectiveness as required by law and regulations, according to officials, and its bureaus have not evaluated training needs or effectiveness as directed by departmental policy. Further, Interior’s Office of Policy, Management and Budget has provided limited leadership in facilitating the sharing of training resources across the bureaus, appearing to miss opportunities that could improve the use of these resources. Interior’s Office of Policy, Management and Budget has not evaluated the three bureaus’ training efforts, contrary to federal law and regulations, according to officials. The Federal Workforce Flexibility Act of 2004 requires agencies to regularly evaluate their training at the department level with respect to accomplishing specific performance plans and strategic goals in performing the agency mission and then modify the training as needed. Similarly, OPM has stated that training and the effective evaluation of training are critical within the federal government, and OPM regulations require agencies to evaluate their training programs annually to identify training needs and assess how well training efforts contribute to accomplishing the agency mission. However, senior officials from the Office of Policy, Management and Budget said that they have not performed these annual evaluations of the bureaus’ staff training needs. In addition, senior officials from this office said they have not requested or received these annual training evaluations from the bureaus even though Interior’s Departmental Manual states that bureaus should conduct such evaluations and submit them to the office. These officials explained that they thought that the 2008 Departmental Manual was old and needed to be revised. However, based on our review of the manual and discussion with an official in Interior’s Office of the Solicitor, we determined that the manual is still in effect. Similarly, the bureaus have not evaluated their oil and gas staff’s training needs to the extent directed by Interior’s policies, according to officials. 
For example, as noted above, Interior’s Departmental Manual directs each bureau to conduct an annual evaluation of its training program; these evaluations are to determine if the program is effectively meeting identified needs. The manual also states that training programs should identify and address competency gaps, including for technical competencies. Similarly, our guide for assessing training efforts in the federal government states that well-designed training programs are linked to agency goals and to the skills and competencies needed for the agency to perform effectively. However, none of the bureaus have consistently evaluated training needs, according to officials, and only one of the bureaus has developed competencies for its key oil and gas staff. The bureaus’ efforts to evaluate training needs and develop competencies include the following:

- BLM most recently evaluated training needs for its oil and gas staff in 2012 and 2013. BLM evaluated the training needs for its natural resource specialists and environmental protection specialists in 2012, followed by its petroleum engineers, inspectors, and geologists in evaluations that spanned 2012 and 2013. In so doing, BLM did not follow the direction of Interior’s Departmental Manual to conduct annual evaluations. In addition, BLM has not developed technical competencies for its oil and gas staff per OPM and Interior definitions.

- BSEE has not formally evaluated the training needs of its key oil and gas staff, according to officials. Instead, BSEE officials told us that these training needs are discussed by managers, subject matter experts, and other staff, who use this information to identify training courses for staff to take. In addition, BSEE has not developed technical competencies for its key oil and gas staff per OPM and Interior definitions.

- BOEM has relied on the offices within its three regions to implement its training efforts and on individual supervisors to evaluate training needs, but the bureau has not formally evaluated the training needs of its key oil and gas staff bureau-wide, according to BOEM officials. These supervisors evaluate the training needs of individual employees at the beginning of each fiscal year, and BOEM seeks to address those needs through vendor-based training, training taught by BOEM staff, and mentoring, according to officials. BOEM has, however, developed competencies per OPM and Interior definitions for its geologists, geophysicists, and petroleum engineers by using ones already published by other sources.

Officials from each of the bureaus told us they have not performed annual evaluations of their training needs because officials from the Office of Policy, Management and Budget have not requested them. Without evaluating training needs and developing competencies, Interior cannot ensure that the training it provides for key oil and gas staff is linked to the competencies needed for the agency to perform effectively and that the training addresses any competency gaps.

The bureaus also have not evaluated the effectiveness of the training provided to their key oil and gas staff as directed by Interior’s Departmental Manual. The manual states that all formal training courses sponsored by departmental bureaus or offices are expected to be evaluated, and it recommends that bureaus use a five-level evaluation system to assess the effectiveness of their training, with targets for the percentage of courses that should be evaluated at each level. (See fig. 4.)
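Interior’s five-level system parallels the widely used Kirkpatrick/Phillips framework for evaluating training. The summary below is our characterization, based on that framework and on the level descriptions that appear elsewhere in this report; the percentage targets for each level appear in figure 4 and are not restated here.

Level 1 (reaction): measures student satisfaction and identifies ways to improve the course.
Level 2 (learning): measures whether students acquired the intended knowledge and skills, for example through proficiency examinations.
Level 3 (behavior): measures how training affected on-the-job behavior and skills.
Level 4 (results): measures the impact of training on staff’s job performance.
Level 5 (return on investment): compares program benefits to training costs.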
For example, the guidance recommends that all training courses receive level 1 evaluations, which measure student satisfaction and identify ways to improve the training; successively smaller percentages of courses are recommended to receive successively higher levels of evaluation. Officials from each of the bureaus told us they have not fully evaluated the effectiveness of their training efforts because they either did not have staff to perform such evaluations or Interior did not request them. Collectively, the bureaus conducted varying levels of evaluations, and none reported doing evaluations above level 3, as discussed below:

- BLM conducts level 1 and 2 evaluations for each course, as well as level 3 evaluations and proficiency examinations for certain courses, according to BLM officials. For example, for its inspector certification training program, which comprises six modules, each inspector is to pass a proficiency examination and complete related field work, according to a BLM handbook. BLM’s inspectors must demonstrate proficiency in each module before they can progress to the next module, according to BLM officials. Following the successful completion of all six modules, inspectors are eligible for certification and, once certified, they are allowed to issue citations to operators when appropriate.

- BSEE conducts level 1 evaluations for all of its training, and its vendors conduct level 2 evaluations to some extent, but not to the extent directed by the Departmental Manual, according to BSEE officials. In addition, BSEE has not developed competencies for its inspectors and does not conduct level 3 evaluations for its inspectors to measure how training affected behavior and skills, according to officials. Further, BSEE’s training for inspectors does not include proficiency examinations or certifications, according to officials, as BLM’s training program does. BSEE officials told us that they have not implemented a certification program, although the Outer Continental Shelf Safety Oversight Board and Interior’s Inspector General recommended one in 2010. By conducting such evaluations and requiring these examinations for certification of inspectors, BSEE could ensure that its inspectors learned and could apply content received in training courses (i.e., were adequately trained). In the absence of such evaluations, BSEE may not be able to verify that its inspectors are adequately trained. BSEE officials told us that they planned to obtain two independent evaluations of their training efforts. According to these officials, the first evaluation, which will review whether the training currently offered to engineers is sufficient, was tentatively scheduled to start in July 2016. The second evaluation will review the bureau’s approach to identifying competencies, training, and possible certification requirements for inspectors and, according to officials, the contract for that work should be awarded by December 2016. As of June 2016, BSEE officials told us that they were finalizing their efforts to initiate the first evaluation and were planning to complete a statement of what work would be included in the second evaluation.

- BOEM conducts level 1 evaluations when requested by vendors, but BOEM did not report conducting higher-level evaluations. In addition, BOEM officials stated that BOEM does not systematically evaluate training provided by internal BOEM staff, vendors, or others because the bureau does not have staff assigned to training, such as to develop training curricula or evaluate training efforts.
None of the bureaus reported conducting level 4 or 5 evaluations, which would give the bureaus information about the overall effectiveness of their training efforts by measuring the impact of training courses on staff’s job performance and comparing program benefits to training costs. During our review, key oil and gas staff we interviewed told us that some courses provided for inspectors were not always effective. For example, BSEE inspectors at four local offices told us in September 2015 that the training courses BSEE provided them, which were primarily led by contractors, did not adequately prepare them to perform inspections because the courses focused on how equipment operates and did not teach them how to inspect the equipment. Similarly, managers from four BSEE offices told us that inspector courses were not entirely relevant and not tailored to inspectors’ responsibilities. For example, one manager said that these training courses do not familiarize inspectors with information they need to perform inspections, such as what to look for when inspecting the equipment. A BSEE training official told us in January 2016 that she had heard this same feedback. In response, BSEE created an extra day of training for some courses, such as its Cranes and Rigging Inspections course, that would be led by a BSEE instructor, not a contractor, who would teach the inspectors how to inspect the equipment covered in these courses. Without evaluating its bureaus’ training efforts, Interior may not be able to ensure that its key oil and gas staff are being adequately trained to execute their oversight tasks, and it may not be spending training funds effectively and efficiently.

Interior’s Office of Policy, Management and Budget has provided limited leadership in facilitating the sharing of training resources across the bureaus. The Office of Strategic Employee and Organization Development (housed within the Office of Policy, Management and Budget) has objectives that include improving training across the bureaus and facilitating the sharing of training resources, such as training staff expertise and course curricula. However, we identified areas where it appears that the Office of Strategic Employee and Organization Development has missed opportunities to improve the bureaus’ training efforts and facilitate the sharing of training resources. For example, BOEM, which is the smallest of the three bureaus, does not have staff assigned to developing curricula or evaluating training efforts across the bureau and, as discussed earlier, it therefore relies on external vendors for training and evaluates the training when requested by the vendors. In addition, BSEE, which had 6 full-time staff in its Offshore Training Program as of July 2016, according to officials, also relies on external vendors for training and for conducting level 2 evaluations. In contrast, as of July 2016, BLM had 59 full-time staff in its National Training Center and has the capacity to evaluate its training efforts, according to officials. In 2010, the Outer Continental Shelf Safety Oversight Board and Interior’s Inspector General recognized strengths in BLM’s training program for inspectors and recommended that BSEE and BLM consider partnering to establish an Interior-wide inspection certification program. However, neither Interior’s Office of Policy, Management and Budget nor the bureaus evaluated the need for or viability of a joint inspector certification training program, according to officials.
Similarly, Interior’s Office of Policy, Management and Budget has not pursued potential opportunities for BOEM and BSEE to share training resources, according to officials. Recognizing that BOEM is a smaller bureau than BSEE, and recognizing the benefits of economies of scale, BOEM has arranged since 2011 to have BSEE’s human resources department service BOEM for select human resource functions, but not training, according to a senior BOEM official. In January 2016, officials from the Office of Policy, Management and Budget said that they were in favor of BOEM using BSEE’s training program, but they had not yet taken any steps toward encouraging such collaboration to facilitate the sharing of resources. In addition, to develop training courses specific to their bureau, BSEE training officials said they would need curriculum developers, which they do not have. As a result, BSEE officials said they rely almost exclusively on external off-the-shelf courses taught by contractors. In contrast, BLM’s training center has about six full-time curriculum developers, according to officials. BLM training officials said that these curriculum developers would be able to develop training curricula for BSEE if they worked alongside subject matter experts from BSEE. However, officials told us that the Office of Policy, Management and Budget has not taken any steps to encourage collaboration in this area.

Senior officials from the Office of Policy, Management and Budget acknowledged that their office had not effectively facilitated the sharing of training resources across the bureaus as of June 2016. As we mentioned earlier, we found in prior work that to facilitate collaboration, agencies can use a variety of mechanisms, such as interagency groups, communities of practice, and liaison positions; that leadership is a necessary element for successful collaborative working relationships; and that collaborative efforts can enable organizations to produce more public value than they could produce acting alone. In January 2016, a senior official from the Office of Policy, Management and Budget said that the office’s focus in the previous fiscal year had been to assist the bureaus in obtaining a special salary rate for their key oil and gas staff. Another senior Interior official said that in January 2016 the Interior Training Directors Council (composed of senior training officials across Interior) would begin reviewing training across the bureaus and seek to identify opportunities to share training resources. According to its charter, the goal of the council is to facilitate a partnership across the bureaus in order to maximize the effectiveness and efficiency of training efforts throughout the Department of the Interior. In March 2016, the council, which had previously operated as a community of practice since 2001, shifted to a more formal structure that would allow it to develop policy and make recommendations to Interior’s Human Capital Officers, according to a senior official. However, as of June 2016, officials had not reported any progress made by the council, and it is unclear what, if any, steps the office has taken to review training and identify opportunities to share training resources.
Without further leadership from the Office of Policy, Management and Budget to create or make better use of an existing mechanism that effectively facilitates collaboration across the bureaus and helps them identify opportunities to share training resources, Interior and its bureaus may not be spending training funds effectively and efficiently.

Since 2012, Interior has taken steps toward resolving its challenges in hiring and retaining key oil and gas staff, who are the front line in providing effective oversight of activities related to federal oil and gas resources. Notably, to hire and retain such staff, Interior’s bureaus have invested increasing resources in compensating them through special salary rates, incentive payments, and student loan repayments, which are tools that can help bridge the gap between federal salaries and those paid by industry. We recommended in January 2014 that Interior explore the expanded use of existing authorities, such as recruitment incentives, and develop clear guidance for how the effectiveness of their use will be assessed, among other things. Interior has partially responded to this recommendation through its increased use of incentives, but it has not evaluated their effectiveness. Interior also has not evaluated the effectiveness of other tools, specifically the special salary rates and student loan repayments. We developed a statistical model that Interior could expand upon to analyze the effectiveness of specific tools. In the absence of such evaluations, Interior cannot know the extent to which the increased use of incentive payments, special salary rates, and student loan repayments has been effective in hiring and retaining key staff. In addition, without regular evaluation, Interior may not have the information it needs to determine if or how it should alter its approach when the oil and gas market shifts and industry begins hiring more employees, potentially increasing Interior’s competition with industry for oil and gas staff.

Further, Interior continues to face a lengthy hiring process, according to officials. In January 2014, we also recommended that Interior systematically collect data on hiring times for key oil and gas positions, ensure the accuracy of the data, and analyze the data to identify the causes of delays and expedite the hiring process. All three bureaus have adopted new human resources software that may provide them with better data to track their hiring process, and the bureaus have started to analyze these data to identify what steps are causing delays in the hiring process. We continue to believe that having accurate hiring data and finding ways to reduce the lengthy hiring process are important steps toward resolving Interior’s hiring challenges and may prove especially important if the oil and gas market shifts.

Concerning training, Interior has not evaluated the bureaus’ training needs or the training’s effectiveness as required by federal law and regulations, and the bureaus have not fully evaluated their training efforts as directed by Interior policy. None of the bureaus has consistently performed annual evaluations of their training needs for all key staff, and only one of the bureaus has developed technical competencies that are critical to successful performance by these staff, as directed by Interior’s Departmental Manual.
Without evaluating training needs and developing such competencies, Interior cannot ensure that the training it provides for key oil and gas staff is linked to the competencies needed for the agency to perform effectively and that the training addresses any competency gaps. In addition, none of the bureaus has evaluated the effectiveness of the training as directed by the Departmental Manual. Because Interior and its bureaus have not fully evaluated their training efforts, Interior may not be able to ensure that its key oil and gas staff are being adequately trained to execute their oversight tasks, and it may not be spending training funds effectively and efficiently. BLM’s inspector certification training program stands out as an exception to these general findings because BLM has evaluated inspectors’ training to ensure that they have learned and can apply skills critical to their oversight duties. In contrast, BSEE does not give inspectors proficiency examinations to measure learning or application of skills and does not certify them, as recommended by two oversight bodies in 2010. Although BSEE officials said they were finalizing their efforts to initiate the first evaluation of their training efforts and were planning to complete a statement of what work would be included in the second evaluation, unless they follow through with and complete these efforts, the bureau cannot verify that its inspectors are adequately trained.

Moreover, the Office of Policy, Management and Budget, which is responsible for managing Interior’s human resources and addressing cross-cutting issues, has not effectively facilitated collaboration among the bureaus in addressing their shared hiring, retention, and training challenges. Senior officials in Interior’s Office of Policy, Management and Budget did not identify any collaboration mechanisms currently being used to bring the three bureaus together to discuss their shared human capital challenges and to share training resources. In the absence of such a collaboration mechanism, the bureaus have sometimes acted in a fragmented, overlapping, and potentially duplicative fashion to resolve similar challenges. Without further leadership from the Office of Policy, Management and Budget to create or make better use of an existing mechanism, such as the Deputies Operating Group, Principals Operating Group, or the Interior Training Directors Council, to facilitate collaboration in hiring, retention, and training, the bureaus may continue to address their shared challenges through fragmented and potentially duplicative efforts.

To help ensure that Interior can hire, retain, and train the staff it needs to provide effective oversight of oil and gas activities on federal lands and waters, we recommend that the Secretary of the Interior take the following five actions:

Direct the Assistant Secretary for Policy, Management and Budget to:

- Regularly evaluate the effectiveness of its available incentives, such as special salary rates, the student loan repayment program, and other incentives, in hiring and retaining key oil and gas staff.

- Annually evaluate the bureaus’ training programs, including staff training needs, the effectiveness of the training provided, and potential opportunities for the bureaus to share training resources.

Direct the Assistant Secretary for Land and Minerals Management to:

- Develop technical competencies for all key oil and gas staff.

- Evaluate the need for and viability of a certification program for BSEE inspectors.
Direct the Assistant Secretary for Policy, Management and Budget to coordinate with the Assistant Secretary for Land and Minerals Management to create or use an existing mechanism, such as the Deputies Operating Group, Principals Operating Group, or the Interior Training Directors Council, to facilitate collaboration across the three bureaus in addressing their shared hiring, retention, and training challenges.

We provided our draft report to Interior for review and comment. Interior provided written comments, in which it agreed with one of the five recommendations in the draft report, partially agreed with three others, and disagreed with the remaining recommendation. Interior’s comments are reproduced in appendix II, and key clarifying points from the department are discussed below in the context of our recommendations. Interior also provided technical comments, which we incorporated as appropriate.

Interior agreed with our first recommendation, which would have the Assistant Secretary for Policy, Management and Budget regularly evaluate the effectiveness of its available incentives. Interior also submitted several points of clarification and comments regarding our related findings.

Interior clarified that it now has full approval for the special salary rates. Interior also provided documents showing performance metrics it would use to track and monitor the impact of special pay rates and other pay flexibilities, such as incentive payments. We added language to our report to further acknowledge these actions.

Interior disagreed (1) with the accuracy of how the report portrayed the Office of Policy, Management and Budget’s role and (2) that the office had missed opportunities to collaborate across the bureaus, especially as it related to special salary rates for key positions. Interior stated that the office was an integral partner, collaborator, and coordinator among the departmental stakeholders and the bureaus’ leadership, human capital, and budget teams. In response to Interior’s comments, we added language to specifically identify the Office of Policy, Management and Budget’s role and actions in the special salary process. Regarding missed opportunities, Interior disagreed that BOEM was excluded from the collaborative process for the special salary requests. In the draft report, we did not state that BOEM was excluded but rather that BOEM regional managers said they were not aware that BLM was requesting the special salary rate for its natural resource specialists and did not know that they could do so. Therefore, while Interior stated that officials from the Office of Policy, Management and Budget said that the office collaborated and coordinated among departmental stakeholders, it appears not all stakeholders were equally informed.

Interior stated that BLM’s inclusion of natural resource specialists does not have a negative impact on BOEM mission delivery. We did not address such an impact in our report. We did state, however, that since BLM can pay a natural resource specialist 35 percent more than BOEM can, this difference may place BOEM at a disadvantage in its recruitment efforts and its ability to retain staff if its natural resource specialists leave to take a comparable position at BLM.

Interior stated that the data demonstrated that the greatest need for BLM to acquire natural resource specialists was within the North Dakota region and that BOEM does not maintain offices in that region.
However, BLM also offers the special salary rate for natural resource specialists in other states where BOEM does maintain offices. In addition, federal employees can relocate from one state to another to take a new job.

Interior partially agreed with our second recommendation, to have the Assistant Secretary for Policy, Management and Budget annually evaluate the bureaus’ training programs, including training needs, training effectiveness, and potential opportunities for the bureaus to share training resources. Interior said that the Office of Policy, Management and Budget would ensure that the three bureaus are coordinating their training needs and that its Office of Strategic Employee and Organization Development can validate the bureaus’ engagement in this activity and provide support in fulfilling these recommendations. While these steps may be useful, as stated in the report, Interior has not evaluated the bureaus’ training needs or the training’s effectiveness as required by federal law and regulations, and the bureaus have not fully evaluated their training efforts as directed by Interior policy. We continue to believe that the Office of Policy, Management and Budget is required by law and regulation to evaluate the bureaus’ training programs. Without evaluating the bureaus’ training programs, Interior cannot ensure that the training provided is sufficient to support the required oversight duties. Interior also submitted several points of clarification and comments regarding related findings.

Interior stated that our report assumed that BOEM and BSEE should be acquiring technical training from BLM, which, according to Interior, does not accurately reflect the analysis conducted to determine the training needs for offshore development or recognize the training coordination that does occur. Relatedly, Interior stated that we did not acknowledge the vastly different skill sets needed to inspect or permit equipment for onshore versus offshore facilities. However, our draft report did not state or assume that BLM would be training these bureaus, and we did not recommend such an action. We did state that it appears that Interior missed opportunities to improve the bureaus’ training efforts and facilitate the sharing of training resources in areas such as curricula development, which led to our second recommendation. With regard to the differences in skill sets needed for inspections, our interviews with agency officials support the point that there are differences in these two inspection environments. We added language to our report to better acknowledge these differences. Nonetheless, our interviews also indicate that there are common skills and knowledge used to inspect onshore and offshore facilities. This point is illustrated by the fact that 15 BSEE staff took one or more of BLM’s inspector certification training modules from fiscal year 2012 through fiscal year 2015, according to BLM documentation. We added language to our report to recognize that BSEE staff took this BLM training.

Interior stated that our draft report did not recognize the training and coordination that occurs and described collaborative efforts between BSEE and BLM regarding training. We added language to our report to recognize the BSEE staff who took BLM training.
Nonetheless, it appears that the Office of Policy, Management and Budget has missed opportunities to facilitate the sharing of training resources, and we continue to believe that there is a need for the type of evaluations called for in our recommendation. Once the bureaus have made these evaluations, they should be better able to identify overlapping skill sets which could then be addressed by sharing training resources. Interior also noted, with regard to BSEE training, that it would be difficult and expensive to continuously update standard certification modules and tests to keep pace with the technology changes in the offshore oil and gas industry. Interior stated that BSEE therefore chose to rely on vendors, rather than in-house expertise, to provide classroom training. However, based on our review, none of the bureaus has performed a level 5 evaluation, which would compare the benefits and costs of training. As a result, the bureaus do not know whether it would be cost effective to update certification modules rather than continue the current reliance on vendors. Interior partially agreed with our third recommendation that directed the Assistant Secretary for Policy, Management and Budget to develop technical competencies for all key oil and gas staff. In its comments, Interior said that because oil and gas occupations are highly technical positions, the bureaus would be best positioned to identify technical competencies. We agree and have redirected our recommendation to the Assistant Secretary for Land and Minerals Management, where the three bureaus are housed. Interior disagreed with our fourth recommendation that directed the Assistant Secretary for Policy, Management and Budget to evaluate the need for and viability of a certification program for BSEE inspectors. Regarding this recommendation, Interior said that oil and gas inspection is highly technical and that BSEE was in the best position to evaluate the technical training needed to carry out its authorities and responsibilities. Based on this comment, we have redirected this recommendation to the Assistant Secretary for Land and Minerals Management. Concerning our related findings, Interior stated that the report does not recognize that although BSEE Level II inspectors do not receive a formal certificate, they receive a hands-on personal evaluation and approval from a supervisory inspector. According to Interior, this supervisory approval confirms that the Level II inspector attained all of the knowledge necessary through course work and supervised on-the-job training—and, more importantly, that the inspector sufficiently demonstrated these skills in the field—to become a Level III inspector. Although our current review of training focused on technical training delivered through classroom instruction and did not directly include an evaluation of on-the-job training, we agree that such efforts are an important part of an inspection training program. However, in July 2012, we reported that senior and regional office officials stated that relying on a combination of on-the-job training, which included pairing senior inspectors with newly hired inspectors, and some classroom instruction produced inconsistent results because some senior inspectors proved to be less effective trainers than others. 
We believe that BLM’s model of training inspectors through a certification program may offer some advantages over BSEE’s current approach, and we continue to believe that the need for and viability of such a program for BSEE inspectors should be evaluated.

Interior partially agreed with our fifth recommendation, which directed the Assistant Secretary for Policy, Management and Budget to coordinate with the Assistant Secretary for Land and Minerals Management to create or use an existing mechanism to facilitate collaboration across the three bureaus in addressing their shared hiring, retention, and training challenges. Interior stated that coordination already exists among the bureaus and that, as part of the Office of Policy, Management and Budget’s quarterly review of performance data, the office will ensure that the bureaus continue to coordinate on hiring, retention, and training. However, Interior disagreed with our statement that the Office of Policy, Management and Budget has missed opportunities to collaborate across bureaus to address recruitment and retention challenges. Our report identifies examples of missed opportunities for collaboration, including BSEE and BOEM recruitment teams who, according to team members, participated in recruitment events such as job fairs separately and did not give prospective applicants information about career opportunities available at the other bureaus, even though they sought to hire staff with similar skills. Because of these findings, we continue to believe that the Office of Policy, Management and Budget should take a greater leadership role in facilitating collaboration to address shared challenges across the bureaus.

As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Secretary of the Interior, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or ruscof@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.

To examine the Department of the Interior’s (Interior) efforts to resolve its hiring and retention challenges for key oil and gas staff, we developed a statistical model to examine the main factors that would reduce the likelihood that federal employees in key positions would leave those positions; these key positions correspond to the positions of key oil and gas staff at the Bureau of Land Management (BLM), the Bureau of Safety and Environmental Enforcement (BSEE), and the Bureau of Ocean Energy Management (BOEM). We analyzed the probability of retention of federal employees hired on a permanent basis into key oil and gas occupations from fiscal years 2003 through 2014. We used the Enterprise Human Resources Integration (EHRI) database, which contains information on variables such as adjusted basic pay, occupation, the agency where the employee worked, hiring, separation, and awards.
We supplemented the EHRI data with data from the Standard & Poor’s 500 Energy Index (to measure demand from the private sector for these key employees) and with data from BLM to identify specifically those employees working in oil- and gas-related positions. Our model considered only federal employees who were hired as either career competitive, conditional competitive, or career excepted; thus, other types of hires, such as transfers-in or temporary hires, were not included. We included only employees in our list of “key occupations” throughout the federal government. In order to simplify our analysis, we did not include employees with multiple periods of employment; that is, we considered only those employees who were hired one time from 2003 through 2014. Employees who were hired more than once accounted for only about 2 percent of the total number of hires during that time. In order to be comprehensive and include separations other than just resignations, we also included as “Quits” employees who made an interagency transfer, either horizontal (same grade) or upward (higher grade). Employees who separated for other reasons, such as retirement or death, or who were still employed at the end of fiscal year 2014, were treated as “Censored” by the model, and no account was taken of the differences among these types of “Exits” in the analysis. However, in order to mitigate the effect on our model of possible separations due to retirement or death, we excluded employees who were 50 or older at the time they were hired.

We modeled the probability that the i-th employee quits in month t as

P(i,t) = F(b'z(i,t)) = exp(b'z(i,t)) / (1 + exp(b'z(i,t))),

where F is the cumulative logistic probability distribution describing the probability of the i-th employee quitting at time (month) t, and z(i,t) is a list (vector) of variables that are believed to be associated with the i-th employee’s probability of quitting at time t. Each employee is in the study for T_i months, and the data comprise each employee-month between the time an employee was hired and the time that they either quit or were censored out of the study.

We used the following explanatory variables in our model:

- The employee’s age at the time they were hired.

- The employee’s gender.

- The organization where the employee worked. We split this category into the following groups:
  - BLM employees in the key occupations who were also identified by BLM as performing oil- and gas-related work.
  - Other BLM employees in the key occupations.
  - BOEM employees in key occupations after 2011.
  - BSEE employees in key occupations after 2011.
  - BOEMRE/MMS employees in key occupations through 2011. Note that since we are using time-varying covariates, this category changed starting in 2012 for any employee who was employed during the redefinition of sub-agency organizations and consequent reorganization.
  - Other Department of the Interior employees in key occupations.
  - Employees in key occupations at federal agencies other than Interior.

- The frequency with which an employee received an award; specifically, the number of awards in a given fiscal year per month employed (at risk) in that fiscal year. We included several award categories, among them student loan repayments.

- Adjusted basic pay (salary) for the fiscal year.

- Geographic location; specifically, the U.S. Census Division where the employee’s duty station was located.
- A set of time dummy variables indicating the employment duration quarter for a given employee; that is, a dummy for any employee in their first quarter of employment, a dummy for any employee in their second quarter of employment, and so on, up to a maximum of 47 dummies (there are 48 quarters from the start of 2003 to the end of 2014, which allowed for 48 minus 1, or 47, dummy variables).

- The percentage growth rate of the Standard & Poor’s 500 Energy Index, which measured the health of the private energy sector and the consequent source of possible demand for federal employees in the key occupations.

A detailed set of results is shown in table 4. The main results pertinent to our study were as follows:

- All the awards variables except for student loan payments were significantly associated with lowering the probability of quitting. The student loan payments were significant at about the 6 percent level, but we hypothesized that these loan payments are more likely to go to younger employees. This hypothesis was supported by our results when we ran a second model that included an interaction term between student loan payments and employees’ age when they were hired. In this second model, the student loan payments were significant and associated with a lower probability of quitting, and the interaction term was positive, suggesting that the effect on reducing the probability of quitting is greater for younger employees.

- Higher adjusted basic pay (salary) was significantly associated with a lower probability of quitting, with the odds of retention higher by 1.8 percent for each additional $1,000 in salary.

- A faster-growing private energy sector, as measured by the growth in the Standard & Poor’s 500 Energy Index, was significantly associated with a higher probability of quitting. This supports the hypothesis that key occupation employees are attracted away from federal employment when the private energy sector is performing well.

- Organization results: relative to the base case, namely key occupation employees outside Interior, the following groups had a significantly higher likelihood of quitting:
  - BLM employees identified by BLM as key oil- and gas-related employees.
  - BSEE key occupation employees.
  - Other (outside BOEM, BSEE, and BLM) Interior key occupation employees.

- Occupation results: relative to the base case, namely the General Inspection, Investigation and Compliance occupation, the following occupations had a higher likelihood of quitting:
  - General Natural Resource Management and Biological Scientists.
  - Environmental Protection Specialists.

(For illustration, a simplified sketch of how such a model can be estimated appears after the staff acknowledgments below.)

In addition to the individual named above, Dan Haas (Assistant Director), John Barrett, Mark Braza, Scott Bruckner, Antoinette Capaccio, Michael Kendix, Angela Miles, and Cynthia Norris made significant contributions to this report. Also contributing to this report were David Bennett, Andrew Berglund, Ashely Chaifetz, Eric Charles, Keya Chateauneuf, Clifton Douglas, Glenn Fischer, Tom Gilbert, Paige Gilbreath, Holly Hobbs, Steven Lozano, Sarah Martin, Gloria Ross, Lillian Slodkowski, Matt Tabbert, Sarah Veale, Amy Ward-Meier, Michelle Wong, and Arvin Wu.
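The estimation approach described above can be implemented as a logistic regression on employee-month records. The following is a minimal sketch, not GAO's actual code: the file name, column names, and variable codings are hypothetical placeholders, and the real analysis of EHRI data would involve additional data preparation and the full set of variables listed above.

```python
# Minimal sketch of a discrete-time hazard (logit) model of quitting,
# estimated on one row per employee per month at risk. File and column
# names are hypothetical; this is not the EHRI data or GAO's code.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

panel = pd.read_csv("employee_months.csv")  # one row per employee-month

# quit = 1 in the month an employee quits (including interagency
# transfers), 0 otherwise; censored employees (retirement, death, or
# still employed at the end of fiscal year 2014) simply stop
# contributing rows, with quit = 0 throughout.
model = smf.logit(
    "quit ~ age_at_hire + C(gender) + C(organization) + C(census_division)"
    " + awards_per_month_at_risk + salary_thousands + sp_energy_growth"
    " + C(duration_quarter)",  # employment-duration dummies
    data=panel,
).fit()

print(model.summary())

# A coefficient b translates into an odds ratio of exp(b): with salary
# measured in thousands of dollars, exp(b) is the multiplicative change
# in the odds for each additional $1,000 (for example, exp(b) = 1.018
# corresponds to a change of about 1.8 percent).
print(np.exp(model.params))
```

Because each employee contributes multiple monthly observations, standard errors in practice might be clustered by employee; the sketch omits this and other refinements, such as the interaction term between student loan payments and age at hire used in the second model.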
The explosion onboard the Deepwater Horizon drilling rig in April 2010 highlighted the importance of effective oversight of oil and gas activities, but Interior has faced challenges in hiring, retaining, and training staff responsible for such oversight. Since 2011, Interior's management of federal oil and gas resources has been on GAO's list of program areas that are at high risk, partly because of human capital challenges. In a February 2015 update to the list, GAO found that Interior had begun to address these challenges but needed to do more. GAO was requested to review the status of Interior's human capital challenges. This report examines Interior's efforts to (1) resolve its hiring and retention challenges for key oil and gas staff and (2) address its training needs for such staff. GAO reviewed regulations, reports, and department documents; analyzed Interior and OPM information; and interviewed department officials.

The Department of the Interior has taken steps to resolve its hiring and retention challenges for key staff engaged in oil and gas activities, but it has not evaluated the effectiveness of its efforts and has missed opportunities to collaborate within the department in resolving these challenges. Specifically, Interior has taken steps to address two underlying factors (lower salaries and a lengthier hiring process compared with industry) that impede its ability to hire and retain such staff. For example, in fiscal year 2012 Interior began using special salary rates to give higher pay to certain key staff in its bureaus that oversee oil and gas resources: the Bureau of Land Management (BLM), the Bureau of Safety and Environmental Enforcement (BSEE), and the Bureau of Ocean Energy Management (BOEM). To bolster compensation further, some bureaus increased the number of staff receiving student loan repayments and other incentives. Officials said these efforts helped fill positions in fiscal year 2015, but the bureaus had not evaluated the effectiveness of their efforts. As a result, Interior cannot determine how or whether it should alter its approach. Regarding the lengthy hiring process, the bureaus recently adopted new human resources software that may provide them with better data to track their hiring process.

As the bureaus sought to improve hiring and retention, Interior's Office of Policy, Management and Budget, which is charged with managing human resources and addressing cross-cutting issues, missed opportunities to facilitate collaboration across the bureaus. For example, two bureaus used separate recruitment teams that did not collaborate. Senior officials in the office did not identify any collaboration mechanism that they used to bring the bureaus together to discuss shared challenges. Without such a mechanism, the bureaus may continue to address these challenges through fragmented and potentially duplicative efforts.

Interior has trained key oil and gas staff without fully evaluating the bureaus' staff training needs or the training's effectiveness, according to officials, and Interior has provided limited leadership in facilitating the bureaus' sharing of training resources. The Federal Workforce Flexibility Act of 2004 and Office of Personnel Management (OPM) regulations require agencies to evaluate their training efforts, but Interior's Office of Policy, Management and Budget has not performed these evaluations.
In addition, none of the bureaus has evaluated training, according to officials, and only one developed technical competencies for staff as directed in Interior's Departmental Manual. Further, BSEE's training for inspectors does not include proficiency examinations or certifications, according to officials, although two oversight bodies recommended implementing a certification program in 2010. Interior has provided limited leadership in facilitating the sharing of training resources across the bureaus, appearing to miss opportunities that could improve the use of these resources. For example, BOEM does not have staff to develop curricula or evaluate training efforts, and, as of July 2016, BSEE had 6 full-time staff in its training program, according to officials. These bureaus conduct limited evaluations. In contrast, BLM had 59 staff in its training program and has the capacity to evaluate its training efforts, according to officials. Without further evaluation and leadership, Interior may not be able to ensure that key oil and gas staff are adequately trained for their oversight tasks, and the bureaus may miss opportunities to share resources.

GAO is recommending that Interior evaluate the effectiveness of special salary rates and incentives, evaluate its bureaus' training programs, develop technical competencies for all key oil and gas staff, evaluate the need for a BSEE inspector certification program, and better facilitate collaboration across the bureaus. Interior agreed with one recommendation, partially agreed with three others, and disagreed with one recommendation. GAO continues to believe that the recommendations are valid, as discussed in the report.
NRC is responsible for ensuring that the nation’s 103 operating commercial nuclear power plants pose no undue risk to public health and safety. Now, however, the electric utility industry is faced with an unprecedented, overarching development: the economic restructuring of the nation’s electric power system from a regulated industry to one driven by competition. According to one study, as many as 26 of the nation’s nuclear power plant sites are vulnerable to shutdown because production costs are higher than the projected market prices of electricity. As the electric utility industry is deregulated, operating and maintenance costs will affect the competitiveness of nuclear power plants. NRC acknowledges that competition will challenge it to reduce unnecessary regulatory burden while ensuring that safety margins are not compromised by utilities’ cost-cutting measures.

Since the early 1980s, NRC has been considering the role of risk in the regulatory process, and in August 1995, NRC issued a policy statement that advocated certain changes in the development and implementation of its regulations through an approach more focused on risk assessment. Under such an approach, NRC and the utilities would give more emphasis to those structures, systems, and components deemed more significant to safety. The following example illustrates the difference between NRC’s existing approach and a risk-informed approach. One particular nuclear plant has about 635 valves and 33 pumps that the utility must operate, maintain, and periodically replace according to NRC’s existing regulations. Under a risk-informed approach, the utility found that about 515 valves and 12 pumps presented a low safety risk. The utility also identified 25 components that were a high risk but would have been treated the same as other components under the existing regulations. If the utility concentrated on the remaining 120 valves (635 minus 515) and 21 pumps (33 minus 12), along with the 25 components identified as having a high safety risk, it could reduce its regulatory compliance burden and costs.

NRC staff estimate that it could take 4 to 8 years to implement a risk-informed regulatory approach and are working to resolve many issues to ensure that the new approach does not endanger public health and safety. Although NRC has issued guidance for utilities to use risk assessments to meet regulatory requirements for specific activities and has undertaken many activities to implement a risk-informed approach, more is needed to:

- ensure that utilities have current and accurate documentation on the design of the plant and the structures, systems, and components within it, as well as final safety analysis reports that reflect changes to the design and other analyses conducted after NRC issued the operating license;

- ensure that utilities make changes to their plants based on complete and accurate design and final safety analysis information;

- determine whether, how, and what aspects of NRC’s regulations to change;

- develop standards on the scope and detail of the risk assessments needed for utilities to determine that changes to their plants’ design will not negatively affect safety; and

- determine whether compliance with risk-informed regulations should be mandatory or voluntary.

Furthermore, NRC has not developed a comprehensive strategy that would move its regulation of nuclear plant safety from its traditional approach to an approach that considers risk. Design information provides one of the bases for NRC’s safety regulation.
Yet, for more than 10 years, NRC has questioned whether utilities had accurate design information for their plants. Inspections of 26 plants that NRC completed early in fiscal year 1999 confirmed that for some plants (1) utilities had not maintained accurate design documentation, (2) NRC did not have assurance that safety systems would perform as intended at all times, and (3) NRC needed to clarify what constitutes design information subject to NRC’s regulations. As of November 1998, NRC had taken escalated enforcement actions for violations found at five plants: Three Mile Island, Perry, H.B. Robinson, Vermont Yankee, and D.C. Cook. NRC took these actions because it did not have assurance that the plants’ safety systems would perform as intended. One utility, American Electric Power, shut down its D.C. Cook plant as a result of the inspection findings.

NRC does not plan additional design team inspections because it concluded that the industry did not have serious safety problems. NRC’s Chairman disagreed with this broad conclusion, noting that (1) the inspection results for the five plants indicate the importance of maintaining current and accurate design and facility configuration information, (2) the inspections did not apply to the industry as a whole but only to certain utilities and plants within the industry, and (3) other NRC inspections identified design problems at other nuclear power plants, such as Crystal River 3, Millstone, Haddam Neck, and Maine Yankee. The Commissioners and staff agreed that NRC would oversee design information issues using such tools as safety system engineering inspections.

The 26 inspections also identified a need for NRC to better define the elements of a plant’s design that are subject to NRC’s regulations. NRC staff acknowledge that the existing regulation is a very broad, general statement that has been interpreted differently among NRC staff and among utility and industry officials. According to NRC staff, it is very difficult to develop guidance describing what constitutes adequate design information. Therefore, NRC has agreed that the Nuclear Energy Institute (NEI) would provide explicit examples of what falls within design parameters. NEI plans to draft guidance that will include examples of design information and provide it to NRC in January 1999. Concurrently, NRC is developing its own regulatory guidance on design information. NRC staff expect to recommend to the Commission in February 1999 that it endorse either NRC’s or NEI’s guidance and seek approval to obtain public comments in March or April 1999. NRC staff could not estimate when the agency would complete this effort.

At the time NRC licenses a plant, the utility prepares a safety analysis report; NRC regulations require the utility to update the report to reflect changes to the plant design and the results of analyses that support modifying the plant without prior NRC approval. As such, the report provides one of the foundations to support a risk-informed approach. Yet, NRC does not have confidence that utilities make the required updates, which results in poor documentation of the safety basis for the plants. NRC published guidance for the organization and contents of safety analysis reports in June 1966 and updated the guidance in December 1980.
NRC acknowledges that the guidance is limited, resulting in poorly articulated staff comments on the quality of the safety analysis reports and a lack of understanding among utilities about the specific aspects of the safety analysis reports that should be updated. On June 30, 1998, NRC directed its staff to continue working with NEI to finalize the industry’s guidelines on safety analysis report updates, which NRC could then endorse. Once the agency endorses the guidelines, it will obtain public comments and revise them, if appropriate. NRC expects to issue final guidelines in September 1999.

According to NRC documents, if a utility does not have complete and accurate design information, the evaluations conducted to determine whether it can modify a plant without prior NRC approval can lead to erroneous conclusions and jeopardize safety. For more than 30 years, NRC’s regulations have provided a set of criteria that utilities must use to determine whether they may change their facilities (as described in the final safety analysis report) or procedures or conduct tests and experiments without NRC’s prior review and approval. However, in 1993, NRC became aware that Northeast Nuclear Energy Company had refueled Millstone Unit 1 in a manner contrary to that allowed in the updated final safety analysis and its operating license. This led NRC to question the regulatory framework that allows licensees to change their facilities without prior NRC approval. As a result, NRC staff initiated a review to identify the short- and long-term actions needed to improve the process. For example, in October 1998, NRC published a proposed regulation regarding plant changes in the Federal Register for comment; the comment period ended on December 21, 1998. NRC requested comments on criteria for identifying changes that require a license amendment and on a range of options, several of which would allow utilities to make changes without prior NRC approval despite a potential increase in the probability or consequences of an accident. NRC expects to issue a final regulation in June 1999. In addition, in February 1999, NRC staff expect to provide their views to the Commission on changing the scope of the regulation to consider risk. NRC’s memorandum that tracks the various tasks related to a risk-informed approach and other initiatives did not show when NRC would resolve this issue.

Until recently, NRC did not consider whether and to what extent the agency should revise all its regulations pertaining to commercial nuclear plants to make them risk-informed. Revising the regulations will be a formidable task because, according to NRC staff, inconsistencies exist among the regulations and because a risk-informed approach focuses on the potential risk of structures, systems, or components, regardless of whether they are located in the plant’s primary (radiological) or secondary (electricity-producing) systems. With one exception, NRC has not attempted to extend its regulatory authority to the secondary systems. NRC staff and NEI officials agree that the first priorities in revising the regulations will be to define their scope, to define the meaning of such concepts as “important to safety” and “risk significant,” and to integrate the traditional and risk-informed approaches into a cohesive regulatory context. In October 1998, NEI proposed a phased approach to revise the regulations.
Under the proposal, by the end of 1999, NRC would define “important to safety” and “risk significant.” By the end of 2000, NRC would use the definitions in proposed rulemakings for such regulations as those defining design information and addressing the environmental qualification of electrical equipment. By the end of 2003, NEI proposes that NRC address other regulatory issues, such as the change process, the content of technical specifications, and license amendments. After 2003, NEI proposes that NRC address other regulations on a case-by-case basis. NRC staff agreed that the agency must take a phased approach when revising its regulations. The Director, Office of Nuclear Regulatory Research, said that, if NRC attempted to revise all provisions of the regulations simultaneously, it is conceivable that the agency would accomplish very little. The Director said that NRC needs to address one issue at a time while concurrently working on longer-term actions. He cautioned, however, that once NRC starts, it should be committed to completing the process. At a January 1999 meeting, NRC’s Chairman suggested a more aggressive approach that would entail risk-informing all regulations across the board. NRC’s memorandum that tracks the various tasks related to a risk-informed approach and other initiatives did not show when the agency would resolve this issue.

NRC and the industry view risk assessments as one of the main tools to be used to identify and focus on those structures, systems, or components of nuclear plant operations having the greatest risk. Yet, neither NRC nor the industry has a standard or guidance that defines the quality, scope, or adequacy of risk assessments. NRC staff are working with the American Society of Mechanical Engineers to develop such a standard. However, this issue is far from being resolved. The Society is developing the standard for risk assessments in two phases (internal events and emergency preparedness). NRC staff estimate that the agency would have a final standard on the first phase by June 2000 but could not estimate when the second phase would be complete. To ensure consistency with other initiatives, in December 1998, NRC staff requested the Commission’s direction on the quality of risk assessments needed to implement a risk-informed approach. Since it may be several years until NRC has a standard, the Commission should also consider the effect that the lack of a standard could have on its efforts to implement a risk-informed regulatory approach.

NRC has not determined whether compliance with revised risk-informed regulations would be mandatory or voluntary for utilities. In December 1998, NRC’s staff provided its recommendations to the Commission. The staff recommended that implementation be voluntary, noting that it would be very difficult to show that requiring mandatory compliance would increase public health and safety and that a mandatory approach could create the impression that current plants are less safe. In its analysis, the staff did not provide the Commission with information on the number of plants that would be interested in such an approach. In January 1999, the Commissioners expressed concern about a voluntary approach, believing that it would create two classes of plants operating under two different sets of regulations.

Utilities may be reluctant to shift to a risk-informed regulatory approach for various reasons. First, the number of years remaining on a plant’s operating license is likely to influence the utility’s views.
NRC acknowledged that if a plant's license is due to expire in 10 years or less, the utility may have nothing to gain by changing from the traditional approach. Second, the costs to comply may outweigh the benefits of doing so. Considering the investment needed to develop risk-informed procedures and operations and to identify safety-significant structures, systems, or components, utilities question whether a switch will be worth the reduction in regulatory burden and the cost savings that may result. Third, design differences and age disparities among plants make it difficult for NRC and the industry to determine how, or to what extent, a standardized risk-informed approach can be implemented across the industry. Although utilities built one of two types of plants (boiling water or pressurized water), each plant has design and operational differences. Thus, each plant is unique, and a risk-informed approach would require plant-specific tailoring.

Since the early 1980s, NRC has considered applying risk to the regulatory process. NRC staff estimate that it will be at least 4 to 8 years before the agency implements a risk-informed approach. However, NRC has not developed a strategic plan that includes objectives, time lines, and performance measures for such an approach. Rather, NRC has developed an implementation plan, in conjunction with its policy statement on considering risk, that is a catalog of about 150 separate tasks and milestones for their completion. It has also developed guidance for some activities, such as pilot projects in the four areas where the industry wanted to test the application of a risk-informed approach. In one case, NRC approved a pilot project for Houston Lighting and Power Company at its South Texas plant, but the utility found that it could not implement the project because it would conflict with other NRC regulations. Given the complexity and interdependence of NRC's requirements (such as regulations, plant design, and safety documents) and the results of ongoing activities, it is critical that NRC clearly articulate how the various initiatives will help achieve the goals set out in the 1995 policy statement. Although NRC's implementation plan sets out tasks and expected completion dates, it does not ensure that short-term efforts are building toward NRC's longer-term goals; does not link the various ongoing initiatives; does not help the agency determine the staff levels, training, skills, and technology needed to implement a risk-informed approach and the timing of those activities; does not provide a link between the day-to-day activities of program managers and staff and the objectives set out in the policy statement; and does not address the manner in which NRC would establish baseline information about the plants to assess the safety impact of a risk-informed approach. In a December 1998 memorandum, NRC staff said that once the Commission provides direction on whether and how to risk-inform the regulations and guidance on the quality of risk assessments to support their decisions for specific regulations, they would develop a plan to implement the direction provided. The staff did not provide an estimated time frame for completing the plan.
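To make concrete how the risk assessments discussed above single out risk-significant structures, systems, and components, the following sketch computes two standard importance measures for a deliberately simple, hypothetical fault tree. The system model, the failure probabilities, and the Python code are illustrative assumptions only; they are not drawn from NRC guidance or from any utility's actual risk assessment.

    # Toy probabilistic risk model: core damage occurs if both redundant
    # pumps fail or a single valve fails. All probabilities are invented.
    def risk(p):
        pumps = p["pump_a"] * p["pump_b"]          # both pumps must fail
        return 1 - (1 - pumps) * (1 - p["valve"])  # union of the two cut sets

    base = {"pump_a": 0.01, "pump_b": 0.01, "valve": 0.001}
    r0 = risk(base)

    for name in base:
        raw = risk(dict(base, **{name: 1.0})) / r0        # risk achievement worth
        fv = (r0 - risk(dict(base, **{name: 0.0}))) / r0  # Fussell-Vesely importance
        print(f"{name}: RAW = {raw:.1f}, FV = {fv:.3f}")

In this toy model the valve dominates (its risk achievement worth is roughly 900, against about 10 for either pump), which is the kind of ranking a risk-informed approach would use to focus regulatory attention. The quality-standard issue discussed above matters precisely because such rankings are only as good as the underlying models and data.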
For many years, the nuclear industry and public interest groups have criticized NRC's plant assessment and enforcement processes because they lacked objectivity, consistency, and predictability. In January 1999, NRC proposed a new process to assess overall plant performance based on generic and plant-specific safety thresholds and performance indicators. NRC is also reviewing its enforcement process to ensure consistency with the staff's recommended direction for the assessment process and other programs. In 1997 and 1998, we noted that NRC's process to focus attention on plants with declining safety performance needed substantial revisions to achieve its purpose as an early warning tool and that NRC did not consistently apply the process across the industry. We also noted that this inconsistency had been attributed, in part, to the lack of specific criteria, the subjective nature of the process, and the confusion of some NRC managers about their role in the process. NRC acknowledged that it should do a better job of identifying plants deserving increased regulatory attention and said that it was developing a new process that would be predictable, nonredundant, efficient, and risk-informed.

The process proposed in January 1999 includes seven "cornerstones." For each cornerstone, NRC will identify the desired result, important attributes that contribute to achieving the desired result, areas to be measured, and the various ways that exist to measure the identified areas. Three issues cut across the seven cornerstones: human performance, safety-conscious work environment, and problem identification and resolution. As proposed, NRC's plant assessment process would use performance indicators, inspection results, other information such as utility self-assessments, and clearly defined, objective decision thresholds. The process is anchored in a number of principles, including that (1) a level of safety performance exists that could warrant decreased NRC oversight, (2) performance thresholds should be set high enough to permit NRC to arrest declining performance, (3) NRC must assess both performance indicators and inspection findings, and (4) NRC will establish a minimum level of inspections for all plants, regardless of performance. Although some performance indicators would be generic to the industry, others would be plant-specific, based in part on the results that utilities derive from their risk assessments. However, the quality of risk assessments and the number of staff devoted to maintaining them vary considerably among utilities.

NRC expects to use a phased approach to implement the revised plant assessment process. NRC expects to pilot test the use of risk-informed performance indicators at eight plants beginning in June 1999, to fully implement the process by January 2000, and to complete an evaluation and propose any needed adjustments or modifications by June 2001. Between January 1999 and January 2001, NRC expects to work with the industry and other stakeholders to develop a comprehensive set of performance indicators to more directly assess plant performance relative to the cornerstones. For those cornerstones or aspects of cornerstones where it is impractical or impossible to develop performance indicators, NRC would use its inspections and utilities' self-assessments to reach a conclusion about plant performance.
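The role of "clearly defined, objective decision thresholds" in the proposed assessment process can be illustrated with a short sketch. The indicator, the numerical thresholds, and the graded responses below are hypothetical stand-ins, not NRC's actual criteria.

    # Hypothetical threshold-based assessment: an indicator value is compared
    # against fixed, published thresholds to select a graded regulatory
    # response. Indicator, thresholds, and responses are invented.
    THRESHOLDS = [
        (3.0, "baseline inspection program only"),
        (6.0, "increased NRC engagement"),
        (25.0, "additional inspections and senior management review"),
    ]

    def assess(indicator_value):
        for bound, response in THRESHOLDS:
            if indicator_value <= bound:
                return response
        return "unacceptable performance; plant-specific regulatory action"

    for plant, value in [("Plant A", 1.0), ("Plant B", 4.5), ("Plant C", 30.0)]:
        print(plant, "->", assess(value))

The point of such a scheme is that any stakeholder can reproduce the conclusion from the published indicator value and thresholds, which is what would distinguish the proposed process from the subjective judgments criticized in the past.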
NRC’s ensuring consistent implementation of the process ultimately established would further illustrate the Commissioners’ commitment. NRC has revised its enforcement policy more than 30 times since its implementation in 1980. Although NRC has attempted to make the policy more equitable, the industry has had longstanding problems with it. Specifically, NEI believes that the policy is not safety-related, timely, or objective. Among the more contentious issues are NRC’s practice of aggregating lesser violations into an enforcement action that results in civil penalties and its use of the term “regulatory significance.” To facilitate a discussion about the enforcement program, including the use of regulatory significance and the practice of aggregating lesser violations, at NRC’s request, NEI and the Union of Concerned Scientists reviewed 56 enforcement actions taken by the agency during fiscal year 1998. For example, NEI reviewed the escalated enforcement actions based on specific criteria, such as whether the violation that resulted in an enforcement action could cause an offsite release of radiation, onsite or offsite radiation exposures, or core damage. From an overall perspective, the Union concluded that NRC’s actions are neither consistent nor repeatable and that the enforcement actions did not always reflect the severity of the offense. According to NRC staff, they plan to meet with various stakeholders in January and February 1999 to discuss issues related to the enforcement program. Another issue is the use of the term “regulatory significance” by NRC inspectors. NRC, according to NEI and the Union of Concerned Scientists, uses “regulatory significance” when inspectors cannot define the safety significance of violations. However, when the use of regulatory significance results in financial penalties, neither NRC nor the utility can explain to the public the reasons for the violation. As a result, the public cannot determine whether the violation presented a safety concern. NEI has proposed a revised enforcement process. NRC is reviewing the proposal as well as other changes to the enforcement process to ensure consistency with the draft plant safety assessment process and other changes being proposed as NRC moves to risk-informed regulation. NRC’s memorandum of tasks shows that the staff expect to provide recommendations to the Commission in March 1999 that address the use of the term regulatory significance and in May 1999 on considering risk in the enforcement process. In January 1999, we provided the Congress with our views on the major management challenges that NRC faces. We believe that the management challenges we identified have limited NRC’s effectiveness. In summary, we reported that: NRC lacks assurance that its current regulatory approach ensures safety. NRC assumes that plants are safe if they operate as designed and follow NRC’s regulations. However, NRC’s regulations and other guidance do not define, for either a licensee or the public, the conditions necessary for a plant’s safety; therefore, determining a plant’s safety is subjective. NRC’s oversight has been inadequate and slow. Although NRC’s indicators show that conditions throughout the nuclear energy industry have generally improved, they also show that some nuclear plants are chronically poor performers. At three nuclear plants with long-standing safety problems that we reviewed, NRC did not take aggressive action to ensure that the utilities corrected the problems. 
As a result of NRC’s inaction, the conditions at the plants worsened, reducing safety margins. NRC’s culture and organizational structure have made the process of addressing concerns with the agency’s regulatory approach slow and ineffective. Since 1979, various reviews have concluded that NRC’s organizational structure, inadequate management control, and inability to oversee itself have impeded its effectiveness. Some of the initiatives that NRC has underway have the potential to address the first two management challenges. However, the need to ensure that NRC’s regulatory programs work as effectively as possible is extremely important, particularly in light of major changes taking place in the electric utility industry and in NRC. Yet changing NRC’s culture will not be easy. In a June 1998 report, the Office of the Inspector General noted that NRC’s staff had a strong commitment to protecting public health and safety. However, the staff expressed high levels of uncertainty and confusion about the new directions in regulatory practices and challenges facing the agency. The employees said that, in their view, they spend too much time on paperwork that may not contribute to NRC’s safety mission. The Inspector General concluded that without significant and meaningful improvement in management’s leadership, employees’ involvement, and communication, NRC’s current climate could eventually erode the employees’ outlook and commitment to doing their job. This climate could also erode NRC’s progress in moving forward with a risk-informed regulatory approach. According to staff, NRC recognizes the need to effectively communicate with its staff and other stakeholders and is developing plans to do so. Mr. Chairman and Members of the Subcommittee, this concludes our statement. We would be pleased to respond to any questions you may have. The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are accepted, also. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent. U.S. General Accounting Office P.O. Box 37050 Washington, DC 20013 Room 1100 700 4th St. NW (corner of 4th and G Sts. NW) U.S. General Accounting Office Washington, DC Orders may also be placed by calling (202) 512-6000 or by using fax number (202) 512-6061, or TDD (202) 512-2537. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.
Pursuant to a congressional request, GAO discussed the actions the Nuclear Regulatory Commission (NRC) has taken to move from its traditional regulatory approach to an approach that considers risk in conjunction with engineering analyses and operating experience, termed risk-informed regulation, focusing on the: (1) issues that NRC needs to resolve to implement a risk-informed regulatory approach; (2) status of NRC's efforts to make two of its oversight programs (overall plant safety assessments and enforcement) risk-informed; and (3) major management challenges that NRC faces. GAO noted that: (1) since July 1998, NRC has accelerated some activities needed to implement a risk-informed regulatory approach and has established and set milestones for others; (2) however, NRC has not resolved the most basic of issues; (3) that is, that some utilities do not have current and accurate design information for their nuclear power plants, which is needed for a risk-informed approach; (4) also, neither NRC nor the nuclear utility industry has standards or guidance that define the quality or adequacy of the risk assessments that utilities use to identify and measure the risks to public health and the environment; (5) furthermore, NRC has not determined whether compliance with risk-informed regulations will be voluntary or mandatory for the nuclear utility industry; (6) more fundamentally, NRC has not developed a comprehensive strategy that would move its regulation of the safety of nuclear power plants from its traditional approach to an approach that considers risk; (7) in January 1999, NRC released for comment a proposed process to assess the overall safety of nuclear power plants; (8) the process would establish generic and plant-specific safety thresholds and indicators to help NRC assess overall plant safety; (9) NRC expects to phase in the new process over the next 2 years and evaluate it by June 2001, at which time NRC would propose any adjustments or modifications needed; (10) in addition, NRC has been examining the changes needed to make its enforcement program consistent with, among other things, the proposed plant safety assessment process; (11) for many years, the nuclear industry and public interest groups have criticized the enforcement program as subjective; (12) in the spring of 1999, NRC staff expect to provide the Commission with recommendations for revising the enforcement program; (13) in January 1999, GAO identified major management challenges that limit NRC's effectiveness; (14) the challenges include the lack of a definition of safety and a lack of aggressiveness in requiring utilities to comply with safety regulations; and (15) NRC's revised plant safety assessment and enforcement initiatives may ultimately help the agency address these management challenges and carry out its safety mission more effectively and efficiently.
The federal High Performance Computing and Communications (HPCC) program began in fiscal year 1992 as a joint effort among nine federal agencies to significantly accelerate the availability and utilization of the next generation of high performance computers and networks. The overall goals of the program are to (1) extend U.S. technological leadership in high performance computing and provide wide dissemination and application of the technologies to speed the pace of innovation and to improve national economic competitiveness, national security, education, health care, and the global environment and (2) provide key parts of the foundation for the national information infrastructure (NII) and demonstrate selected NII applications.

Four agencies—the Advanced Research Projects Agency (ARPA), the Department of Energy (DOE), the National Aeronautics and Space Administration (NASA), and the National Science Foundation (NSF)—developed the original program plan for HPCC in 1989, and they remain the program's dominant participants. In fiscal year 1995, these agencies together will spend more than $900 million, or 81 percent of the official budget. Led by the White House Office of Science and Technology Policy (OSTP), programs at each of these agencies were drawn together to form the governmentwide HPCC program. Ten federal agencies currently participate in the HPCC program. OSTP has designated the National Science and Technology Council (NSTC) to oversee the HPCC program, through its Committee on Information and Communications. The NSTC, a cabinet-level organization created by the President in November 1993, is intended to serve, in part, as a mechanism for coordinating research and development strategies across the government and for monitoring agency research and development spending plans. Since the NSTC has only recently been established, it is still too early to gauge its impact on the HPCC program.

Table 1.1 presents an overview of reported HPCC spending to date and budgeted amounts for fiscal year 1995 by participating agency. The 1989 program plan laid out the original framework and parameters for the government's investment in HPCC, proposing that the program grow in even increments from a base of approximately $500 million to approximately $1.1 billion in its fifth year. In budgeting for the actual program, HPCC managers have adhered closely to these original targets. Spending is anticipated to continue at over $1 billion annually until 1998.

To date, the HPCC program and its predecessor agency programs have been highly successful. Participating agencies have been instrumental in establishing more than a dozen high performance computing research centers throughout the United States. Efforts to provide nationwide access to these centers through interconnected high-speed data networks have led to dramatic increases in the use of those networks. The computing research centers and networks have, in turn, allowed scientists to make significant advances in addressing the highly complex scientific problems that are collectively referred to as "grand challenges." Grand challenges include such problems as understanding global climate change, analyzing nuclear reactions, and mapping the human genetic structure.

In September 1992, OSTP established a National Coordination Office (NCO) to coordinate the activities of the agencies participating in HPCC and to serve as liaison to Congress, industry, academia, and the public.
The office’s director serves part-time; this individual is also director of the National Library of Medicine. The NCO provides administrative support, disseminates information, and chairs coordination meetings attended by officials of the participating agencies. The NCO does not assess agency HPCC programs or provide guidance to the agencies on their programs. It also does not review or have approval authority regarding agency HPCC budgets. Since HPCC is structured as a consortium of federal research agencies with independent programs and budgets, participating agencies can—and do—have widely varying approaches to research and development. The programs of the four major participants reveal the diversity of these agencies’ approaches. ARPA and NSF—the major participants in terms of expenditures—are quite different from NASA and DOE. ARPA and NSF concentrate more heavily on basic research, although all four agencies fund scientists working on practical applications of HPCC technologies. ARPA has been at the forefront of research into critical technologies, such as computer time-sharing, computer graphics, computer networks, and artificial intelligence, for many years. The agency has had a high performance computing program since the early 1980s. ARPA funds some 200 or more HPCC projects, most of which are relatively small-scale efforts costing between $100,000 and $500,000. Having no laboratories or centers of its own, ARPA funds projects that are run half by academic researchers and half by industry and other government researchers. It also funds the placement of HPCC computers and networks at research sites for use on a variety of research problems. NSF, like ARPA, funds a large number of relatively small-scale research projects in a wide range of scientific disciplines. NSF also is similar to ARPA in providing HPCC computing and communications infrastructure for a range of research uses. NSF does this by providing base funding for four national supercomputer centers that, in turn, support research in a range of disciplines, such as biotechnology, global change studies, and manufacturing design. NASA and DOE, in contrast to ARPA and NSF, are involved in HPCC primarily because of the potential for HPCC technology to enhance their ability to carry out agency missions. NASA’s projects, for example, are all linked to either (1) design and simulation of aerospace vehicles or (2) earth and space sciences research. Rather than investing heavily in research to design new computer architectures and build new systems, NASA concentrates on the use and evaluation of HPCC systems in the context of its mission needs. DOE similarly emphasizes the role of being an early user of advanced systems and providing feedback to the systems’ developers, rather than attempting to develop new system architectures on its own. Both NASA and DOE have laboratories and centers with extensive HPCC resources. Much of their HPCC funding goes to projects at these sites. In February 1993, the new administration issued a document outlining its strategy for investing in advanced technology. In the document, the administration rejected the traditional approach of limiting the federal government’s technology development spending to support of basic science and mission-oriented research in the Department of Defense (DOD), NASA, and other agencies. The document stated that challenges facing the U.S. were too profound to rely on the government’s investments in defense and space technology to trickle down to the private sector. 
Instead, it called for direct support of private sector technology development efforts. In keeping with this new thinking, the administration sought to align the HPCC program more closely with broader applications that could be developed and commercialized in the private sector. Specifically, HPCC was linked to the development of a national information infrastructure (NII). OSTP envisions the NII, which is commonly referred to as the "information superhighway," as a nationwide infrastructure of high performance computing hardware and massive computer databases, all linked together by high-speed communications networks and new software that allows trained users to access and use the information contained therein. The HPCC program's technology support for the NII is contained in a new program component added for fiscal year 1994, called Information Infrastructure Technology and Applications (IITA). The new component is intended to (1) develop the technology base for the NII and (2) work with industry in using this technology to develop and demonstrate new applications for the NII. The IITA component is also expected to broaden the market for HPCC technologies and accelerate industry development of the NII. In addition to the new IITA component, the HPCC program includes efforts undertaken in four other broad areas described below.

The High Performance Computing Systems component concentrates on the development of the underlying technology required to build scalable, parallel computer systems capable of sustaining trillions of operations per second on large problems. Most traditional computers have one computational processor, and traditional computer development has focused on making this processor faster and more efficient. However, the potential for continued increases in speed is reaching the limits imposed by the physical properties of the materials used to build the processor. Consequently, an entirely new kind of computer design is needed if speed and performance improvements are to continue. Computer scientists see the development of parallel processing systems as the only way to achieve the dramatic improvements in computer speed that will be needed to address large, complex scientific problems. Parallel processing means breaking computational problems into many separate parts and having a large number of processors tackle those parts simultaneously, as sketched in the example following this discussion. Greatly increased processing speed is achieved largely through the sheer number of processors operating simultaneously, rather than through any exceptional power in each processor. Massively parallel processing (MPP) refers to large machines that include many cooperating processors. Other approaches to parallel processing include clustering large numbers of independent workstations together or developing ways to link together a number of completely different computer systems to address a single complex problem in parallel.

The primary justification for developing increasingly more powerful parallel computer systems is to address the large, complex scientific problems commonly known as the "grand challenges." The grand challenges are fundamental problems in science and engineering that require significant increases in computational capability to address, such as predicting global climate change or testing advanced aircraft designs.
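The decomposition described above can be sketched in a few lines of code. The example below, which splits a large summation across several worker processes using Python's standard multiprocessing module, is only an illustration of the principle; real MPP and grand challenge codes manage data distribution and interprocessor communication far more carefully.

    # Minimal parallel decomposition: split a computation into independent
    # parts, compute the parts in separate processes, combine the results.
    from multiprocessing import Pool

    def partial_sum(bounds):
        lo, hi = bounds
        return sum(1.0 / (k * k) for k in range(lo, hi))

    if __name__ == "__main__":
        n, workers = 10_000_000, 4
        step = n // workers
        chunks = [(i * step + 1, (i + 1) * step + 1) for i in range(workers)]
        with Pool(workers) as pool:              # one process per chunk
            total = sum(pool.map(partial_sum, chunks))
        print(total)                             # approaches pi**2 / 6

Speed comes from the number of processes working at once rather than from any single processor, which is why the software challenge of coordinating the parts and combining their results grows with the machine.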
The Advanced Software Technology and Algorithms component of the HPCC program targets software development to make MPP and other high performance computer systems useful in addressing grand challenges. Radically new system software and software tools are needed to operate MPP and other parallel systems. Most potential users have not yet adopted parallel systems because of the high cost and risk of developing software for their specific applications and because system software for current parallel systems is still rather primitive. Major workshops on HPCC software convened in 1992 and 1993 agreed that greater focus on research to improve system software and software tools is critical if the HPCC program is to succeed.

The National Research and Education Network segment of the program focuses on the development of a national high-speed communications infrastructure to enhance the ability of U.S. researchers and educators to perform collaborative research and education activities, regardless of their physical location or available local computational resources. This segment has two parts: (1) development of an interagency internetwork and (2) gigabit research and development. The interagency internetwork program will upgrade the networks of participating agencies to higher speeds than are currently available and ensure their interconnection. The gigabit research and development program will develop new high-speed communications technologies through basic research and through experimentation with testbed networks located at various sites around the country.

The Basic Research and Human Resources segment supports long-term research by individual investigators in scalable high performance computing. It is also intended to increase the pool of trained personnel by enhancing education and training in HPCC. Finally, this segment provides the computing and communications resources needed to support these research and education activities.

With the recent addition of the IITA component, the HPCC program now targets two kinds of applications as ultimate beneficiaries of the technology being developed in the program. Program managers refer to these two groups of applications as "grand challenges" and "national challenges." Grand challenges, mentioned above, are aimed primarily at the scientific research community. National challenges, on the other hand, are defined in HPCC program documentation as major societal needs that HPCC technology can address, such as the civil infrastructure, digital libraries, education and lifelong learning, energy management, the environment, health care, manufacturing processes and products, national security, and public access to government information. While grand challenges address complex scientific questions, national challenges involve making use of large stores of data and information to enhance everyday activities. The national challenges are an identified subset of the wide range of potential applications that may be developed for the NII.

In April 1993, the House Committee on Armed Services requested that we evaluate the status of the HPCC program. On the basis of subsequent discussions with committee staff, our specific objectives were to assess (1) the effectiveness of the program's management structure in setting goals and measuring progress and (2) how extensively private industry has been involved in the planning and execution of the program. To meet our objectives, we reviewed official HPCC program documentation of the participating agencies and the NCO. We also reviewed the administration's statements regarding technology policy and the creation of the National Information Infrastructure.
We discussed these issues with government officials, academic officials, and private industry representatives from a broad range of organizations. Specifically, with regard to the program's management, we interviewed government officials at the Office of Science and Technology Policy, Executive Office of the President, Washington, D.C.; the National Economic Council, Executive Office of the President; the Office of Management and Budget, Executive Office of the President; the National Coordination Office for HPCC, National Library of Medicine; ARPA's Computing Systems Technology Office, Arlington, Virginia; the Department of Energy, Office of Energy Research, Gaithersburg, Maryland; NASA's High Performance Computing and Communications Office; the National Science Foundation, Directorate for Computer and Information Science and Engineering, Washington, D.C.; the National Security Agency, Ft. Meade, Maryland; and the National Institutes of Health, Bethesda, Maryland.

We also interviewed officials from government laboratories and centers, including Oak Ridge National Laboratory, Oak Ridge, Tennessee; the Cornell Theory Center, Ithaca, New York; the National Center for Supercomputing Applications, Urbana-Champaign, Illinois; the Pittsburgh Supercomputing Center, Pittsburgh, Pennsylvania; and the San Diego Supercomputer Center, San Diego, California.

We interviewed members of the academic community from the National Research Council, Computer Science and Telecommunications Board, Washington, D.C.; Syracuse University, Syracuse, New York; the California Institute of Technology, Pasadena, California; Rice University, Houston, Texas; the University of Washington, Seattle, Washington; Stanford University, Stanford, California; the University of California, Berkeley, California; and the University of Colorado, Boulder, Colorado.

Regarding industry's participation in the program, we reviewed reports prepared by industry associations and interviewed representatives of these associations, including the Computing Research Association, EDUCOM, the American Electronics Association, the Information Technology Association of America, and the Computer Systems Policy Project, all of Washington, D.C. We also interviewed industry officials representing Electronic Data Systems Corporation; Eastman Kodak Company; Microelectronics and Computer Technology Corporation; Boeing Computer Services; Visual Numerics, Inc.; Tera Computer Company; Schlumberger Well Services; General Motors Research Corporation; MasPar Computer Corporation; Silicon Graphics, Inc.; Sun Microsystems, Inc.; Eli Lilly & Company; and Cray Research, Inc.

A detailed audit of the funding of the HPCC program was beyond the scope of this review. Accordingly, we did not attempt to determine the appropriateness of funding for any specific HPCC projects or the merits of proposals that have not been funded. However, we did collect budget information from each of the six agencies included in the review in order to assess the program's management processes for tracking and reporting how funds are spent. We conducted our review from May 1993 to June 1994, in accordance with generally accepted government auditing standards. The Assistant to the President for Science and Technology provided written comments on a draft of this report. These comments are presented, along with our evaluation, in appendix I.

The HPCC program is a loosely coordinated group of research and development activities sponsored by a variety of federal agencies.
To date, the program’s broad technical goals have been driven by scientists’ need for ever-increasing computer power to address the grand challenges. Now, however, the administration is also counting on the HPCC program to help develop the new technology that will be needed to make the NII successful and to give the nation a competitive economic edge. In order to best ensure that it stays focused on achieving these more immediate goals, the HPCC program could use more explicit management controls. First, the program will need to set more specific, measurable technical goals by developing a prioritized technical agenda. Such a document would serve as a master program plan, identifying and prioritizing specific technical challenges and establishing a framework for managing costs and evaluating results. Second, the program could make HPCC budget and expenditure information more consistent and meaningful across participating agencies to improve public visibility into program funding patterns. As discussed in chapter 1, the HPCC program involves 10 federal agencies that have a wide variety of missions and approaches to research and development. A representative from the White House Office of Science and Technology (OSTP) stated, and researchers we contacted agreed, that this diversity of management approaches is a valuable asset in a research environment because it allows a variety of technical approaches to be explored. In addition, the major participating agencies—ARPA, DOE, NASA, and NSF—had conducted successful research and development programs related to HPCC for some years prior to the establishment of the joint program. As such, the OSTP representative stated that designating a strong central manager for the HPCC program would not be appropriate and that it would be disruptive to the programs to impose outside control over them. Instead of taking a centralized approach, the OSTP representative said participating agencies should be seen as members of a consortium, each pursuing their own objectives but coordinating their efforts. The agency program managers are members of a committee called the High Performance Computing, Communications, and Information Technology (HPCCIT) committee. This committee, which is chaired by the NCO, meets on a monthly basis to coordinate HPCC activities. A number of researchers told us that, to date, this arrangement has worked reasonably well. HPCC managers are generally given high marks by researchers for sharing information and coordinating their activities. Figure 2.1 shows the organization of the HPCC program. The program originally operated under the assumption that the advances it pioneers in high performance computing would eventually work their way down to widespread use for everyday activities throughout the private sector. Indeed, much computer research funded by ARPA and DOD in the past for military applications has been the foundation for technology widely used today in personal computers and communications networks. However, the administration now argues that the challenges facing the U.S. are too profound to rely on the government’s investments in defense and space technology to trickle down to the private sector. The administration intends the HPCC program to play a key role in a more focused approach to stimulating commercial development and application of new technologies. Measuring progress within the program remains an informal process. 
In October 1992, OSTP established guidelines for the formal ongoing evaluation of federal research programs. Although the guidelines require that a program such as HPCC submit a plan for continual and thorough evaluation of progress and outcomes, no such plan has yet been prepared.

Potential NII applications will require specific new technologies that the HPCC program has not yet identified and prioritized as technical goals. For example, users of the NII will need to access and manipulate databases of information that are much larger than today's systems can handle efficiently. Although some large-database technology research is going on within the HPCC program, no determination has yet been made about whether it is a priority area that should be emphasized. Outside commentators on the HPCC program have proposed a range of specific technology areas, such as this one, that could be targeted as a way of accelerating development of the NII.

Rather than targeting specific technology areas for accelerated development, the HPCC program has pursued research in many different aspects of advanced parallel computing. The program has had two broad technical goals, which it originally set out to achieve by 1996. One is to gain a thousandfold improvement in useful computing capability, and the other is to achieve a hundredfold improvement in computer communications capability. The program has aimed to address the full spectrum of hardware, software, networking, and training issues associated with developing this radically new breed of parallel computers. Although much faster computers and networks are certainly a basic need, particularly to enable scientists to address grand challenge problems, these goals are all-encompassing and do not give enough technical focus to the program. Because the goals are so broad, controversy and confusion have sometimes arisen as to what the "real" goals of the program are. For example, university and industry experts have observed that, in its original form, the program appeared to be concentrating heavily on developing new hardware architectures, with relatively little attention being paid to software issues, thus leaving systems difficult to use. More recently, the addition of the NII-oriented IITA component has further broadened the technology spectrum to be addressed by HPCC. Both participants and outside observers have questioned the extent to which the program is actually shifting its emphasis toward NII technology issues, given that the level of funding for IITA projects to develop applications in areas such as education and health care is minimal compared with funding for hardware systems development.

No official prioritization has yet been made. The program's annual report to the Congress describes ongoing work in a number of technical areas but does not prioritize among competing technical goals. For example, the annual report states that the five broad component areas of the program are considered equally important. Within the new IITA component, the document identifies a range of technologies that will be needed for the NII but does not prioritize them or offer an overall strategy for developing them. An explicit technical agenda, identifying and prioritizing specific technology challenges and establishing a framework of expected costs and results, could go a long way toward better defining the program's direction. This agenda could also provide the needed management framework for focusing on technologies in support of the NII.
Although it could take a variety of forms, an official technical agenda would specify a target amount of resources to be invested in each priority area and the major results that are expected. Subject to periodic review and adjustment, this document would clarify the program's goals and objectives, focus efforts on critical areas, and serve as a baseline for measuring program progress and results. One potential model for identifying and prioritizing technology challenges is a draft prepared by the Computer Systems Policy Project, an affiliation of American computer companies that have an interest in the national information infrastructure. The document identifies nine technology areas that will be critical to the success of the NII. For each of these nine areas, the document lists a number of specific technologies that need to be researched and developed and suggests which of these should receive priority attention.

Budgets and expenditures for HPCC activities, both inside and outside the program, have not been accounted for in a uniform and easily understood way. Accordingly, it is unclear how much money is actually being spent on advanced computing and communications and on what projects. Spending for the formal HPCC program has closely followed its original plan of expanding in even increments from a $500 million base program to approximately $1.1 billion in its fifth year. However, the program budget, which is often cited publicly as a measure of the federal government's investment in HPCC, actually offers little insight into how the federal government is investing in total in HPCC research and development. This is because participating agencies have diverse research programs and equally diverse ways of identifying and categorizing their HPCC spending. There are no uniform guidelines for determining what projects to include within the HPCC program or for categorizing those projects within the five major components of the program.

According to the official summary documents for the HPCC program that accompany the President's budget request each year, nearly $3 billion has already been spent on HPCC, and, beginning in fiscal year 1995, annual budgets will top $1 billion. However, these figures do not reflect the total federal investment in HPCC. Several types of research and infrastructure activities are not consistently included in or excluded from the program. For example, preexisting government supercomputer centers have sometimes been included in the HPCC program and sometimes not. Four supercomputer centers supported by NSF are included, as is the National Cancer Institute's center; however, the supercomputer center at the National Center for Atmospheric Research, also funded by NSF, is not included. Similarly, NASA includes some of its supercomputer facilities but not others. In each case, program managers have made their own judgments on what to include under HPCC since no programwide guidelines were available. Advanced computer research that is not directly related to the development of scalable parallel computers is another area that is neither clearly within nor clearly excluded from HPCC. NSF includes research into advanced optical computing, for example, whereas ARPA keeps its optical computing research separate from HPCC. NSF's HPCC program also supports fundamental research in areas such as the theory of computing, software engineering, and the theoretical aspects of computer systems, while ARPA funds this type of research outside the HPCC program.
HPCC program documentation uses five component categories to describe the types of research and development that are funded within the program (these five categories are defined in chapter 1). Although this categorization could be helpful in understanding how HPCC funds are spent, its value is diminished by discrepancies in the way agencies categorize their official HPCC spending. Currently, no uniform method for categorizing projects is used. Relying on the personal judgment of HPCC managers and coordinators, participating agencies group similar projects differently within the five program categories. For example, program documentation generally describes High Performance Computing Systems as the hardware component of the program. However, hardware spending also shows up in the Advanced Software Technology and Algorithms and Basic Research and Human Resources categories. The Basic Research and Human Resources component, in particular, overlaps all the other categories, since program managers have to determine whether research is "basic" and then categorize their projects accordingly. Program managers have listed a full spectrum of research activities under this component, from research on architectures and systems to software, algorithms, and applications. Because of these inconsistent classifications, it is difficult to determine what areas HPCC is really emphasizing—developing hardware platforms, writing systems software and tools, developing software applications, or none of these—or how much effort is being expended on each.

Explicit guidelines for preparing HPCC budgets across agencies, which do not currently exist, would afford greater visibility into the overall federal investment and would facilitate more informed assessments of whether appropriate emphasis is being placed on the areas that need the greatest attention. Such guidelines should include new, more precise budget categories that would provide visibility into how much is to be spent on operating supercomputer centers, placing computer systems, and other activities that support researchers but may not be research per se (a simplified illustration of such uniform reporting follows this discussion). In April 1994, the NCO issued a document providing a detailed analysis of the types of activities that each HPCC agency funds and how much is being spent on them. The new document is a step in the right direction in that it sets a standard format for all participating agencies to use in presenting budget information and presents more detailed information than has been publicly available before. However, the document does not resolve the discrepancies in how various agencies account for their HPCC activities. In addition to increasing visibility into the government's investment, more open and consistent reporting of HPCC funding could also broaden industry support for the program, because the program's major interest areas and priorities for funding would be clearer.
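As an illustration of what such uniform reporting guidelines might look like, the sketch below checks each agency's project entries against a single list of program components and activity types before aggregating them. The five component names are the program's published categories; the activity types, project records, and code itself are hypothetical.

    # Hypothetical uniform budget reporting: every project is tagged with one
    # published program component and one activity type, so spending can be
    # aggregated consistently across agencies. Activity types are invented.
    COMPONENTS = {
        "High Performance Computing Systems",
        "Advanced Software Technology and Algorithms",
        "National Research and Education Network",
        "Basic Research and Human Resources",
        "Information Infrastructure Technology and Applications",
    }
    ACTIVITIES = {"research", "center operations", "system placement", "training"}

    def validate(project):
        assert project["component"] in COMPONENTS, project
        assert project["activity"] in ACTIVITIES, project

    projects = [
        {"agency": "NSF", "component": "Basic Research and Human Resources",
         "activity": "center operations", "budget_musd": 25.0},
        {"agency": "ARPA", "component": "High Performance Computing Systems",
         "activity": "research", "budget_musd": 0.3},
    ]

    totals = {}
    for p in projects:
        validate(p)
        key = (p["component"], p["activity"])
        totals[key] = totals.get(key, 0.0) + p["budget_musd"]
    print(totals)

Under guidelines of this kind, questions such as how much is spent operating supercomputer centers versus conducting research could be answered directly from the aggregated totals, regardless of which agency reports the project.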
While continuing to foster basic research to address scientists' need for ever-increasing computer power to address grand challenge problems, the HPCC program is also taking on the task of developing the specific technologies that will be needed for the NII. In order to be successful at that new task, the program could benefit from a detailed technical agenda identifying and prioritizing the kinds of technologies it will develop in support of the NII. Such a document would better define the program's direction and also serve as a baseline for measuring future progress. The budget information annually reported to Congress on HPCC does not provide enough visibility into how much the government is investing in HPCC or what kinds of research and other activities are being funded. Much of the problem is that no precise guidelines exist for determining what activities to include within the HPCC program. Also, the program's five component categories, while useful in describing the program generally, are not helpful in revealing the specific kinds of activities that are being funded.

We recommend that the Director of OSTP direct the HPCC program managers, in consultation with industry and academic representatives, to develop an explicit HPCC technical agenda, delineating the program's overall strategy and setting development priorities for specific technology areas. This document should specify target amounts of resources to be invested in each priority area and the major results that are expected, so that it can be used as a baseline for measuring progress and controlling costs. We also recommend that the Director of OSTP develop, in consultation with the Office of Management and Budget and the Congress, detailed guidelines for preparing HPCC budgets, including guidance on the types of activities to include in the program and how they should be categorized. OSTP may wish to delegate this task to the NSTC Committee on Information and Communications.

In his September 1994 comments, the Assistant to the President for Science and Technology (Science Advisor) concurred with our findings that a more focused management approach is appropriate, given the new direction of the HPCC program. He said that this more focused approach will include improved consistency in the preparation of HPCC budgets within participating agencies as well as a more detailed and prioritized technical agenda to ensure that the goals of the program are clearly defined and success is clearly measurable. The Science Advisor disagreed with what he perceived as our view that the program be centrally managed and that it have a centrally controlled budget. However, we did not recommend centralizing the program's management or budget; instead, we discussed the advantages of a coordinated approach as well as the drawbacks of central management. We agree that HPCC program goals can be met within the framework of the existing program structure. However, achieving and sustaining the kind of targeted effort now envisioned for HPCC must begin with the identification of specific technical goals and priorities. These specific goals and priorities, once established, can then form an objective framework for making decisions about the types of activities to be funded within the program and the amount of funding to be allocated to each.

In July 1994, a committee of the National Research Council issued an interim report on the HPCC program that raised concerns in many of the same areas that we addressed. The committee, whose study is still ongoing, said it would continue to examine areas such as the potential for developing standard program performance measures for HPCC and the need for greater budget consistency.

Close collaboration with industry is essential to ensure that the HPCC program meets its goal of accelerating the development and widespread use of HPCC technologies.
While industry has been extensively involved in the actual execution of HPCC projects, as the program moves forward it would benefit from partnerships with key industries that could capitalize on HPCC technologies to create new products and services for the NII. Representatives from a variety of companies with a potential interest in HPCC told us they remain uninvolved in the program for several important reasons. They expressed the belief that the program does not address their needs and interests, largely because HPCC managers have not solicited their input in program planning. Also, the NCO, which was established in part to foster industry participation, has not provided industry representatives with needed information or responded to industry initiatives to improve communications between the program and potential industry participants. Given that the administration sees the HPCC program as playing an important role in developing key technologies for the NII, HPCC managers must more effectively promote industry participation.

Since the program's inception, HPCC program documentation has emphasized that industry participation is critical to meeting the program's goals of accelerating the development and widespread application of high performance computing and networks. Now, industry's collaborative role has become even more important in the context of HPCC's new role of supporting development of the NII. Specifically, the HPCC program is now committed to helping the private sector develop new technologies, including applications and services, that will maximize the value of the NII to a broad base of users. These applications include remote medical diagnosis by specialists and experts anywhere in the nation; the delivery and use of environmental information for a broad range of users, such as agricultural workers and truckers; and enhanced educational opportunities in which students could perform science experiments in collaboration with scientists at the national laboratories or visit museums and research centers without leaving their classrooms. In each case, it is envisioned that these applications will be developed by the private sector, with some level of government support. One goal of government collaboration will be to help ensure that issues of accessibility, security, and reliability are addressed.

Industry involvement in the actual execution of the HPCC program has been extensive. At ARPA alone, for example, 43 percent of the HPCC budget goes to companies that have successfully responded to ARPA's requests for research proposals in specific technological areas. DOE also has established cooperative agreements with numerous partners from industry. Nevertheless, HPCC managers have generally not involved industry in planning the HPCC program. At the governmentwide level, a mechanism for obtaining nonfederal advice and evaluation was mandated by the High Performance Computing Act, which directed the President to establish an advisory committee including representatives from industry. According to OSTP officials, the administration is working to get the advisory committee appointed, although concerns about potential conflicts of interest have slowed the effort. Many HPCC agencies have their own advisory committees that review their HPCC programs. These committees have been helpful in planning effective agency programs. A case in point is NASA's program, which was reviewed in 1993 by a NASA Advisory Council Task Force.
The task force reported that the priorities in the agency's HPCC plan did not address the research problems that the aerospace industry considered most critical. NASA responded by soliciting direct industry involvement in reworking its program plan for aerospace. Aerospace industry representatives told us they are encouraged that a revised plan will more fully reflect their interests and concerns.

The NCO, which was established in part to serve as a point of entry for industry into the program, disseminates general information about the program as well as funding opportunities. The NCO recently made this information available electronically over the Internet. In addition, the NCO has been involved in numerous liaison activities with industry, academia, and the public. These activities have included meetings, workshops, and conferences. The NCO has also allowed groups of industry representatives to attend certain designated portions of the HPCC program managers' regular meetings and give brief presentations of their views. Industry representatives whom we contacted agreed that all of these activities are valuable. However, they seek greater opportunities for close collaboration between government and industry in planning program direction. They have proposed that the NCO cooperate in arranging for the HPCC program to participate officially in symposia, in order for industry and academic representatives to meet with program managers to air their views on the direction and priorities of the program. They emphasize that these meetings should provide for a full discussion and consideration of issues of importance to industry, such as how best to invest limited resources. The NCO could perform this function until a permanent advisory committee, which would maintain a more substantial, ongoing dialogue with program management, is appointed.

A major roadblock to broader industry utilization and commercialization of high performance computing technologies is the lack of software and software development tools to take advantage of the power of high performance computers. Currently, only a limited range of applications software is available, and development tools, which are needed to write new applications software, are primitive. Moreover, a lack of standards discourages industry from investing in software development projects that may have a limited market. A greater emphasis by the HPCC program on software could reduce some of the risks for potential industry participants and increase their involvement.

HPCC so far has focused on the grand challenges as target applications. While the grand challenges are important scientific problems, they involve only small communities of scientists working in specialized areas. For example, applications developed in NASA's HPCC program are targeted at aerospace engineers designing and simulating new aircraft. Earth and environmental scientists, likewise, will profit from various HPCC projects supported by NASA, NSF, DOE, and ARPA. As valuable as these lines of effort are, they do not directly address broad areas where HPCC technology can benefit the NII, and industry tends to view them as offering little opportunity for commercialization. One of the most important industry applications of HPCC on the NII will be information processing and management.
A core set of generic software for processing, storing, searching, and retrieving multiple data types from very large databases would have commercial applications ranging from health care to banking. For example, software for handling databases of imagery would enable applications as diverse as remote medical consultations and law enforcement. Software development tools, which would make it easier for software companies to design and develop new applications, might offer a particularly good opportunity to leverage government investment in HPCC. A series of reports by groups of HPCC researchers has identified and prioritized the tools that would be needed to facilitate the development of a broader range of applications software. These include debugging tools, memory management tools, and performance analysis tools, all of which would help to create a more productive software environment. The HPCC program already supports some research in these areas. However, by establishing software development as a priority and devoting more resources to it, HPCC would encourage industry to invest in the development of a wide range of specialized NII applications. Developers have identified the lack of standards as an impediment to more intensive commercial development of HPCC applications software. Agreement on standards would permit commercial software developers to build programs that work on a variety of high performance computers, rather than on only one specific hardware system, which may or may not do well in the marketplace. Broadening the base of computers on which the software will run would expand its potential commercial market, thereby allowing developers to put a much greater effort into building applications software. However, setting standards is a difficult process, requiring a great deal of interaction over time within the HPCC community. Industry representatives agree that the government should not set standards. Industry, they believe, must lead this effort. Nevertheless, the government can play a practical role in supporting standards-setting efforts. The HPCC program already provides funding for several standards-setting activities. For example, several agencies support a project to establish a standard HPCC version of the Fortran programming language. However, industry representatives have urged greater government support for standards-setting activities in order to stimulate commercial software development. Specifically, the HPCC program could fund more workshops where government, academia, and industry can come together to discuss and collaborate on emerging standards. The program could also provide more direct support for researchers to work with industry on evaluating potential standards. It is widely recognized that the HPCC program needs a standing advisory committee that includes representatives from a broad range of potential industry participants. Such a committee would provide the mechanism to sustain an ongoing dialogue between the program and industry. However, in addition to establishing this committee, program officials can take additional steps to promote industry involvement, through cosponsoring symposia with industry and involving industry representatives in the program planning process, in order to forge a true partnership between government and industry. 
We recommend that the Director of OSTP (1) take steps to expedite the appointment of an advisory committee whose membership includes representatives from a wide range of industries, and (2) delegate to the NCO the role of sponsoring symposia where industry can meet with program officials and academia to help define the research priorities of the program. We also recommend that OSTP direct the Director of the NCO to take additional steps to promote industry participation, including involving industry representatives in the program planning process, and providing greater support for software development and standards-setting activities to make it easier for industry to develop applications for deployment on the NII. In his formal comments, the President’s Science Advisor strongly concurred with the recommendation that a private sector advisory committee be established and noted that OSTP was taking the initial steps to do so. The Science Advisor did not comment on our recommendation that the NCO sponsor symposia involving industry, academia, and HPCC program managers. In preliminary discussions on a draft of the report, HPCC program managers maintained that the program had already implemented our recommendation to place greater emphasis on developing software tools and sponsoring standards-setting activities, as documented in the fiscal year 1995 Implementation Plan. We, however, do not agree that a significant shift in emphasis has yet occurred. While the implementation plan recognizes that greater focus on software tools will be required to encourage industry involvement in developing applications, only a small percentage of the budget for the advanced software technology and applications component is allocated to this area. We believe that the program could better leverage federal funding by devoting more resources to activities that would make it easier for private industry to develop a broader range of applications. In its interim report, the National Research Council’s HPCC study committee expressed concerns similar to ours. The committee recommended that an HPCC Advisory Council be appointed immediately to provide broad-based, active input to the HPCC program from industry and academia as well as government. The committee also expressed concerns about the need for software development to catch up with advances that have been made in HPCC hardware development.
Pursuant to a congressional request, GAO reviewed the status of the High Performance Computing and Communications (HPCC) program, focusing on: (1) the effectiveness of the program's management structure in setting goals and measuring progress; and (2) how extensively private industry has been involved in program planning and execution. GAO found that: (1) the Administration is broadening the HPCC role in developing new technology in support of the National Information Infrastructure (NII); (2) industry and academic researchers believe that specific technology areas will need to be targeted to develop support for NII; (3) a more focused HPCC management approach could help ensure that program goals are met; (4) a detailed technical agenda will be needed to identify HPCC priority areas and commit resources to them; (5) inconsistent budget information has made tracking HPCC investments difficult, since participating agencies have diverse methods of identifying and categorizing their HPCC spending; (6) industry participation in HPCC is more important now that the Administration has linked HPCC to the planned NII; and (7) industries that could capitalize on HPCC technologies to create new products and services for NII should be better represented among HPCC program participants.
Prior to the fall of 2005, the U.S. stabilization and reconstruction effort in Iraq lacked a clear, comprehensive, and integrated U.S. strategy. State assessments and other U.S. government reports noted that this hindered the implementation of U.S. stabilization and reconstruction plans. A review of the U.S. mission completed in October 2005 found, among other things, that (1) no unified strategic plan existed that effectively integrated U.S. government political, military, and economic efforts; (2) multiple plans in Iraq and Washington had resulted in competing priorities and funding levels not proportional to the needs of overall mission objectives; (3) focused leadership and clear roles were lacking among State, DOD, and other agencies in the field and in Washington, D.C.; and (4) a more realistic assessment of the capacity limitations of Iraqi central and local government was needed. The study made a series of recommendations that led to the creation of the November 2005 NSVI, including (1) creating a single, joint civil-military operational plan to clarify organizational leads; (2) providing better strategic direction and more coordinated engagement with the Iraqi government and international donors; (3) establishing three mission teams to address political, security, and economic tasks; and (4) establishing provincial reconstruction teams to engage Iraqi leadership and foster flexible reconstruction, local governance, and “bottom-up” economic development. The study also called for a streamlined interagency support office in Washington, D.C., to assist the mission’s working groups and provide needed institutional memory and continuity. In response, the administration created the NSVI in November 2005 to reorganize U.S. government stabilization and reconstruction efforts around three broad tracks—political, security, and economic—and eight strategic objectives (see table 1). Overall, officials in DOD and State identified seven documents that describe the U.S. government strategy for Iraq in addition to the NSVI. The U.S. government uses these documents to plan, conduct, and track different levels of the U.S. stabilization and reconstruction strategy as follows: National/strategic level: The President and the NSC established the desired end-state, goals and objectives, and the integrated approach incorporated in the NSVI. The May 2004 NSPD 36 made State responsible for all U.S. activities in Iraq through its Chief of Mission in Baghdad (Ambassador), with the exception of U.S. efforts relating to security and military operations, which would be the responsibility of DOD. The directive also continued the U.S. Central Command (CENTCOM) responsibility for all U.S. government efforts to organize, equip, and train Iraqi security forces. MNF-I oversees the effort to rebuild the Iraqi security forces through a subordinate command. The National Strategy for Supporting Iraq (NSSI) serves as a management tool to match and coordinate U.S. stabilization and reconstruction needs and priorities and provides updates on activities associated with each strategic objective. Operational level: The Joint Mission Statement clarified the roles and responsibilities of the Chief of Mission in Baghdad and the Commander of MNF-I and established mission milestones and target dates for their achievement. The August 2004 campaign plan elaborated and refined the original plan for transferring security responsibilities to Iraqi forces. 
In April 2006, the Commander of MNF-I and the Chief of Mission in Baghdad issued a new classified Joint Campaign Plan incorporating the changes in organization laid out in the NSVI, although some of the annexes to this campaign plan are being reworked and were not available as of May 2006. Implementation and reporting level: Operations Order 05-03 incorporates revised missions and objectives for the Multinational Corps-Iraq (MNC-I), the MNF-I unit responsible for command and control of operations throughout Iraq. This November 2005 order was issued in anticipation of the new Joint Campaign Plan incorporating the NSVI’s new objectives and organizational arrangements, according to DOD officials. The campaign plans and the operations order also established metrics for assessing their progress in achieving MNF-I’s objectives. State’s 2207 reports track mission activity and funding status by mission objective and funding sector. Figure 1 depicts the relationship of the NSVI and the key supporting strategy documents. In addition to these documents, senior State officials stated that Congressional Budget Justifications and publications on Iraq spending provide additional details on the U.S. government resources, investments, and risk management. DOD officials stated that the department’s quarterly reports on the results of its fiscal year 2005 Iraq Security and Stabilization Fund programs in Iraq also provide information, but DOD did not cite these reports as supporting documentation for the NSVI. The NSVI, issued by the NSC in November 2005, incorporates the same desired end-state for U.S. operations in Iraq that was first established by the Coalition Provisional Authority (CPA) in 2003: a peaceful, united, stable, secure Iraq, well integrated into the international community, and a full partner in the global war on terrorism. Since then, however, the strategy’s underlying security, reconstruction, and economic assumptions have changed in response to changing circumstances (see fig. 2). First, the original plan assumed a permissive security environment that never materialized. Second, the CPA assumed that U.S.-funded reconstruction activities would help restore Iraq’s essential services to prewar levels, but these efforts have failed to achieve that goal. Third, the strategy assumes that the international community and the Iraqi government will help finance Iraq’s development needs; however, these expectations have not yet been met. As a result, it is unclear how the United States will achieve its desired end-state in Iraq given these changes in assumptions and circumstances. According to senior CPA and State officials, in 2003 the CPA assumed that Iraq would have a permissive security environment. CPA expected that a relatively small internal security force would replace the disbanded Iraqi Army and would quickly assume responsibility for providing security from the coalition forces. However, growing insurgent attacks led to (1) the collapse of Iraqi forces in April 2004; (2) the delay of coalition plans to turn responsibility for security over to the new Iraqi security forces beginning in early 2004; and (3) the postponement of plans to draw down U.S. troop levels below 138,000 until the end of 2005. In October 2004, State reported to Congress that the uncertain security situation affected all potential economic and political developments in Iraq and that enhanced Iraqi security forces were critically needed to meet the new threat environment. The coalition’s military commander and the U.S. 
Chief of Mission conducted strategic and programmatic reviews in mid-2004 and reached similar conclusions, noting that the hostile security situation required the creation of substantially larger Iraqi security forces with coalition assistance. As a result, between 2003 and 2006, the projected Iraq security force structure doubled in size, while U.S. appropriations for support of the Iraqi security forces more than quadrupled. CPA projected the need for a security force of about 162,000 personnel (including about 77,000 armed forces and National Guard troops and 85,000 police) in 2003. Current plans call for 325,500 security personnel to be organized under coalition direction, including completing the initial training and equipping of the 137,500 in the Iraqi Armed Forces and the 188,000 police and other interior ministry forces by the end of December 2006. U.S. assistance appropriated for Iraqi security forces and law enforcement has grown from $3.24 billion in January 2004 to approximately $13.7 billion in June 2006. As GAO recently reported, the insurgency remained strong and resilient through 2005 and early 2006, the intensity and lethality of attacks grew, and the insurgency threatens to undermine the development of effective Iraqi governmental institutions. The U.S. strategy initially assumed that its U.S.-funded reconstruction activities would help restore Iraq’s essential services—including oil production, electricity generation, and water treatment—to prewar levels. However, U.S. efforts have yet to restore Iraq’s essential services to prewar levels, and efforts to achieve these goals have been hindered by security, management, and maintenance challenges. As a result, the United States has yet to demonstrate that it has made a difference in the Iraqi people’s quality of life. According to senior CPA and State officials responsible for the strategy, the CPA’s 2003 reconstruction plan assumed (1) that creating or restoring basic essential services for the Iraqi people took priority over job creation and the economy and (2) that the United States should focus its resources on long-term infrastructure reconstruction projects because of the expertise the United States could provide. According to the senior CPA official tasked with developing the reconstruction plan, CPA drew up a prioritized list of more than 2,300 construction projects in 10 sectors to be completed in about 3 years, which were to be funded by the $18.4 billion made available in the fiscal year 2004 supplemental appropriation for the 2004 Iraq Relief and Reconstruction Fund (IRRF2). The U.S. reconstruction effort focused primarily on building or restoring essential services to prewar levels—or to a standard acceptable to and accessible by all Iraqi citizens—over the long term, with less emphasis on more immediate development tasks. CPA initially allocated about two-thirds of the IRRF2 funds to restore essential services in the oil, water, and electricity sectors, while more immediate projects in democracy building, private sector development, and the employment sector received about 3 percent. However, the coalition’s decision in November 2003 to accelerate the return of power to a sovereign Iraqi interim government and changes in the security situation altered these assumptions, leading the U.S. 
administration to reallocate a total of $3.5 billion between January 2004 and April 2006 from the water resources and sanitation and electric sectors to security, law enforcement, justice, and democracy building and employment programs. For example, the mission reallocated over $555 million in IRRF2 funds to democracy programs and reallocated $105 million to improve productivity and employment in the agriculture sector to support the Iraqi government as it prepared for elections. A World Bank report stated that the agriculture sector employed 18 percent of Iraq’s labor force and accounted for about 10 percent of gross domestic product in 2004. Before this time, the United States had devoted no IRRF2 resources to the agricultural sector. U.S. expectations about Iraq’s capacity to manage and sustain its own reconstruction efforts have not been realized, resulting in greater U.S. emphasis on capacity development. As reported in prior GAO reports, the U.S. reconstruction effort has encountered difficulties in maintaining new and rehabilitated infrastructure, resulting in some U.S.-funded projects becoming damaged or inoperable after being turned over to the Iraqis. For example, as of June 2005, U.S.-funded water and sanitation projects representing about $52 million of approximately $200 million spent on completed projects were inoperable or were operating at lower than normal capacity. Recent U.S. mission assessments have noted the Iraqi government’s limited capacity to provide services to the Iraqi people due to weak technical expertise, limitations in managers’ skills and training, and an inability to identify and articulate strategic priorities, among other factors. As a result, the administration reallocated $170 million for government capacity building programs and $133 million for infrastructure operations and maintenance needs in 2005 and early 2006. As GAO has reported previously, these challenges contributed to the cancellation or delay of projects in the essential services sectors, affecting U.S. efforts to achieve targets in the oil, electricity, and water sectors and undermining efforts to improve the quality of life for the Iraqi people. A March 2006 poll of Iraqi citizens indicated that over half the respondents thought Iraq was heading in the wrong direction. Moreover, the poll reported that, over the preceding year, growing numbers of respondents believed that the security situation, the provision of electricity, the prevalence of corruption, and the state of the economy had worsened. From the outset of the reconstruction and stabilization effort, the U.S. strategy assumed that the Iraqis and the international community would help finance Iraq’s developmental needs. However, these expectations have not yet been met, and Iraq’s estimated future reconstruction needs vastly exceed what has been offered to date. According to a CPA report and senior CPA and State officials, the 2003 CPA plan assumed that the Iraqis and the international community would support development needs that were not financed by the United States. For example, a CPA report assumed that Iraqi oil revenues could help pay for reconstruction costs because it estimated that Iraq’s oil production would increase to about 2.8 to 3.0 million barrels per day (mbpd) by the end of 2004, a one-third increase over 2002 levels, and generate about $15 billion in oil export revenue for the year. These expectations about Iraq’s ability to contribute to and manage its own reconstruction have not been realized in practice. U.S. 
agency documents estimated Iraq’s 2003 actual prewar crude oil production at 2.6 mbpd. In March 2006, State reported that oil production was about 2 mbpd. A combination of insurgent attacks on crude oil and product pipelines, dilapidated infrastructure, and poor operations and maintenance has hindered domestic refining and has required Iraq to import significant portions of liquefied petroleum gas, gasoline, kerosene, and diesel. In addition, although the capacity for export is theoretically as high as 2.5 mbpd, export levels averaged about 1.4 mbpd in 2005. Shortfalls in expected oil production levels and increased security spending contributed to reductions in Iraq’s own projections of how much of the budget would be available to contribute to its own reconstruction. In 2005, Iraq’s government budgeted approximately $5 billion for capital expenditures, but a senior U.S. mission official stated that the government managed to spend only a few hundred million dollars by the end of the year. He attributed this to Iraqi ministries’ lack of expertise to manage projects, write contracts, and provide effective controls on the contracting process. The strategy’s assumptions about the need for extensive international donor support for Iraq’s reconstruction have not significantly changed since 2003, although the estimated cost of restoring Iraq’s infrastructure has grown significantly since October 2003. At that time, a World Bank, United Nations, and CPA assessment initially estimated that it would cost about $56 billion to meet reconstruction needs across a variety of sectors in Iraq. The United States committed about $24 billion for relief and reconstruction in fiscal years 2003 and 2004, with the expectation that the Iraqis and the international community would provide the rest. Other foreign donors pledged about $13.6 billion to rebuild Iraq. According to State documents, international donors have provided over $3.5 billion in the form of multilateral and bilateral grants as of April 2006. About $10 billion, or 70 percent, of the pledged amount is in the form of loans, primarily from the World Bank, the International Monetary Fund (IMF), and Japan. As GAO has reported previously, however, Iraq currently owes a combined $84 billion to victims of its invasion of Kuwait and other external creditors, which may limit its capacity to assume more debt. Moreover, Iraq's needs are greater than originally anticipated due to severely degraded infrastructure, postconflict looting and sabotage, and additional security costs. In the oil sector alone, Iraq will now likely need an estimated $30 billion over the next several years to reach and sustain an oil production capacity of 5 million barrels per day, according to industry experts and U.S. officials. For the electricity sector, Iraq projects that it will need $20 billion through 2010 to boost electrical capacity, according to the Department of Energy’s Energy Information Administration. While the NSVI does not identify the magnitude of additional financing needed, it acknowledges that there is “room for the international community to do more.” The NSVI aims to improve U.S. strategic planning for Iraq; however, the NSVI and its supporting documents are incomplete because they do not fully address the six desirable characteristics of effective national strategies that GAO has identified through its prior work. We used these six characteristics to evaluate the NSVI and the supporting documents that DOD and State officials said encompassed the U.S. 
strategy for rebuilding and stabilizing Iraq. As figure 3 shows, the strategy generally addresses three of the six characteristics but only partially addresses three others, limiting its usefulness to guide agency implementation efforts and achieve desired results. Moreover, since the strategy is dispersed among several documents instead of one, its effectiveness as a planning tool for implementing agencies and for informing Congress about the pace, costs, and intended results of these efforts is limited. The strategy generally addresses three of the six characteristics. As figure 3 shows, the strategy provides (1) a clear statement of its purpose and scope; (2) a detailed discussion of the problems the strategy intends to address; and (3) an explanation of its goals, subordinate objectives, and activities. This characteristic addresses why the strategy was produced, the scope of its coverage, and the process by which it was developed. A complete description of purpose, scope, and methodology makes the document more useful to organizations responsible for implementing the strategies, as well as to oversight organizations such as Congress. The strategy identifies U.S. involvement in Iraq as a vital national interest, identifies the risks and threats facing coalition forces, and discusses overarching U.S. political, security, and economic objectives. Specifically, the NSVI identifies U.S. government efforts to rebuild and stabilize Iraq in terms of three overarching political, security, and economic objectives and addresses the assumptions that guided its development. For example, to help Iraq achieve the strategic goal of forging a national compact for democratic government, the strategy’s subordinate objectives state that the United States would help promote transparency in the executive, legislative, and judicial branches of government, and help build national institutions that transcend regional and sectarian interests, among other activities. To help achieve another strategic goal, building government capacity and providing essential services, the strategy also states that the U.S. government is helping to achieve this objective by rehabilitating critical infrastructure in the fuel and electric power sectors. It is also rehabilitating schools; providing new textbooks, computers, and materials; and training teachers and school administrative staff. One supporting document, State’s 2207 report to Congress, provides additional supporting details and data for the specific activities and projects funded through the $18.4 billion in fiscal year 2004 reconstruction funds. This characteristic addresses the particular risks and threats the strategy is directed at, as well as risk assessment of the threats to and vulnerabilities of critical assets and operations. Specific information on both risks and threats helps responsible parties better implement the strategy by ensuring that priorities are clear and focused on the greatest needs. The NSVI and the supporting documents generally address some of the problems, risks, and threats found in Iraq. For example, the NSVI identifies the risks posed by the insurgency and identifies three basic types of insurgents—rejectionists, supporters of former Iraqi President Saddam Hussein, and terrorists affiliated with or inspired by al Qaeda—and the different actions needed to confront each one. 
In addition, various supporting documents provide additional information on the threats the Shi’ite militias present and the corruption that could affect the Iraqi government’s ability to become self-reliant, deliver essential services, reform its economy, strengthen rule of law, maintain nonsectarian political institutions, and increase international support. This characteristic addresses what the national strategy strives to achieve and the steps needed to garner those results, as well as the priorities, milestones, and outcome-related performance measures to gauge results. Identifying goals, objectives, and outcome-related performance measures aids implementing parties in achieving results and enables more effective oversight and accountability. In addition, identifying and measuring outcome-related performance rather than output measures allows for more accurate measurement of program results and assessment of program effectiveness. The strategy generally addresses goals and subordinate objectives by identifying eight strategic objectives (pillars), 46 subordinate objectives, or “lines of action,” and numerous project activities but only partially addresses outcome-related performance measures. The supporting strategy documents also provide information on how progress will be monitored and reported. In addition, the NSVI identifies the process for monitoring and reporting on progress via interagency working groups. It also identifies some metrics to assess progress, such as the number of Iraqis willing to participate in the political process, the quality and quantity of the Iraqi units trained, and barrels of oil produced and exported. The NSVI also notes that detailed metrics on the results of training Iraqi security forces and improvements in the economy and infrastructure are collected and available elsewhere but does not include them in the strategy. Supporting documents also identify some performance measures. The metrics the strategy uses to report progress make it difficult to determine the impact of the U.S. reconstruction effort. We reported previously that in the water resources and sanitation sector little was known about how U.S. efforts were improving the amount and quality of water reaching Iraqi households or their access to sanitation services, because the U.S. government tracked only the number of projects completed or under way. For instance, as of March 2006, Iraq had the capacity to produce 1.1 million cubic meters of water per day, but this level overestimates the amount of potable water reaching Iraqi households. U.S. officials estimate that 60 percent of water treatment output is lost due to leakage, contamination, and illegal connections. Applying that loss rate to the stated capacity implies that only about 0.44 million cubic meters per day may actually reach households as potable water. The U.S. mission reported in December 2005 that it had developed a set of metrics to better estimate the potential impact of U.S. water and sanitation reconstruction efforts on Iraqi households, but acknowledged that it is impossible to measure how much water Iraqis are actually receiving or whether the water is potable. The report notes that without the comprehensive data these key measures would provide, mission efforts to accurately assess the impact of U.S. reconstruction efforts on water and sanitation services are seriously limited. In April 2006, we reported that in the electric sector U.S. agencies primarily reported on generation measures such as levels of added or restored generation capacity and daily power generation of electricity; numbers of projects completed; and average daily hours of power. 
However, these data did not show (1) whether the power generated was uninterrupted for the period specified (e.g., average number of hours per day), (2) if there were regional or geographic differences in the quantity of power generated, or (3) how much power was reaching intended users. Moreover, State’s 2005 assessment of its reconstruction effort noted that the reconstruction effort lacked measurable milestones that tied short-term program objectives to long-term strategic goals. As figure 3 shows, the NSVI and supporting documents only partially (1) identify what the strategy will cost and the sources of financing; (2) delineate the roles and responsibilities of key U.S. government agencies, and the mechanisms for coordination; and (3) describe how the strategy will be integrated among U.S. entities, the Iraqi government, and international organizations. This characteristic addresses what the strategy will cost; where resources will be targeted to achieve the end-state; and how the strategy balances benefits, risks, and costs. Guidance on costs and resources needed using a risk management approach helps implementing parties allocate resources according to priorities; track costs and performance; and shift resources, as appropriate. Such guidance also would assist Congress and the administration in developing a more effective strategy to achieve the desired end-state. The strategy identifies neither the current and future costs of implementing the strategy nor the sources of funding (U.S. government, international donors, or Iraqi government) needed to achieve U.S. political, security, and economic objectives in Iraq. These costs would include the costs of maintaining U.S. military operations, including the costs to repair and replace equipment used during these operations, building the capacity of key national ministries and the 18 provincial governments, completing the U.S. program for training and equipping Iraqi security forces, and restoring essential services. For example, between fiscal years 2003 and 2006, about $311 billion was allocated to support U.S. objectives in Iraq. As of June 2006, approximately $276 billion had been provided to support U.S. military operations and forces, which currently number about 130,000 troops, and over $34 billion had been provided to develop capable Iraqi security forces, restore essential services, and rebuild Iraqi institutions. The administration has also requested about $51 billion more for military and reconstruction operations for fiscal year 2007, including $50 billion that the Office of Management and Budget terms “bridge funding” to continue the global war on terrorism in Iraq and Afghanistan and an additional $771 million for reconstruction operations in Iraq. These cost data are not included in the strategy. As a result, neither DOD nor Congress can reliably determine the cost of the war, nor do they have details on how the appropriated funds are being spent or historical data useful in considering future funding needs. Moreover, the strategy states that the war in Iraq yields benefits in the global war on terrorism but does not discuss its substantial financial and other costs. In addition, GAO has previously found numerous problems in DOD’s processes for accounting for and reporting cost data for its operations in Iraq, which constitute about 90 percent of estimated total U.S. government costs. Given the current fiscal challenges facing the U.S. government, a complete assessment of these costs would help clarify the future costs of U.S. involvement in Iraq. 
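As a rough consistency check on the figures above (our own back-of-the-envelope reconciliation, not a calculation drawn from the strategy documents), the two components cited account for essentially the entire allocation:

\[ \$276\ \text{billion (military operations)} + \$34\ \text{billion (security forces, services, institutions)} \approx \$310\ \text{billion} \approx \$311\ \text{billion allocated,} \]

with the small difference attributable to rounding in the reported figures.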
The strategy also fails to project future costs and contributions from non-U.S. sources. It does not address the extent to which the Iraqi government will contribute financially to its own rebuilding effort. While supporting documents provide some information on current spending plans and allocations, the dispersion of this budget information across numerous budget documents makes it difficult to analyze how the objectives of the NSVI will be funded. For example, State’s quarterly 2207 reports to Congress describe the current status of the Iraq reconstruction funding allocations and the status of international donations for reconstruction. In February 2006, State issued two supplemental documents that provide some additional information on how IRRF2 funds and fiscal year 2006 and 2007 budget appropriations were to be spent across the NSVI’s three tracks (political, security, and economic). Other supporting documents partially address these resource issues but do not identify future resource needs. The unclassified version of the MNF-I/U.S. Embassy Baghdad Joint Mission Statement on Iraq indicates that budgetary and human capital resources will be needed, and funding is expected from Congress and the Iraqi government. However, it does not identify the specific amounts needed to meet key U.S. goals. The 2207 reports discuss international donor contribution levels and report on the progress of projects funded with international grants but do not relate these amounts to Iraqi requirements. In addition, none of the strategy documents takes into account the total cost of Iraq’s reconstruction, which will be more than originally anticipated, due to severely degraded infrastructure, postconflict looting and sabotage, and additional security costs. Initial assessments in 2003 identified a total of $56 billion in Iraqi reconstruction needs in various sectors, but more recent cost estimates suggest that the oil infrastructure and electric sectors alone will require about $50 billion in the next several years. These funding concerns have grown as resources have been shifted from reconstruction projects to security needs. For example, between January 2004 and April 2006, the administration reallocated $3.5 billion from the water resources and sanitation and electric sectors to security; justice, public safety, and civil society; democracy building activities; and other programs. This contributed to the cancellation, delay, or scaling back of water and electricity projects and will complicate efforts to achieve the objectives for these essential service sectors. Although the NSVI acknowledges that rampant corruption is a challenge threatening the success of U.S. reconstruction and stabilization efforts, the strategy does not address how reconstruction efforts should take the risk of corruption into account when assessing the costs of achieving U.S. objectives in Iraq. For instance, IMF, the World Bank, Japan, and European Union officials cite corruption in the oil sector as an especially serious problem. In addition, according to State officials and reporting documents, about 10 percent of refined fuels are diverted to the black market, and about 30 percent of imported fuels are smuggled out of Iraq and sold for a profit. By not addressing this risk, the strategy cannot provide adequate guidance to implementing parties trying to assess priorities and allocate resources. This characteristic addresses which U.S. 
organizations will implement the strategy, their roles and responsibilities, and the mechanisms for coordinating their efforts. Addressing this characteristic fosters coordination and enhances both implementation and accountability. The NSVI and the supporting documents partially address the roles and responsibilities of specific U.S. government agencies and offices and the process for coordination. To organize U.S. efforts in Iraq, the NSVI breaks down the political, security, and economic tracks of the strategy into eight strategic objectives (pillars) that have lines of action assigned to military and civilian units in Iraq. Each strategic objective has a corresponding interagency working group to coordinate policy, review and assess the progress, develop new proposals for action, and oversee implementation of existing policies. National Security Presidential Directive 36 made the Department of State responsible for nonsecurity aspects of reconstruction and laid out key roles for the U.S. Chief of Mission in Baghdad and CENTCOM. It directed that the Commander of CENTCOM would, with the Chief of Mission’s policy guidance, direct all U.S. government efforts in support of training and equipping Iraqi security forces. It also established the roles for the mission’s two supporting offices: the Iraq Reconstruction Management Office and the Projects and Contracting Office. Although the NSVI organizes the U.S. strategy along three broad tracks and eight strategic objectives, it does not clearly identify the roles and responsibilities of specific federal agencies for achieving these specific objectives, or how disputes among them will be resolved. For example, GAO found only one reference in the NSVI to the reconstruction responsibilities of a particular U.S. government agency in Iraq when it noted that the Federal Bureau of Investigation and other U.S. agencies would assist an Iraqi major crimes task force in the investigation of terrorist attacks and assassinations. Thus, it is not clear which agency is responsible for implementing the overlapping activities listed under the eight strategic objectives. For instance, one activity is to promote transparency in the executive, legislative, and judicial branches of the Iraqi government; however, the strategy does not indicate which agency is responsible for implementing this activity, or whom to hold accountable for results. Moreover, little guidance is provided to assist implementing agencies in resolving conflicts among themselves, as well as with other entities. In our prior work, we found that delays in reconstruction efforts sometimes resulted from lack of agreement among U.S. agencies, contractors, and Iraqi authorities about the scope and schedule for the work to be performed. For example, in the water resources and sanitation sector, Iraqi and U.S. officials’ disagreements over decisions to repair or replace treatment facilities or to use brick instead of concrete have delayed project execution. This characteristic addresses how a national strategy relates to the goals, objectives, and activities of other strategies; to other government and international entities; and to relevant documents from implementing organizations. A clear relationship between the strategy and other critical implementing documents helps agencies and other entities understand their roles and responsibilities, fosters effective implementation, and promotes accountability. 
The NSVI and supporting documents partially address how the strategy relates to the goals, objectives, and activities of other international donors and the Iraqi government. For instance, the NSVI and supporting documents identify the need to integrate the efforts of the coalition, the Iraqi government, and other nations but do not discuss how the U.S. goals and objectives are integrated with the strategies, goals, and objectives of the international donors and the Iraqi government. The NSVI does identify Web sites where other documents can be obtained but does not address how these documents are integrated with the NSVI. GAO has previously reported that victory in Iraq cannot be achieved without an integrated U.S., international, and Iraqi effort to meet the political, security, and economic needs of the Iraqi people. However, the strategy has only partially addressed how it relates to the objectives and activities of Iraq and the international community and does not address what it expects the international community or the Iraqi government to contribute toward achieving future objectives. This limits the strategy’s ability to address the challenge of conducting an integrated operation that depends on Iraq’s limited capacity to contribute to its own reconstruction. For example, GAO has reported that Iraq’s weak national and provincial governments limit Iraq’s ability to operate and sustain new and rehabilitated infrastructure projects. This has contributed to the failure to achieve key reconstruction goals. The dispersion of information across several documents limits the strategy’s overall coherence and effectiveness as a management tool for implementing agencies and as an oversight tool for informing Congress about the pace, costs, and results of these efforts. Since these other documents were written by different agencies at different points in time, the information in them is not directly comparable, which diminishes their value. State and DOD have separately released budget requests totaling about $121 billion to continue U.S. stabilization and reconstruction programs through fiscal year 2007. However, these documents do not provide an estimate or range of estimates as to what it will cost to achieve U.S. objectives in Iraq in the short, medium, and long term. In addition, these documents further disperse information about how the government is addressing the key elements of an effective national strategy for Iraq. The November 2005 NSVI represents the results of efforts to improve the strategic planning process for the challenging and costly U.S. mission in Iraq. Although the NSVI is an improvement over earlier efforts, it and the supporting documents are incomplete. The desired end-state of the U.S. strategy has remained unchanged since 2003, but the underlying assumptions have changed in response to changing security and economic conditions, calling into question the likelihood of achieving the desired end-state. Moreover, the collective strategy neither identifies the U.S. and other resources needed to implement the objectives nor addresses its integration with the efforts and funding plans of the Iraqi government or the international community. The formation of the new Iraqi government provides an opportunity for the U.S. government to reexamine its strategy and more closely align its efforts and objectives with those of the Iraqi people and other donors. 
The dispersion of information across the NSVI and seven supporting documents further limits the strategy’s usefulness as a tool for planning and reporting on the costs, progress, and results of the U.S. mission in Iraq. Since the current disparate reporting mechanisms do not provide a comprehensive assessment of U.S. government efforts in Iraq, Congress may lack critical information to judge U.S. progress in achieving objectives and addressing key political, security, and economic challenges. In addition, the strategy could be more useful to implementing agencies and Congress if it fully addressed these characteristics in a single document. To help improve the strategy’s effectiveness as a planning tool and to improve its usefulness to Congress, this report recommends that the National Security Council, in conjunction with DOD and State, complete the strategy by addressing all six characteristics of an effective national strategy in a single document. In particular, the revised strategy should address the current costs and future military and civilian resources needed to implement the strategy, clarify the roles and responsibilities of all U.S. government agencies involved in reconstruction and stabilization efforts, and detail potential Iraqi and international contributions to future military and reconstruction needs. We provided a draft of this report to the NSC and to the Departments of Defense and State for their review and comment. We received a written response from State that is reprinted in appendix III. State also provided us with technical comments and suggested wording changes that we incorporated as appropriate. DOD deferred comment to the NSC; its letter is reprinted in appendix IV. We did not receive oral or written comments from the NSC in response to our request. State did not comment on our report recommendations. In commenting on a draft of this report, State asserted that our draft report misrepresented the NSVI’s purpose—to provide the public a broad overview of the U.S. strategy in Iraq and not to provide details available elsewhere. We acknowledge that the purpose of the NSVI was to provide the public with an overview of a multitiered, classified strategy and not to set forth every detail on information readily available elsewhere. Our analysis was not limited to the publicly available, unclassified NSVI. With input from DOD and State, we included in our assessment all the classified and unclassified documents that collectively define the U.S. strategy in Iraq: (1) the National Security Presidential Directive 36 (May 2004), (2) the Multinational Forces-Iraq (MNF-I) Campaign Plan (August 2004), (3) the MNF-I/U.S. Embassy Baghdad Joint Mission Statement on Iraq (December 2005), (4) the Multinational Corps-Iraq Operation Order 05-03 (December 2005), (5) the National Strategy for Supporting Iraq (updated January 2006), (6) the quarterly State Section 2207 reports to Congress (through April 2006), and (7) the April 2006 Joint Campaign Plan issued by the Chief of Mission and the Commander of the MNF-I. Collectively, these documents still do not address all the key characteristics of an effective national strategy. However, we refined our recommendation to focus on the need to improve the U.S. strategy for Iraq. We are sending copies of this report to interested congressional committees. We will also make copies available to others on request. In addition, this report is available on GAO’s Web site at http://www.gao.gov. 
If you or your staff have any questions, please contact me at (202) 512-8979 or christoffj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix V. As part of GAO’s review of reconstruction and stabilization efforts in Iraq initiated under the Comptroller General’s authority, we examined the U.S. strategy for achieving victory in Iraq. Specifically, we (1) assess the evolution of the U.S. national strategy for Iraq in response to changing political, security, and economic circumstances and (2) evaluate whether the November 2005 National Strategy for Victory in Iraq (NSVI) and its supporting documents include the desirable characteristics of an effective national strategy. In this report, the NSVI and its supporting documents are referred to as the U.S. strategy for Iraq. To describe the goals and objectives of the U.S. national strategy for Iraq and its relationship to other existing strategy documents, we interviewed Coalition Provisional Authority (CPA), U.S. government, and Iraqi officials, and reviewed planning and reporting documents obtained from the former CPA; the Departments of State (State) and Defense (DOD); the U.S. Agency for International Development; the U.S. mission in Baghdad; and the Multinational Forces-Iraq (MNF-I). We analyzed records, reports, and data from the Iraqi government and from U.S. government and military officials in Washington, D.C., and Baghdad, Iraq. We also examined the reports of other oversight entities that performed internal control and management reviews, including audits by the Special Inspector General for Iraq Reconstruction and internal U.S. Mission Baghdad reports and briefings. We also collected and reviewed documents from the United Nations, the World Bank, the International Monetary Fund, and the Iraqi government’s National Development Strategy for 2005-2007. We evaluated the NSVI along with seven related classified and unclassified supporting documents identified as having key details about the strategy by State’s Office of the Coordinator for Iraq, the Bureau of Near Eastern Affairs, and by DOD’s Defense Reconstruction Support Office and Near Eastern and South Asian Affairs office. These included (1) the National Security Presidential Directive 36 (May 2004), (2) the MNF-I Campaign Plan (August 2004), (3) the MNF-I/U.S. Embassy Baghdad Joint Mission Statement on Iraq (December 2005), (4) the Multinational Corps-Iraq Operation Order 05-03 (December 2005), (5) the National Strategy for Supporting Iraq (updated January 2006), (6) State’s quarterly 2207 reports to Congress (January and April 2006), and (7) the April 2006 Joint Campaign Plan issued by the Chief of Mission and the Commander of the MNF-I. In particular, we discussed the relationship between the NSVI, the National Strategy for Supporting Iraq (NSSI), and the MNF-I Campaign Plan with the Secretary of State’s Special Coordinator for Iraq and his staff, National Security Council staff, and DOD’s Office of the Secretary of Defense and the Defense Reconstruction Support Office. In addition to these documents, we also reviewed other U.S. government documents not identified as key supporting documents by State and DOD officials but that also provide useful information, including the fiscal year 2006 supplemental funding request, the fiscal year 2007 budget request, and two reports issued by State in February 2006: Rebuilding Iraq: U.S. 
Achievements Through the Iraq Relief and Reconstruction Fund; and Advancing the President’s National Strategy for Victory in Iraq: Funding Iraq’s Transition to Self-Reliance in 2006 and 2007 and Support for the Counterinsurgency Campaign. We also reviewed DOD’s periodic reports on the status of its security and stability programs financed by the fiscal year 2005 supplemental Iraq Security and Stabilization Fund (ISSF) and DOD’s report to Congress under Section 1227 of the National Defense Authorization Act for Fiscal Year 2006 (Pub. L. No. 109-163). Finally, we reviewed the NSVI for consistency with the administration’s National Security Strategy of the United States of America released in March 2006. To assess whether the NSVI contains all the desirable characteristics of an effective national strategy, we first developed a checklist using the six desirable characteristics of an effective national strategy developed in prior GAO work as criteria. Three analysts independently assessed two selected strategy documents using the checklist to verify its relevance and then convened as a panel to test their ability to apply the checklist to the information contained in the documents. The team concluded that the checklist was relevant and appropriate for assessing the NSVI. The three analysts independently assessed the NSVI and recorded the results on separate checklists and then met as a panel to reconcile the differences in their scores. A separate panel of three other analysts also independently assessed the NSVI using the same methodology, and then the two panels met as a group to discuss similarities and resolve differences in their scoring. In addition, the first panel of three analysts evaluated seven additional documents applying the same criteria in the checklist. On the basis of these evaluations, we developed a consolidated summary of the extent to which the NSVI and the supporting documents addressed the 27 elements and six characteristics of an effective national strategy. These results are presented in figure 3 of this report. We gave each of the 27 elements under the six characteristics an individual rating of “addresses,” “partially addresses,” or “does not address.” According to our methodology, a strategy “addresses” an element of a characteristic when it explicitly cites all parts of the element, and the document has sufficient specificity and detail. Within our designation of “partially addresses,” there is wide variation between a strategy that addresses most parts of an element and one that addresses few parts. A strategy “does not address” an element of a characteristic when it does not explicitly cite or discuss any parts of the element of that characteristic or when any implicit references are too vague or general to be useful. See appendix II for a more detailed description of the six characteristics. We further evaluated the seven related classified and unclassified documents that State and DOD officials said provided key details about the strategy. Three analysts evaluated each of these documents using the same methodology described above. We conducted our review from October 2005 through June 2006 in accordance with generally accepted government auditing standards. 
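To make the rating mechanics described above concrete, the sketch below models how element-level ratings from independent analysts might be reconciled and rolled up into a characteristic-level summary. This is an illustrative model only, not GAO’s actual instrument: the reconciliation rule (adopt the most common rating, breaking ties conservatively in place of the panel discussions described above), the rollup rule, and the sample ratings are all assumptions introduced for this example.

    from collections import Counter

    # Ratings ordered from least to most favorable.
    RATINGS = ["does not address", "partially addresses", "addresses"]

    def reconcile(analyst_ratings):
        """Combine independent analyst ratings for one element.

        Assumption: adopt the most common rating; a tie stands in for
        the panel discussion GAO describes and is resolved here by
        taking the less favorable rating.
        """
        counts = Counter(analyst_ratings).most_common()
        if len(counts) > 1 and counts[0][1] == counts[1][1]:
            return min(analyst_ratings, key=RATINGS.index)
        return counts[0][0]

    def characteristic_summary(element_ratings):
        """Roll reconciled element ratings up to one characteristic.

        Assumption: "addresses" only if every element is addressed,
        "does not address" only if no element is addressed at all,
        and "partially addresses" otherwise.
        """
        if all(r == "addresses" for r in element_ratings):
            return "addresses"
        if all(r == "does not address" for r in element_ratings):
            return "does not address"
        return "partially addresses"

    # Hypothetical ratings for three elements of one characteristic,
    # each element rated independently by three analysts.
    elements = [
        ["addresses", "addresses", "partially addresses"],
        ["partially addresses", "partially addresses", "addresses"],
        ["addresses", "addresses", "addresses"],
    ]
    reconciled = [reconcile(e) for e in elements]
    print(characteristic_summary(reconciled))  # "partially addresses"

Under the actual methodology, ties and borderline cases were resolved through panel discussion rather than by rule, so the tie-breaking logic above is purely illustrative.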
In a prior report, GAO identified six desirable characteristics of an effective national strategy that would enable its implementers to effectively shape policies, programs, priorities, resource allocations, and standards and that would enable federal departments and other stakeholders to achieve the identified results. GAO further determined in that report that national strategies with the six characteristics can provide policy makers and implementing agencies with a planning tool that can help ensure accountability and more effective results. To develop these six desirable characteristics of an effective national strategy, GAO reviewed several sources of information. First, GAO gathered statutory requirements pertaining to national strategies, as well as legislative and executive branch guidance. GAO also consulted the Government Performance and Results Act of 1993, general literature on strategic planning and performance, and guidance from the Office of Management and Budget on the President’s Management Agenda. In addition, among other things, GAO studied past reports and testimonies for findings and recommendations pertaining to the desirable elements of a national strategy. Furthermore, we consulted widely within GAO to obtain updated information on strategic planning, integration across and between the government and its partners, implementation, and other related subjects. GAO developed these six desirable characteristics based on their underlying support in legislative or executive guidance and the frequency with which they were cited in other sources. GAO then grouped similar items together in a logical sequence, from conception to implementation. Table 2 provides these desirable characteristics and examples of their elements. The following sections provide more detail on the six desirable characteristics. This characteristic addresses why the strategy was produced, the scope of its coverage, and the process by which it was developed. For example, a strategy should discuss the specific impetus that led to its being written (or updated), such as statutory requirements, executive mandates, or other events like the global war on terrorism. Furthermore, a strategy would enhance clarity by including definitions of key, relevant terms. In addition to describing what it is meant to do and the major functions, mission areas, or activities it covers, a national strategy would ideally address its methodology. For example, a strategy should discuss the principles or theories that guided its development, the organizations or offices that drafted the document, or working groups that were consulted in its development. This characteristic addresses the particular national problems and threats at which the strategy is directed. Specifically, this means a detailed discussion or definition of the problems the strategy intends to address, their causes, and operating environment. In addition, this characteristic entails a risk assessment, including an analysis of the threats to and vulnerabilities of critical assets and operations. If the details of these analyses are classified or preliminary, an unclassified version of the strategy should at least include a broad description of the analyses and stress the importance of risk assessment to implementing parties. A discussion of the quality of data available regarding this characteristic, such as known constraints or deficiencies, would also be useful. 
The third characteristic (goals, subordinate objectives, activities, and performance measures) addresses what the national strategy strives to achieve and the steps needed to garner those results, as well as the priorities, milestones, and performance measures to gauge results. At the highest level, this could be a description of an ideal end-state, followed by a logical hierarchy of major goals, subordinate objectives, and specific activities to achieve results. In addition, it would be helpful if the strategy discussed the importance of implementing parties' efforts to establish priorities, milestones, and performance measures, which help ensure accountability. Ideally, a national strategy would set clear desired results and priorities, specific milestones, and outcome-related performance measures while giving implementing parties flexibility to pursue and achieve those results within a reasonable time frame. If significant limitations on performance measures exist, other parts of the strategy should address plans to obtain better data or measurements, such as national standards or indicators of preparedness. The fourth characteristic (resources, investments, and risk management) addresses what the strategy will cost, the sources and types of resources and investments needed, and where those resources and investments should be targeted. Ideally, a strategy would also identify appropriate mechanisms to allocate resources. Furthermore, a national strategy should elaborate on the risk assessment mentioned earlier and give guidance to implementing parties to manage their resources and investments accordingly. It should also address the difficult but critical issues of who pays and how such efforts will be funded and sustained in the future. A strategy should also include a discussion of the types of resources required, such as budgetary, human capital, information, information technology (IT), research and development (R&D), procurement of equipment, or contract services. A national strategy should also discuss linkages to other resource documents, such as federal agency budgets or human capital, IT, R&D, and acquisition strategies. Finally, a national strategy should discuss in greater detail how risk management will aid implementing parties in prioritizing and allocating resources, including how this approach will create society-wide benefits and balance them against the cost to society. Related to this, a national strategy should discuss the economic principle of risk-adjusted return on resources. The fifth characteristic (organizational roles, responsibilities, and coordination) addresses what organizations will implement the strategy, their roles and responsibilities, and mechanisms for coordinating their efforts. It helps answer the question of who is in charge during times of crisis and during all phases of the effort to achieve victory in Iraq: prevention, vulnerability reduction, and response and recovery. This characteristic entails identifying the specific federal departments, agencies, or offices involved, as well as the roles and responsibilities of the private and international sectors. A strategy would ideally clarify implementing organizations' relationships in terms of leading, supporting, and partnering. In addition, a strategy should describe the organizations that will provide the overall framework for accountability and oversight, such as the National Security Council, the Office of Management and Budget, Congress, or other organizations. Furthermore, a strategy should identify specific processes for coordination and collaboration between sectors and organizations and address how any conflicts would be resolved.
The sixth characteristic (integration and implementation) addresses how a national strategy relates both to other strategies' goals, objectives, and activities (horizontal integration) and to subordinate levels of government and other organizations and their plans to implement the strategy (vertical integration). For example, a national strategy should discuss how its scope complements, expands upon, or overlaps with other national strategies of the Iraqi government and other international donors. Similarly, related strategies should highlight their common or shared goals, subordinate objectives, and activities. In addition, a national strategy should address its relationship with relevant documents from implementing organizations, such as the strategic plans, annual performance plans, or annual performance reports that the Government Performance and Results Act requires of federal agencies. A strategy should also discuss, as appropriate, the various strategies and plans produced by the state, local, private, or international sectors. A strategy should also provide guidance, such as the development of national standards, to link together more effectively the roles, responsibilities, and capabilities of the implementing parties. The following are GAO's comments on the Department of State's letter dated June 30, 2006. 1. We notified the Department of State (State) of the scope of our review. After the National Strategy for Victory in Iraq (NSVI) was released in November 2005, we focused our review on whether the new strategy and related planning documents identified by State and the Department of Defense (DOD) addressed the desirable characteristics of an effective national strategy. On February 10, 2006, we met with senior State officials from the Bureau of Near East and Asia and the office of the Senior Advisor to the Secretary of State and Coordinator for Iraq Affairs to describe our plans and methodology for assessing the NSVI. State officials acknowledged our methodology and identified the key documents (both unclassified and classified) that, when combined with the NSVI, served as the collective U.S. strategy for Iraq. 2. We modified figure 1 to place the National Strategy for Supporting Iraq (NSSI) at the strategic level. However, we disagree that the NSSI links goals to resources. In fact, State's comments note that the NSSI does not specify the future military and civilian resources necessary for achieving U.S. strategic objectives and that State is in the process of incorporating the fiscal year 2006 supplemental budget into the NSSI. Until State completes this linkage, it is difficult to assess whether the NSSI will adequately link goals to resources. 3. We disagree with State's contention that we did not take into account the fiscal year 2006 supplemental and the fiscal year 2007 budget requests in our assessment of the NSVI. We evaluated these documents as part of our review, even though State officials did not include them among those they identified as supporting the strategy. In addition, we reviewed other U.S. government documents that provided useful context and information, including two related reports issued by State in February 2006: (1) Rebuilding Iraq: U.S. Achievements Through the Iraq Relief and Reconstruction Fund and (2) Advancing the President's National Strategy for Victory in Iraq: Funding Iraq's Transition to Self-Reliance in 2006 and 2007 and Support for the Counterinsurgency Campaign. 4.
We acknowledge that the purpose of the NSVI was to provide the public with an overview of a multitiered, classified strategy and not to set forth every detail on information readily available elsewhere. Our analysis was not limited to the publicly available, unclassified NSVI. With input from DOD and State, we included in our assessment all the classified and unclassified documents that collectively define the U.S. strategy in Iraq: (1) National Security Presidential Directive 36 (May 2004), (2) the Multinational Forces-Iraq (MNF-I) Campaign Plan (August 2004), (3) the MNF-I/U.S. Embassy Baghdad Joint Mission Statement on Iraq (December 2005), (4) the Multinational Corps-Iraq Operation Order 05-03 (December 2005), (5) the National Strategy for Supporting Iraq (updated January 2006), (6) the quarterly State Section 2207 reports to Congress (through April 2006), and (7) the April 2006 Joint Campaign Plan issued by the Chief of Mission and the Commander of the MNF-I. Collectively, these documents still do not address all the key characteristics of an effective national strategy. However, we refined our recommendation to focus on the need to improve the U.S. strategy for Iraq. 5. We disagree with State's comment that helping restore essential services to prewar levels was not an assumption of the early U.S. reconstruction strategy. According to the key architects of the original Coalition Provisional Authority plan, restoring essential services to prewar levels was a key assumption of the U.S. strategy. 6. Documents we received from State and the Department of Energy estimated that Iraq's 2003 actual prewar crude oil production was 2.6 million barrels per day. State did not provide any additional documentation to support its contention. In addition, the 4,300-megawatt figure cited by State is below the postwar peak of 5,400 megawatts and the planned U.S. goal of 6,000 megawatts. 7. We agree that it is not possible to make definitive statements about the number of people nationwide with access to clean drinking water during the prewar period because reliable data did not exist. We have noted this problem in previous reports and testimonies. This report describes U.S. mission efforts announced in December 2005 to develop an improved set of metrics to better estimate the potential impact of U.S. water and sanitation reconstruction efforts on Iraqi households. We reviewed excerpts from this reporting and included them in our report. However, State has not complied with our request for a complete copy of its metrics plan, which would better allow us to judge the results of its efforts. 8. As we have previously reported, subsidies for food, fuel, and electricity; rising costs for security forces; and the high cost of sustaining Iraq's bureaucracy limit Iraq's ability to contribute to its own reconstruction efforts. While Iraq budgeted about $5 billion for capital expenditures in 2005, it provided only a few hundred million dollars by the end of the year. Accordingly, it is too early to determine whether the Iraqi government will spend the $6.2 billion it has budgeted for capital expenditures in 2006. 9. We clarified the report to characterize the 2003 World Bank study as an initial estimate and not a comprehensive survey. While acknowledging that more than $56 billion will be needed to bring Iraq to a status equivalent to other oil-producing developing nations, State does not think that "costs" have gone up.
However, recent State and Department of Energy cost estimates show that the oil and electricity sectors alone will require about $50 billion over the next several years. In addition, June 2006 reporting from the Department of Energy states that Iraq could need $100 billion or more for long-term reconstruction efforts. 10. We agree that the Iraqi and U.S. governments have succeeded in achieving debt relief for Iraq from the Paris Club and commercial creditors. However, a significant amount of debt remains, amounting to $84 billion. This debt includes war reparations that Iraq owes from its invasion of Kuwait and imposes a continuing financial burden on the country. 11. We revised our report to include updated April 2006 figures. 12. We included the $30 billion estimate for the oil sector to illustrate the significant future costs of restoring a critical sector, one from which Iraq derives 90 percent of its budgetary revenues. State's Iraq Reconstruction Management Office developed these estimates. In addition, as noted in comment 9 above, Iraq could need $100 billion or more for long-term reconstruction, according to a June 2006 report by the Department of Energy. 13. We agree that it is very difficult to accurately account for corruption as a cost in achieving the overall goals for Iraq. We recognize that State launched an anticorruption strategy in December 2005, but this strategy was not reflected in the documents we reviewed. We included State estimates that help describe the magnitude of the corruption problem. For example, State reports that 10 percent of refined fuels are diverted to the black market and that about 30 percent of imported fuels are smuggled out of Iraq and sold for a profit. 14. The recently announced International Compact could be a useful vehicle for better international coordination, but the details of the compact's scope, functions, and linkage to the new donor coordination process have not been specified. The International Reconstruction Fund Facility for Iraq provides a coordination mechanism among United Nations agencies, but its linkage to U.S.-funded projects is also unclear. More importantly, no single document describes how the goals and projects of the United States, Iraq, and the international community are or will be linked to achieve maximum effectiveness and avoid duplication of effort. Stephen M. Lord, Assistant Director; Kelly Baumgartner; Lynn Cothern; Jared Hermalin; B. Patrick Hickey; Rhonda Horried; Guy Lofaro; and Alper Tunca made key contributions to this report. Terry Richardson provided technical assistance. Rebuilding Iraq: Actions Still Needed to Improve Use of Private Security Providers. GAO-06-865T. Washington, D.C.: June 13, 2006. United Nations: Oil for Food Program Provides Lessons for Future Sanctions and Ongoing Reform. GAO-06-711T. Washington, D.C.: May 2, 2006. Rebuilding Iraq: Governance, Security, Reconstruction, and Financing Challenges. GAO-06-697T. Washington, D.C.: April 25, 2006. United Nations: Lessons Learned from Oil for Food Program Indicate the Need to Strengthen UN Internal Controls and Oversight Activities. GAO-06-330. Washington, D.C.: April 25, 2006. Rebuilding Iraq: Stabilization, Reconstruction, and Financing Challenges. GAO-06-428T. Washington, D.C.: February 8, 2006. Rebuilding Iraq: DOD Reports Should Link Economic, Governance, and Security Indicators to Conditions for Stabilizing Iraq. GAO-06-152C. Washington, D.C.: October 31, 2005.
Rebuilding Iraq: Enhancing Security, Measuring Program Results, and Maintaining Infrastructure Are Necessary to Make Significant and Sustainable Progress. GAO-06-179T. Washington, D.C.: October 18, 2005. Global War on Terrorism: DOD Needs to Improve the Reliability of Cost Data and Provide Additional Guidance to Control Costs. GAO-05-882. Washington, D.C.: September 21, 2005. Rebuilding Iraq: U.S. Assistance for the January 2005 Elections. GAO-05-932R. Washington, D.C.: September 7, 2005. Rebuilding Iraq: U.S. Water and Sanitation Efforts Need Improved Measures for Assessing Impact and Sustained Resources for Maintaining Facilities. GAO-05-872. Washington, D.C.: September 7, 2005. Rebuilding Iraq: Actions Needed to Improve Use of Private Security Providers. GAO-05-737. Washington, D.C.: July 28, 2005. Rebuilding Iraq: Status of Funding and Reconstruction Efforts. GAO-05-876. Washington, D.C.: July 28, 2005. Rebuilding Iraq: Preliminary Observations on Challenges in Transferring Security Responsibilities to Iraqi Military and Police. GAO-05-431T. Washington, D.C.: March 14, 2005. Rebuilding Iraq: Resource, Security, Governance, Essential Services, and Oversight Issues. GAO-04-902R. Washington, D.C.: June 28, 2004. United Nations: Observations on the Oil for Food Program and Iraq's Food Security. GAO-04-880T. Washington, D.C.: June 16, 2004. Contract Management: Contracting for Iraq Reconstruction and for Global Logistics Support. GAO-04-869T. Washington, D.C.: June 15, 2004. Rebuilding Iraq: Fiscal Year 2003 Contract Award Procedures and Management Challenges. GAO-04-605. Washington, D.C.: June 1, 2004. Iraq's Transitional Law. GAO-04-746R. Washington, D.C.: May 25, 2004. State Department: Issues Affecting Funding of Iraqi National Congress Support Foundation. GAO-04-559. Washington, D.C.: April 30, 2004. Recovering Iraq's Assets: Preliminary Observations on U.S. Efforts and Challenges. GAO-04-579T. Washington, D.C.: March 18, 2004. Defense Logistics: Preliminary Observations on the Effectiveness of Logistics Activities During Operation Iraqi Freedom. GAO-04-305R. Washington, D.C.: December 18, 2003. Rebuilding Iraq. GAO-03-792R. Washington, D.C.: May 15, 2003.
According to the National Strategy for Victory in Iraq (NSVI) issued by the National Security Council (NSC), prevailing in Iraq is a vital U.S. interest because it will help win the war on terror and make America safer, stronger, and more certain of its future. This report (1) assesses the evolving U.S. national strategy for Iraq and (2) evaluates whether the NSVI and its supporting documents address the desirable characteristics of an effective national strategy developed by GAO in previous work. In this report, the NSVI and supporting documents are collectively referred to as the U.S. strategy for Iraq. The November 2005 National Strategy for Victory in Iraq and supporting documents incorporate the same desired end-state for U.S. stabilization and reconstruction operations that was first established by the coalition in 2003: a peaceful, united, stable, and secure Iraq, well integrated into the international community, and a full partner in the global war on terrorism. However, it is unclear how the United States will achieve its desired end-state in Iraq, given the significant changes in the assumptions underlying the U.S. strategy. The original plan assumed a permissive security environment. However, an increasingly lethal insurgency undermined the development of effective Iraqi government institutions and delayed plans for an early transfer of security responsibilities to the Iraqis. The plan also assumed that U.S. reconstruction funds would help restore Iraq's essential services to prewar levels, but Iraq's capacity to maintain, sustain, and manage its rebuilt infrastructure is still being developed. Finally, the plan assumed that the Iraqi government and the international community would help finance Iraq's development needs, but Iraq has limited resources to contribute to its own reconstruction, and Iraq's estimated future needs vastly exceed what the international community has offered to date. The NSVI is an improvement over previous planning efforts. However, the NSVI and its supporting documents are incomplete because they do not fully address all the desirable characteristics of an effective national strategy. On one hand, the strategy's purpose and scope are clear because it identifies U.S. involvement in Iraq as a vital national interest and the central front in the war on terror. The strategy also generally addresses the threats and risks facing the coalition forces and provides a comprehensive description of the desired U.S. political, security, and economic objectives in Iraq. On the other hand, the strategy falls short in three key areas. First, it only partially identifies the current and future costs of U.S. involvement in Iraq, including the costs of maintaining U.S. military operations, building Iraqi government capacity at the provincial and national levels, and rebuilding critical infrastructure. Second, it only partially identifies which U.S. agencies implement key aspects of the strategy and how conflicts among the many implementing agencies will be resolved. Third, it neither fully addresses how U.S. goals and objectives will be integrated with those of the Iraqi government and the international community nor details the Iraqi government's anticipated contribution to its future security and reconstruction needs. In addition, the elements of the strategy are dispersed among the NSVI and seven supporting documents, further limiting its usefulness as a planning and oversight tool.
A constitutional role of the federal government is to provide for the common defense, which includes preventing terrorist attacks. The government must prevent and deter attacks on our homeland as well as detect impending danger before attacks occur. Although it may be impossible to detect, prevent, or deter every attack, steps can be taken to reduce the risk posed by threats to homeland security. Traditionally, protecting the homeland against these threats was generally considered a federal responsibility. To meet this responsibility, the federal government gathers intelligence, which is often classified as national security information. This information is protected and safeguarded to prevent unauthorized access by requiring appropriate security clearances and a "need to know." Generally, the federal government did not share national-level intelligence with states and cities, since they were not viewed as having a significant role in preventing terrorism. Therefore, the federal government did not generally grant state and city officials access to classified information. However, as we reported in June 2002, the view that states and cities do not have a significant role in homeland security has changed since September 11, 2001, and the need to coordinate the efforts of federal, state, and local governments for homeland security is now well understood. Protecting the United States from terrorism has traditionally been a responsibility of the federal government, and the views of states and cities have typically not been considered in formulating national policy. In the Homeland Security Act of 2002, Congress found that the federal government relies on state and local personnel to protect against terrorist attacks and that homeland security information is needed by state and local personnel to prevent and prepare for such attacks. Congress also found that federal, state, and local governments and intelligence, law enforcement, and other emergency and response personnel must act in partnership to maximize the benefits of information gathering and analysis to prevent and respond to terrorist attacks. As a result, the act expressed the sense of Congress that federal, state, and local entities should share homeland security information to the maximum extent practicable. Federal, state, and local governments and the private sector were not fully integrated participants before the September 11, 2001, attacks, but the need to integrate them became more widely recognized afterward. In order to develop national policies and strategies to address terrorism issues, senior policymakers obtain information from the intelligence community. The intelligence community uses a cyclic process for intelligence production. Simplified, the intelligence community (1) receives information requirements from policymakers, (2) collects and analyzes the information from its sources, (3) creates intelligence products from the information, (4) disseminates the products to consumers of intelligence, and (5) receives feedback about the usefulness of the information from consumers. This process can lead to additional information requirements and is ongoing. Since the late 1940s, the federal government has generally separated law enforcement and intelligence functions, although both have a role in combating terrorism. Because of this separation, law enforcement information and intelligence were handled differently, depending on which community obtained the information and how it was to be used.
The law enforcement community investigates criminal activity and supports prosecutions by providing information related to events that have occurred. In contrast, the intelligence community tries to provide policymakers and military leaders with information so that decisions can be made to protect and advance national interests. Often, the intelligence community collects information from sensitive sources or through special methods and keeps the information classified to protect its sources and methods and to ensure a continued flow of information in the future. Executive Order no. 12958, Classified National Security Information, as amended, prescribes a uniform system for classifying, safeguarding, and declassifying national security information, including information related to defense against transnational terrorism. Executive Order no. 12968, Access to Classified Information, states that access to classified national security information is generally limited to persons who have been granted a security clearance, have been briefed as to their responsibilities for protecting classified national security information, have signed a nondisclosure agreement acknowledging those responsibilities, and have agreed to abide by all appropriate security requirements. In addition, these persons must have a demonstrated "need to know" the information in connection with the performance of their official functions. If these criteria are not met, then the information is not to be shared. The federal intelligence community has traditionally not considered states or cities to need access to intelligence that could be used to fight terrorism. As a result, few officials at the state and local levels have the clearances required for access to intelligence products. Furthermore, the collection and use of intelligence information on individuals for domestic law enforcement purposes is constrained by the application of constitutional protections, statutory controls, and rules of evidence. For example, the Foreign Intelligence Surveillance Act of 1978 had, in effect, been interpreted as requiring some separation that limited coordination between domestic law enforcement and foreign intelligence investigations, particularly with regard to the use of information collected for foreign intelligence purposes in criminal prosecutions. Although previous terrorist attacks, such as the 1993 World Trade Center bombing, proved that the United States was not immune to attacks on its homeland, the enormity of the loss of life and the impact of the terrorist attacks of September 11, 2001, highlighted the increasing risk of terrorist attacks on U.S. soil. Consequently, federal, state, and city governments recognized an urgent need to effectively unify their efforts to enhance homeland security by employing the unique contribution that each level of government can make on the basis of its capabilities and knowledge of its own environment. After the September 11, 2001, attacks, policymakers questioned the separation between law enforcement and intelligence, noting that the distinctions may limit access to some information needed to effectively execute homeland security duties. In October 2001, Congress passed the USA PATRIOT Act to improve the sharing of information between the intelligence and law enforcement communities, such as by providing federal investigators with more flexibility in sharing information obtained under the authority of the Foreign Intelligence Surveillance Act.
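The access rule in Executive Order no. 12968, as described above, is conjunctive: a clearance, a briefing, a signed nondisclosure agreement, and a need to know must all be present, and failing any one of them bars access. As a rough illustration only (the data structure and field names below are our own assumptions, not an actual federal data model), the rule could be sketched as follows:

# Rough illustration of the conjunctive access test described in
# Executive Order no. 12968; field names are assumptions, not an
# actual federal system.
from dataclasses import dataclass

@dataclass
class Official:
    has_clearance: bool
    briefed_on_responsibilities: bool
    signed_nondisclosure: bool
    need_to_know: bool

def may_access_classified(person: Official) -> bool:
    """All four conditions must hold; failing any one bars access."""
    return (person.has_clearance
            and person.briefed_on_responsibilities
            and person.signed_nondisclosure
            and person.need_to_know)

# A state official with a clearance but no established need to know
# would still be denied under this rule:
print(may_access_classified(Official(True, True, True, False)))  # False

Under such a rule, granting security clearances to state and local personnel, as discussed later in this report, would address only the first condition; the need-to-know determination would remain a separate hurdle.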
In October 2002, the joint inquiry of the Senate and House intelligence committees into the attacks found problems in maximizing the flow of relevant information both within the intelligence community and to and from those outside the community. The inquiry found that the reasons for these information disconnects can be, depending on the case, cultural, organizational, human, or technological. The committees recommended that comprehensive solutions, while perhaps difficult and costly, must be developed and implemented if we are to maximize our potential for success in the war against terrorism. At the same time, recognizing a need to balance the protection of information with the emerging homeland security requirements of those who had a newly recognized need to know, Congress passed the Homeland Security Act of 2002 to, among other purposes, specifically facilitate information sharing. In creating the Department of Homeland Security, the act gives the Secretary the responsibility to coordinate with other executive agencies, state and local governments, and the private sector in order to prevent future attacks. Among other responsibilities, the Secretary is to coordinate the distribution of information between federal agencies and state and local governments. Furthermore, the act requires the new department's Under Secretary for Information Analysis and Infrastructure Protection to disseminate, as appropriate, information analyzed by the department to other federal, state, and local government agencies with homeland security roles; to consult with state and local governments to ensure appropriate exchanges of information (including law-enforcement-related information) relating to threats of terrorism; and to coordinate with elements of the intelligence community and with federal, state, and local law enforcement agencies, and the private sector, as appropriate. Additionally, a subtitle of the Homeland Security Act, titled the Homeland Security Information Sharing Act, requires the President of the United States to prescribe and implement governmentwide procedures for determining the extent of sharing, and for the actual sharing, of homeland security information between federal agencies and state and local personnel, and for the sharing of classified (and sensitive but unclassified) information with state and local personnel. To date, these procedures have not been promulgated, although the President has recently assigned this function to the Secretary of Homeland Security. Furthermore, several national strategies that have been developed include information sharing among their major initiatives. Both the National Strategy for Homeland Security and the National Strategy for the Physical Protection of Critical Infrastructures and Key Assets include, as objectives, improving information sharing between intelligence and law enforcement agencies at all levels of government. In addition, FBI increased the number of its Joint Terrorism Task Forces from 35 as of September 11, 2001, to 66 as of March 2003. Through the task forces, federal, state, and local law enforcement officials can interact to prevent terrorist attacks and can share information in investigations of terrorist events. State and city governments have also implemented several initiatives to improve the information-sharing process, both within their jurisdictions and with participants from other levels of government.
Congress passed legislation and the President issued strategic plans to improve the sharing of information to fight terrorism. The Department of Homeland Security was given the responsibility to coordinate the distribution of information among federal agencies, state and local governments, and private industry. However, the department is in the early phases of determining how to execute this responsibility. In the meantime, some federal agencies and state and city governments undertook initiatives on their own to improve sharing. However, these actions are not well coordinated and consequently risk duplicating efforts. In addition, without coordination, these actions may not be mutually reinforcing and may create information-sharing partnerships that do not necessarily include all agencies needing access to the information. After the September 11, 2001, attacks, Congress took legislative action to improve information sharing. Several national strategies, such as the National Strategy for Homeland Security, contain actions to improve sharing as well. The Homeland Security Act directs the President to prescribe and implement procedures for sharing homeland security information between federal agencies and with appropriate state and local government personnel (a function since assigned by the President to the Secretary of Homeland Security). The act also created the Department of Homeland Security, which consolidated 22 federal agencies with homeland security missions into a single department. Within the department, the Office of State and Local Government Coordination and the Office of Private Sector Liaison were created to provide state and local governments and appropriate private-sector representatives with regular information, research, and technical support to assist local efforts at securing the homeland. According to the department, these offices will give these participants one primary federal contact, instead of many, to meet their homeland security needs. Since September 11, 2001, the administration has developed several strategies that contain actions to improve information sharing and that charge DHS, FBI, and other government components with responsibility for performing these actions. For example, the National Strategy for Homeland Security (July 2002), the National Strategy for the Physical Protection of Critical Infrastructures and Key Assets (February 2003), and the National Strategy to Secure Cyberspace (February 2003) have, as one of their priorities, actions to promote information sharing between federal agencies and with state and city governments, law enforcement and intelligence agencies, and the private sector. The National Strategy for Homeland Security specifies that the federal government will "build a national environment that enables the sharing of essential homeland security information horizontally across each agency of the federal government and vertically among federal, state, and local governments, private industry, and citizens" by integrating all participants and streamlining the sharing process. The strategy contains initiatives to declassify documents to facilitate sharing, integrate databases at all levels of government, and provide for a secure method of sharing information.
Similarly, the National Strategy for the Physical Protection of Critical Infrastructures and Key Assets has initiatives to facilitate information sharing by improving processes for domestic threat data collection, analysis, and dissemination to state and local governments as well as to private industry. This strategy calls on DHS to lead the effort to (1) define sharing requirements, (2) establish processes for providing and receiving information, and (3) develop technical systems to share sensitive information with public-private stakeholders. The National Strategy to Secure Cyberspace has initiatives to improve and enhance public-private information sharing involving cyber attacks by establishing, among other things, protocols for ensuring that information voluntarily provided by the private sector is securely stored and maintained. The Department of Homeland Security has several initiatives to improve the sharing of information that could be used to protect the homeland. In particular, it is developing a homeland security enterprise architecture that, among other actions, will integrate sharing between federal agencies and between the federal government, state and city governments, and the private sector. According to DHS, its enterprise architecture is a business-based framework for cross-agency improvement that will provide DHS with a new way of describing, analyzing, and integrating the data from the agencies, thus enabling DHS to "connect the dots" to better prevent terrorist attacks and protect people and infrastructure from terrorism. Architecture working groups were established to collect, organize, and publish the baseline information-sharing structure for the major components that were transitioned to DHS. According to DHS officials, this effort will be completed by June 2003. The working groups will also be used to integrate the state and city governments and the private sector. By September 2003, the department anticipates that it will have a plan that provides a phased approach to achieving information sharing among the federal government, states, cities, and the private sector. The department anticipates beginning to implement the plan in November 2003. Other federal agencies and state and city homeland security participants have implemented several initiatives to promote information sharing, yet these initiatives are not well coordinated and may inadvertently limit access to information for entities that are not part of the initiatives. Nonetheless, the initiatives seek to fulfill a perceived information requirement not yet fully addressed by the federal intelligence community and include technological solutions as well as management and communication solutions. However, these initiatives may be duplicating DHS and other federal efforts already under way and, in some cases, may create information-sharing partnerships that limit access to information to only those agencies that are party to the initiatives. Sensing an urgency to improve their abilities to effectively perform their homeland security duties, other federal agencies and state and city participants have implemented several initiatives to promote sharing with others from different levels of government. However, it is unclear how these initiatives, while enhancing individual organizations' sharing, will contribute to national information-sharing efforts.
The Departments of Defense and Justice have established initiatives using technology to better gather, analyze, and share information with other homeland security participants. These initiatives include expanding existing mechanisms for sharing; participating in information-sharing centers like FBI's Joint Terrorism Task Forces; establishing new information-sharing centers; and working with federal, state, and city agencies to integrate databases. Also, the new Terrorist Threat Integration Center, which began operations May 1, 2003, was created to fuse, analyze, and share terrorist-related information collected domestically and abroad. It is an interagency joint venture that reports directly to the Director of Central Intelligence in his capacity as statutory head of the intelligence community. The center will be composed of elements of DHS, FBI's Counterterrorism Division, the Director of Central Intelligence Counterterrorist Center, the Department of Defense, and other participating agencies. According to the President, the center is to "close the seam" between the analysis of foreign and domestic intelligence and will have access to all sources of information. In responding to our survey, 85 percent (34 of 40) of the states that responded and 70 percent (160 of 228) of the cities that responded said they were currently participating in information-sharing centers, including FBI's Joint Terrorism Task Forces. Nonetheless, according to the survey results, many participants expressed a need for still more interaction with other homeland security participants to coordinate planning, develop contacts, and share information and best practices. In addition to the federal government, several states and cities have implemented their own initiatives to improve sharing. For example, the state of California has established a clearinghouse for all terrorist-related activities and investigations. The clearinghouse collects, analyzes, and disseminates information to its law enforcement officers, other law enforcement agencies, and FBI. The City of New York established a counterterrorism committee comprising FBI, the New York State Office of Public Security, and the New York City Police Department to share information and promote joint training exercises. Officials from the Central Intelligence Agency acknowledged that states' and cities' efforts to create their own centers are resulting in duplication and that some cities may be reaching out to foreign intelligence sources independently of the federal government. These officials emphasized that state and local authorities should work through the Joint Terrorism Task Forces to receive the information they require. Appendix II contains examples of other initiatives that various information-sharing participants have expanded or implemented to protect the homeland since September 11, 2001. In written comments on our survey, some respondents indicated that avoiding duplication and redundancy were among the reasons they were not joining or establishing new information-sharing centers. For example, rather than establishing local or regional databases, as some states and cities have done, some respondents recommended creating a national terrorism intelligence and information network and computer database. However, in order to build a comprehensive national plan that integrates multiple sharing initiatives (including those that integrate databases), the federal government must first be aware of these efforts.
In a speech to the National Emergency Managers Association in February 2003, the Secretary of Homeland Security asked states to inform his department of newly created initiatives when they learn of them. However, it is not clear whether states and cities have provided DHS with this information or whether DHS has acted on the information it has received. As a result, federal efforts to integrate initiatives may overlook some state or city initiatives that could help to improve information sharing and enhance homeland security. Another way that information-sharing initiatives may limit access to information for some entities is through partnerships that promote information sharing between the partners but exclude those not participating. Some federal agencies may try to meet their information needs by forming partnerships with other agencies outside the purview of DHS and its ongoing national strategy efforts. Such partnerships may concentrate on local threat information and unknowingly hold vital information that, when combined with national or regional information, could indicate an impending attack or help in preparing for an attack. In spite of legislation, strategies, and initiatives to improve information sharing, federal agencies and state and city governments generally do not consider the current information- and intelligence-sharing process to be effective. The documents that we reviewed and the officials from federal agencies, states, and cities whom we interviewed indicated that the sharing process was not perceived as working effectively. And, in our survey, fewer than 60 percent of federal, state, and city respondents rated the current sharing process as "effective" or "very effective." Respondents identified three systemic problems. First, they believe that needed information is not routinely provided. Second, the information that they do receive is not always timely, accurate, or relevant. Third, they feel that the federal government still perceives the fight against terrorism to be generally a federal responsibility and consequently does not integrate state and city governments into the information-sharing process. An information-sharing process characterized by such systemic problems or shortcomings could contribute to a failure to detect or prepare for an impending attack. According to recent reports and testimony, further improvement is needed in the information-sharing process to better protect the homeland. Federal officials have stated that information-sharing problems still exist. We have also expressed concerns about information sharing in previous reports and testimonies, as shown in the following examples: Inquiries into the events of September 11, 2001, have highlighted ongoing problems with the existing sharing process and the need for improvement. Both the Senate Select Committee on Intelligence and the House Permanent Select Committee on Intelligence, in a joint inquiry in 2002, stated that much information exists in the files and databases of many federal, state, and local agencies but that this information is not always shared or made available in timely and effective ways to the decision makers and analysts who need it to accomplish their individual missions. In October 2002, the Staff Director of the Joint Inquiry Staff that investigated the September 11, 2001, intelligence issues testified that information sharing was inconsistent and haphazard.
On December 15, 2002, the Gilmore Commission concluded that information sharing had only marginally improved since the September 11, 2001, attacks and that, despite organizational reforms, more attention, and better oversight, the ability to gather, analyze, and disseminate critical information effectively remained problematic. Additionally, the commission reported that current information-sharing practices neither transfer to local authorities the information they need nor adequately assess the information collected by local authorities. We have also expressed concerns about homeland security in previous reports and testimonies that documented the lack of standard protocols for sharing information and intelligence; the lack of partnerships between the federal, state, and local governments; and the lack of a unified national effort to improve the sharing process. In those reports, we concluded that more effort is needed to integrate the state and local governments into the national sharing process. In our report on the integration of watch list databases that contain information on known terrorists, we found that sharing is more likely to occur between federal agencies than between federal agencies and state or local government agencies because of overlapping sets of data and different policies and procedures. Our work, which involved interviewing cognizant officials, reviewing information-sharing documents, and analyzing the results of our survey, indicated that information-sharing participants do not perceive the current process as "effective" or "very effective." Without an effective sharing process, it is not clear how important information obtained by federal, state, or city agencies could be connected to relevant information held by other agencies, potentially pointing to an imminent attack. In a position paper, the Major Cities Chiefs Association stated that the federal government needed to better integrate the thousands of local police officers into the sharing process and that, by not doing so, the federal government was failing to take advantage of their capabilities. In March 2002, the National Governors Association stated that law enforcement and public safety officers do not have access to complete, accurate, and timely information. As a result, critical information is not always shared at key decision points, sometimes with tragic consequences. The International Association of Chiefs of Police testified in June 2002 that the current sharing process is not effective because state and city governments are not fully integrated into a national sharing process. We conducted our survey nearly a year later and found little change. Our survey results indicate that participants do not perceive the current sharing of information to fight terrorism to be "effective" or "very effective," regardless of the level of government with which they shared information. In our survey, we asked all respondents to indicate how effective sharing was with each of the other levels of government. For example, we asked the federal respondents to rate their responses from "not effective" to "very effective" when they shared information with state and city governments. Table 2 shows the different perceived levels of effectiveness within the three levels of government.
As shown in table 2, generally fewer than 60 percent of the respondents felt that the information-sharing process was "effective" or "very effective." In particular, only 13 percent of the federal agencies that completed our survey reported that when sharing information with the states and cities, the current process was "effective" or "very effective." One reason for this low percentage may be the historic reluctance of the federal government to share terrorism information with states and cities. On the other hand, 51 percent of large-city respondents reported that their sharing relationships with states were "effective" or "very effective," reflecting the closer historic relationship that cities have with their states. Federal, state, and city authorities do not perceive the current sharing process as "effective" or "very effective" because they believe (1) that they are not routinely receiving the information they need to protect the homeland; (2) that when information is received, it is not very useful, timely, accurate, or relevant; and (3) that the federal government still perceives the fight against terrorism to be generally a federal responsibility. Consequently, comprehensive policies and procedures to effectively integrate state and city governments into the process of determining requirements, analyzing and disseminating information, and providing feedback have not been established. As a result, opportunities may be routinely missed to engage state and city officials in obtaining information from the federal government and providing the federal government with information that could be important in the war against terrorism. The federal, state, and city officials who completed our survey indicated that certain information was perceived to be extremely important to executing their homeland security duties, but they reported that they were not routinely receiving it. In the survey, we listed different types of homeland-security-related information and asked all respondents to indicate the extent to which they needed and received the information. With few exceptions, the federal, state, and city agencies that completed our survey indicated that they are typically receiving less than 50 percent of the categories of information they seek. While our survey results found that state and local agencies were generally dissatisfied with the results of information sharing with the federal government, federal agencies were just as dissatisfied with the flow of information from state and city agencies. As shown in table 3, the majority of the states and cities reported that they needed many of the types of information listed in our survey question. For example, 90 to 98 percent of the states and large and small cities that completed our survey reported that they needed specific and actionable threat information; yet only 21 to 33 percent of them reported that they received this information. However, more than 50 percent of all respondents reported that they were receiving needed broad threat information. One reason that states and cities may not receive needed threat information is that the information may not be available. For example, actionable threat information is rarely available, according to federal intelligence officials we interviewed; however, these officials told us that, when such information is available, they would not hesitate to provide it to those who need it. Nonetheless, if the information is classified, Executive Order no.
12968 specifies that the information is not to be shared unless the would-be recipients have the proper security clearances and a need to know. Thus, the issue arises of how actionable threat information can be shared with state and local personnel without unauthorized disclosure of classified information by federal officials. Longstanding agency practices may also account for poor information sharing; these include the institutional reluctance of federal officials to routinely share information with local law enforcement officials. Without the information that they feel they need, states and cities, as well as the federal government, may not be adequately prepared to deter future attacks. Consequently, the nation's ability to effectively manage the risk of future attacks may be undermined. For example, the National Governors Association, the National League of Cities, and the National Emergency Management Association have all stated that they need timely, critical, and relevant classified and nonclassified information about terrorist threats so that they can adequately prepare for terrorist attacks. And the Major Cities Chiefs Association stated that law enforcement officers need background information on terrorism, the methods and techniques of terrorists, and the likelihood of an imminent attack. With this information, the association believes, law enforcement would have the background from which it could take seemingly random or unconnected events, such as minor traffic violations, and place them into a larger context, thereby perceiving the bigger picture of a potential attack or recognizing the need to pass the information to an appropriate homeland security partner agency. Our survey results confirm the perception that the information respondents do receive is not often seen as timely, accurate, or relevant. And, of the three aspects, respondents reported that timeliness was more of a problem than accuracy or relevancy. This supports a common complaint we heard from police chiefs: they wanted timely information but would often receive information from national news sources at the same time that the public received it. This lack of timeliness was often attributed to the federal government's historic reluctance to share this type of information with local law enforcement officials. In the survey, we asked all respondents to indicate the extent to which the information they received from each other was timely, accurate, and relevant. Generally, no level of government, including the federal government, was satisfied with the information received from the federal government, as shown in table 4. In particular, table 4 highlights these problems for large cities. Only 23 percent of the large cities reported that the information they received from the federal government was timely, only 39 percent reported that it was accurate, and only 40 percent reported that it was relevant. When state agencies were the source of information, federal and city agencies were also dissatisfied, as shown in table 5. Table 5 shows that, in general, large and small cities view the information they receive from their states as more timely, accurate, and relevant than federal agencies view the information they receive from the states. Few of the federal agencies that responded viewed the state information they received as timely, accurate, or relevant.
Similarly, few federal or state agencies that responded to our survey viewed information received from the cities as timely, accurate, or relevant, as shown in table 6. Table 6 also shows that states view the information they receive from cities more favorably than do the federal agencies that responded to our survey. The nation's fight against terrorism is still generally perceived to be a federal responsibility, at least in terms of preventing (in contrast to responding to) a terrorist attack. Even though states and cities develop important information on potential terrorist threats to the homeland, the federal government still has not established comprehensive policies or procedures to effectively integrate state and city governments into the process of determining requirements; gathering, analyzing, and disseminating information; and providing feedback. Nor has the federal government routinely recognized states and cities as customers in the information-sharing process. Our survey results support the view that preventing terrorism is still perceived generally as a federal responsibility. We asked respondents to indicate the extent to which the elements of a sharing framework for receiving information from the federal government (such as clear guidance and access to needed databases) were in place at the various governmental levels. The existence of these elements would indicate the extent to which state and city governments were integrated into the sharing process. Specifically, we found that more elements of a sharing framework, such as clear guidance for providing and receiving information, are in place at the federal level than at the state or city level, indicating that terrorism-related information is managed more at the federal level. Moreover, the lack of such elements at the state and city levels nearly 2 years after the September 11, 2001, attacks may perpetuate the perception that the fight against terrorism remains generally a federal responsibility. State and city governments that completed our survey also indicated that they do not participate in national policy making regarding information sharing, which also helps maintain this perception. For example, 77 percent of the responding states, 92 percent of large cities, and 93 percent of small cities reported that they did not participate in this policy-making process. Involving states and cities in this process would help ensure a more unified and consolidated effort to protect the homeland and would provide opportunities to improve information sharing at the state and city levels. The view that preventing terrorism is generally a federal responsibility is also reflected in perceived barriers to providing information upward or downward. For example, according to the December 2002 report of the Gilmore Commission, the prevailing view continues to be that the federal government likes to receive information but is reluctant to share information with other homeland security partners. Furthermore, the commission stated that the federal government must do a better job of designating "trusted agents" at the state and local levels and in the private sector and must move forward with clearing those trusted agents. In our survey, we listed a number of barriers and asked all respondents to indicate the extent to which these barriers hindered sharing with each other.
Table 7 identifies the barriers that federal, state, and city agencies that responded to our survey believe exist in the current information-sharing process. As shown in table 7, federal officials cited several barriers that they perceive prevent them from sharing information, including concerns over state and local officials’ ability to secure, maintain, and destroy classified information; their lack of security clearances; and the absence of integrated databases. However, these perceived barriers were cited by only a few respondents and could be overcome. For example, state and local police routinely handle and protect law-enforcement-sensitive information to support ongoing criminal investigations, which suggests that—with proper training and equipment—officials of these governments could handle other types of sensitive information as well. As mentioned earlier, the Homeland Security Act requires the President, in establishing information-sharing procedures, to address the sharing of classified and sensitive information with state and local personnel. Congress suggested in the Homeland Security Act that the procedures could include the means for granting security clearances to certain state and local personnel, entering into nondisclosure agreements (for sensitive but unclassified information), and the increased use of information-sharing partnerships that include state and local personnel. For example, Congress found that granting security clearances to certain state and local personnel is one way to facilitate the sharing of information regarding specific terrorist threats among the federal, state, and local levels of government. We found that the federal government has issued security clearances to state or local officials in limited circumstances and is increasing the number of such clearances. The Federal Emergency Management Agency has provided certain state emergency management personnel with security clearances for emergency response purposes, but other federal agencies, including FBI, have not recognized the validity of these security clearances. For FBI, this lack of recognition could prevent it from providing state emergency management personnel with information. At the same time, FBI has undertaken some initiatives to provide certain state officials with clearances and could expand this program at the state and city levels if officials believe that doing so will address a perceived impediment to information sharing. In addition, DHS is developing a new homeland security classification level for information to improve sharing. For their part, states and cities reported few barriers in their ability to provide the federal government with information, while federal agencies cited a number of barriers to sharing. As shown in table 7, state and city agencies perceived that the federal government faces few barriers in sharing information. Appendix V details the barriers that states and cities perceive to providing federal authorities with information. All categories of survey respondents identified the lack of integrated information systems as the single most common barrier to information sharing across all levels of government. The Markle Foundation stated in its report that federal agencies have seen the information and homeland security problem as one of acquiring new technology.
For example, for fiscal year 2003, FBI budgeted $300 million for new technology, the Transportation Security Administration has budgeted $1 billion over several years, and the former Immigration and Naturalization Service (whose function is now within DHS) has a 5-year plan for $550 million. However, the foundation reports that almost none of this money is being spent to solve the problem of how to share this information between federal agencies and with the states and cities. The foundation’s report states that when it comes to homeland security and the use of integrated information systems, adequate efforts and investments are not yet in sight. And in recent testimony, we stated that DHS must integrate the many existing systems and processes, within government entities and between those entities and the private sector, that are required to support its mission. With the current decentralized information-sharing process, in which actions to improve sharing are not organized and participants at all levels of government and the private sector are not well integrated into the scheme, the nation may be hampered in its ability to detect potential terrorist attacks and effectively secure the homeland. Additionally, the lack of coordination of the various information-sharing initiatives continues to hamper the overall national effort to effectively share information that could be used to prevent an attack. DHS has initiated an enterprise architecture to provide a road map for addressing information-sharing issues with all levels of government and the private sector. It is important that this be done in such a way as to effectively integrate all levels of government and the private sector into an information-sharing process. Until then, it is not clear how the department will coordinate the various information-sharing initiatives to eliminate possible confusion and duplication of effort. Participants risk duplicating each other’s efforts; creating partnerships that limit other participants’ access to information, thus increasing the risk that decision makers will not receive useful information; developing initiatives that are not mutually reinforcing; and unnecessarily increasing the cost of providing homeland security. The failure to fully integrate state and city governments into the information-sharing policy-making process deprives the federal government of the opportunity to (1) obtain a complete picture of the threat environment and (2) exploit state and city governments’ information expertise regarding their own areas, with which they are uniquely familiar. Finally, the effectiveness of the information-sharing process in providing timely, accurate, and relevant information is also in question, creating a risk that urgent information will not get to the recipient best positioned to act on it in a timely manner. Until the perceived barriers to federal information sharing are addressed, the federal government may unnecessarily, and perhaps inadvertently, be hampering state and city governments in carrying out their own homeland security responsibilities. States, cities, and the private sector look to the federal government—in particular, the Department of Homeland Security—for guidance and support regarding information-sharing issues. If DHS does not effectively strengthen efforts to improve the information-sharing process, the nation’s ability to detect or prepare for attacks may be undermined.
We recommend that, in developing its enterprise architecture, the Secretary of Homeland Security work with the Attorney General of the United States; the Secretary of Defense; the Director, Office of Management and Budget; the Director, Central Intelligence; and other appropriate federal, state, and city authorities and the private sector to: ensure that the enterprise architecture efforts incorporate the existing information-sharing guidance contained in the various national strategies and the information-sharing procedures that the Homeland Security Act requires the President to establish; establish a clearinghouse to coordinate the various information-sharing initiatives so as to eliminate possible confusion and duplication of effort; fully integrate states and cities into the national policy-making process for information sharing and take steps to provide greater assurance that actions at all levels of government are mutually reinforcing; identify and address the perceived barriers to federal information sharing; and include the use of survey methods or related data collection approaches to determine, over time, the needs of private and public organizations for information related to homeland security and to measure progress in improving information sharing at all levels of government. As you know, 31 U.S.C. 720 requires the head of a federal agency to submit a written statement of the actions taken on our recommendations to the Senate Committee on Governmental Affairs and the House Committee on Government Reform not later than 60 days after the date of this report. A written statement must also be sent to the House and Senate Committees on Appropriations with the agency’s first request for appropriations made more than 60 days after the date of this report. We presented a draft of this report to the Departments of Homeland Security, Defense, and Justice and to the Director of Central Intelligence. The Departments of Homeland Security, Defense, and Justice provided written comments; the Central Intelligence Agency provided technical comments. All of the departments except the Department of Justice concurred with our report. The Department of Homeland Security concurred with our report and recommendations. The department added that it has made significant strides in improving information sharing. For example, the department pointed out that it is in the process of providing secure telephones to the governors and security clearances to the Homeland Security Advisors in every state so that relevant classified information can be shared. The department also pointed out that further progress will require a thoughtful, prudent, and deliberate approach. However, it cautioned that issuing the first draft of the national homeland security enterprise architecture could slip beyond the September 2003 target because of the time it may take to obtain appropriate interagency coordination. The department’s comments are reprinted in their entirety in appendix VI. DOD concurred with our recommendations. DOD’s comments are reprinted in their entirety in appendix VII. The Central Intelligence Agency provided technical comments that we incorporated into our draft as appropriate. On the other hand, the Department of Justice did not concur with our report and raised several concerns.
The department stated that our draft report reaches sweeping and extraordinarily negative conclusions about the adequacy of the governmental sharing of information to prevent terrorism and that (1) our conclusions are fundamentally incorrect and unsupportable by reliable evidence; (2) our review was beyond GAO’s purview; and (3) an evaluation of information sharing requires a review of intelligence sharing, which, by long-standing practice, the executive branch provides to Congress but not to us, and thus we may not be able to provide useful information to Congress. We disagree. First, we used reliable evidence from a variety of sources, including the Central Intelligence Agency; the ANSER Institute for Homeland Security; the Joint Inquiry into the Terrorist Attacks of September 11, 2001; reports of the RAND Institute and the Markle Task Force on National Security in the Information Age; testimony before congressional committees by federal, state, and local officials; interviews that we conducted with federal, state, and local agency officials and with representatives of associations, including the International Association of Chiefs of Police, the U.S. Conference of Mayors, the National League of Cities, and the National Sheriffs Association; and our survey results. Moreover, over 100 cities with populations in excess of 100,000, over 120 cities with populations under 100,000, and 40 states responded to our survey, representing a substantial number of governmental entities providing us with evidence of information-sharing shortcomings. These organizations are involved in information collection and analysis, have conducted well-respected studies on information-sharing issues, or have significant experience in providing for homeland security through law enforcement or emergency management at the state and local level, and they are recognized as authorities in their fields. Our conclusions are based on this body of evidence. Our complete scope and methodology is shown in appendix I. Second, the Department of Justice stated that “our review of intelligence activities is an arena that is beyond GAO’s purview” and that providing GAO with information on intelligence sharing “would represent a departure from the long-standing practice of Congress and the executive branch regarding the oversight of intelligence activities.” The Department of Justice’s impression that our review was a review of intelligence activities is incorrect. As our report clearly indicates, the oversight of intelligence activities was not an objective or focus of our review, which did not require access to intelligence information or involve an evaluation of the conduct of actual intelligence activities. Rather, our review considered the use of intelligence information in general, in the context of the broader information-sharing roles and responsibilities of various homeland security stakeholders (including the intelligence community). However, even if our review could be construed as involving intelligence activities, we disagree that such a review is outside GAO’s purview. We have broad statutory authority to evaluate agency programs and activities and to investigate matters related to the receipt, disbursement, and use of public money. To carry out our audit responsibilities, we have a statutory right of access to agency records that applies to all federal agencies. Although our reviews in the intelligence area are subject to certain limited restrictions, we regard such reviews as fundamentally within the scope of our authority.
Third, as to the department’s assertion that providing GAO with information on intelligence sharing practices would represent “a departure from long-standing practice,” we believe our review in this area furthers congressional oversight but does not require reviewing intelligence sharing practices. For example, we are not aware that the views of state and local government officials on information sharing contained in our report have previously been provided to Congress in a comprehensive manner; those views do not depend on whether we have access to intelligence sharing practices; and the department did not indicate otherwise in asserting that Congress is already receiving sufficient information from the executive branch. Moreover, we did not review the extent to which the executive branch provides useful information to Congress, so we cannot comment on the department’s assertion. Nonetheless, as our report clearly discusses, numerous state and local government officials believe that they have not received the information that they need from federal agencies. It would also have been useful had the department shared with us its views on information sharing for homeland security. We believe Congress should have such information available when making informed decisions in this area. The department’s comments are reprinted in appendix VIII. We are sending copies of this report to appropriate congressional committees. In addition, we are sending copies of the report to the Secretaries of Homeland Security, Defense, Commerce, Agriculture, Transportation, and the Treasury; the Attorney General; the Director of Central Intelligence; and the Director, Office of Management and Budget. We will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about matters discussed in this report, please contact me at (202) 512-6020 or by e-mail at deckerrj@gao.gov. GAO contacts and staff acknowledgments are listed in appendix IX. Our objectives were to determine (1) what initiatives have been undertaken to improve the sharing of information that could be used to protect the homeland and (2) whether federal, state, and city officials believe that the current information-sharing process is effective. To achieve the first objective, we reviewed documents to identify legislative initiatives and other initiatives detailed in national strategies, including the National Strategy for Homeland Security, the National Strategy for Combating Terrorism, the National Military Strategic Plan of the United States of America, the National Strategy for the Physical Protection of Critical Infrastructures and Key Assets, the National Strategy to Secure Cyberspace, and the National Security Strategy of the United States of America. We also reviewed federal, state, and city initiatives to share information. We interviewed officials from the Department of Justice, the Federal Bureau of Investigation (FBI), and the Defense Intelligence Agency on their initiatives to share information with state and city entities, and we discussed information- or intelligence-sharing policies and procedures with officials from the Central Intelligence Agency; the Department of Defense (DOD); the Departments of Commerce, Agriculture, the Treasury, and Transportation; the U.S. Coast Guard; and DOD’s new U.S. Northern Command.
We also surveyed a select group of federal, state, and city organizations to obtain information on whether they were involved in information-sharing initiatives. To determine whether the current information-sharing process is perceived as effective by federal, state, and city governments, we interviewed officials from DOD’s Office of the Inspector General and the Defense Intelligence Agency; FBI and the Office of Intelligence Policy and Review within the Department of Justice; the U.S. Coast Guard; the Treasury Department and the U.S. Customs Service; the Department of Commerce; and the U.S. Department of Agriculture. We also interviewed representatives from the California Department of Justice; city and county of Los Angeles law enforcement authorities; the Director of Emergency Management for the District of Columbia; and the chiefs of police of Baltimore, Maryland, and Dallas, Fort Worth, and Arlington, Texas. We also interviewed representatives of professional and research organizations, including the International Association of Chiefs of Police, the National Sheriffs Association, the Police Executive Research Forum, the U.S. Conference of Mayors, the National League of Cities, the RAND Institute, the Center for Strategic and International Studies, and the ANSER Institute for Homeland Security. To supplement our interviews, we reviewed studies and testimonies before Congress. Among the documents we reviewed are the testimonies of the President of the International Association of Chiefs of Police before the Senate Committee on Governmental Affairs, June 26, 2002; the former Central Intelligence Agency General Counsel before the same committee, February 14, 2003; and the Chairman of the Advisory Panel to Assess the Capabilities for Domestic Response to Terrorism Involving Weapons of Mass Destruction before the same committee, February 14, 2003, and before the Senate Select Committee on Intelligence and the House Permanent Select Committee on Intelligence, October 1, 2002. We also reviewed the position papers of the RAND Institute, the International Association of Chiefs of Police, the Markle Task Force on National Security in the Information Age, and others. To achieve both objectives, we conducted a survey to augment our interviews and our review of testimonies, documents, and position papers. We surveyed all 29 federal intelligence and law enforcement agencies; 50 state homeland security offices; and 485 cities, including all cities with a population of 100,000 or greater and a random sample of 242 cities with a population of between 50,000 and 100,000. The city surveys were directed to the mayors; however, the mayors frequently delegated the task of completing the survey to career employees such as chiefs of police, city managers, directors of emergency management offices, assistants to the mayors, and others. The survey was not sent to the private sector, although we recognize that it has a sizeable role in homeland security by virtue of owning about 80 percent of the critical infrastructure in the United States. The survey collected information on the types of information needed by participants, the extent to which this information was received and provided, the sources and usefulness of the information, and the barriers that prevent participants from sharing. However, the survey did not attempt to validate the information needs of any level of government.
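The city sample just described combines a census of larger cities with a simple random draw of smaller ones. The following minimal sketch illustrates that stratified design; the city names are placeholders, the pool of 600 smaller cities is invented, and the figure of 243 larger cities is simply the arithmetic implied by the totals (485 surveys less the 242 randomly drawn). It is an illustration, not the tool GAO used.

```python
import random

def draw_city_sample(large_cities, small_cities, n_small=242, seed=0):
    """Stratified design: survey every city with a population of 100,000
    or more, plus a simple random sample of cities with a population
    between 50,000 and 100,000."""
    rng = random.Random(seed)  # fixed seed makes the draw reproducible
    return list(large_cities) + rng.sample(list(small_cities), n_small)

# Placeholder inputs: 243 large cities (485 - 242) surveyed outright and
# a hypothetical pool of 600 smaller cities from which 242 are drawn.
large = [f"large_city_{i}" for i in range(243)]
small = [f"small_city_{i}" for i in range(600)]
print(len(draw_city_sample(large, small)))  # 485 surveys in total
```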
To ensure the validity of the questions on the survey, we pretested it with officials from the Office of the Secretary of Defense and the Defense Intelligence Agency; the homeland security directors for the states of North Dakota and Florida; the police chiefs of Dallas, Fort Worth, and Arlington, Texas; and the Director of Emergency Management for the District of Columbia. We subsequently followed up with several phone calls and e-mail messages to all federal and state agencies surveyed, and to a large number of cities, to increase our response rate. Of the 485 cities surveyed, 228, or 47 percent, responded. The 257 cities that did not respond might have answered the survey differently from those that did, but we could not determine this; we therefore present the results of the cities that completed the surveys knowing that nonrespondents could have answered differently. Where applicable in the report, we present the results of large and small cities separately, unless noted otherwise. Also, when presenting survey results, we judgmentally benchmarked the response level we believed would be acceptable for an information-sharing process that is so vital to homeland security. For example, for a process of this importance, we believe that respondents should perceive that the overall sharing process is “effective” or “very effective” and not “moderately effective” or lower. The scope of this review did not include the federal government’s critical infrastructure protection efforts, for which we have made numerous recommendations over the last several years. We also did not include the private sector, although we recognize the importance of this sector in that it owns about 80 percent of the nation’s infrastructure. Critical infrastructure protection efforts are focused on improving the sharing of information on incidents, threats, and vulnerabilities, and on providing warnings related to critical infrastructures, both within the federal government and between the federal government and state and local governments and the private sector. We conducted our review from June 2002 through May 2003 in accordance with generally accepted government auditing standards. To judge the extent of initiatives and efforts to share more information, and to identify possible duplication of effort, we gathered documents that outlined these efforts. Also, in our survey, respondents identified initiatives and efforts in which they were involved. The following table is not exhaustive, since not all respondents completed this survey question; however, it illustrates potential duplication of effort among the federal, state, and city governments. In order to establish a baseline for the information requirements of federal agencies and state and city government officials, we provided survey respondents with a list of potential types of homeland security information and asked them to indicate what they thought they needed to meet their homeland security objectives. We then asked the respondents to tell us how frequently they received the information they perceived they needed. Table 9 is a summary of the types of information the respondents reported they needed or critically needed and the percentage of the time that they reported frequently or regularly receiving the information.
For example, 98 percent of state officials reported that they needed or critically needed specific and actionable threat information, while they reported regularly receiving this type of information only 33 percent of the time. GAO provided a list of criteria that it believes represent elements of a sharing framework and asked respondents to identify which best characterize their current information-sharing framework. Table 10 shows that at all three levels of government the sharing framework is incomplete, with cities—and small cities in particular—having few elements of a sharing framework operational. We asked state, large-city, and small-city respondents to identify what they perceive to be factors that hinder their organizations from providing federal authorities with homeland security information or intelligence. In contrast to the several barriers that federal respondents identified to providing state and local officials with information and intelligence, table 11 shows that state and city respondents identified the lack of integrated databases as the only significant barrier. In addition to those named above, Lorelei St. James, Patricia Sari-Spear, Tinh Nguyen, Rebecca Shea, Adam Vodraska, and R.K. Wild made key contributions to this report. Information Technology: Terrorist Watch Lists Should Be Consolidated to Promote Better Integration and Sharing. GAO-03-322. Washington, D.C.: April 15, 2003. Combating Terrorism: Observations on National Strategies Related to Terrorism. GAO-03-519T. Washington, D.C.: March 3, 2003. Homeland Security: Effective Intergovernmental Coordination Is Key to Success. GAO-02-1013T. Washington, D.C.: August 23, 2002. Homeland Security: Key Elements to Unify Efforts Are Underway but Uncertainty Remains. GAO-02-610. Washington, D.C.: June 7, 2002. Information Sharing: Practices That Can Benefit Critical Infrastructure Protection. GAO-02-24. Washington, D.C.: October 15, 2001. Combating Terrorism: Selected Challenges and Related Recommendations. GAO-01-822. Washington, D.C.: September 20, 2001. FBI Intelligence Investigations: Coordination within Justice on Counterintelligence Criminal Matters Is Limited. GAO-01-780. Washington, D.C.: July 16, 2001.
The sharing of information by federal authorities with state and city governments is critical to effectively executing and unifying homeland security efforts. This report examines (1) what initiatives have been undertaken to improve information sharing and (2) whether federal, state, and city officials believe that the current information-sharing process is effective. Since September 11, 2001, federal, state, and city governments have established initiatives to improve the sharing of information to prevent terrorism. Many of these initiatives were implemented by states and cities and were not necessarily coordinated with other sharing initiatives, including those of federal agencies. At the same time, the Department of Homeland Security (DHS) has initiatives under way to enhance information sharing, including the development of a homeland security blueprint, known as an "enterprise architecture," to integrate sharing among federal, state, and city authorities. GAO surveyed federal, state, and city government officials on their perceptions of the effectiveness of the current information-sharing process. Numerous studies, testimonies, reports, and congressional commissions substantiate GAO's survey results. Overall, no level of government perceived the process as effective, particularly when sharing information with federal agencies. Information on the threats, methods, and techniques of terrorists is not routinely shared, and the information that is shared is not perceived as timely, accurate, or relevant. Moreover, federal officials have not yet established comprehensive processes and procedures to promote sharing. Federal respondents cited the inability of state and city officials to secure and protect classified information, those officials' lack of federal security clearances, and the lack of integrated databases as restricting their ability to share information. DHS needs to strengthen efforts to improve the information-sharing process so that the nation's ability to detect or prepare for attacks is not undermined.
The growing sophistication and effectiveness of cyber attacks, and the increasing number of information assurance and information assurance-enabled information technology (IT) products available for use on national security systems, have heightened federal attention to the need for information assurance. As a result of these trends, acquiring commercial IT products that perform as vendors claim has become a governmentwide challenge for national security systems. While not a complete solution, an important way to increase confidence in commercial IT products is through independent testing and evaluation of their security features and functions during design and development. In 1997, NIST and the National Security Agency collaborated to form the NIAP. The purpose of the partnership is to boost consumers’ and federal agencies’ confidence in information security products and to enhance the ability of U.S. companies to gain international recognition and acceptance for their products. The five main goals of NIAP are to promote the development and use of evaluated IT products and systems; champion the development and use of national and international standards for IT security; foster research and development in IT security requirements definition, test methods, tools, techniques, and assurance metrics; support a framework for international recognition and acceptance of IT security testing and evaluations; and facilitate the development and growth of a commercial security testing industry within the United States. To facilitate achievement of these goals, NIAP developed a national program called the Common Criteria Evaluation and Validation Scheme. The program is based on an international standard of general concepts and principles of IT security evaluations. The program evaluates, at various evaluation assurance levels (see app. II), commercial off-the-shelf information assurance and information assurance-enabled products for the federal government. These products can be items of hardware, software, or firmware. As part of the evaluation, agencies can specify the degree of confidence desired in a product through protection profiles. While a protection profile is not required in order to have a product evaluated, a vendor is required to develop a security target. NIAP evaluations are performed by accredited Common Criteria testing laboratories. While a product is undergoing evaluation, the NIAP validation body—an activity currently managed by the National Security Agency—approves the participation of security testing laboratories in accordance with accreditation policies and procedures. It also reviews the results of the security evaluations performed by the laboratories and issues a validation report, which summarizes and provides independent validation of the results. A product is considered NIAP-certified only after it is both evaluated by an accredited laboratory and validated by the validation body. Upon successful completion of these requirements, the validation body issues a Common Criteria certificate for the evaluated product. All evaluated products that receive a NIAP Common Criteria certificate appear on a validated products list available on NIAP’s Web site.
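The two-step certification just described, laboratory evaluation followed by validation-body review, can be summarized in a short sketch. The record type, laboratory names, and function below are our own illustration of the logic, not NIAP tooling.

```python
from dataclasses import dataclass

@dataclass
class EvaluationResult:
    """One laboratory's finding for one product (illustrative fields only)."""
    product: str
    lab: str
    assurance_level: int  # evaluation assurance level (EAL) tested against
    claims_met: bool      # did the product meet its security-target claims?

ACCREDITED_LABS = {"Lab A", "Lab B"}  # placeholder laboratory names
validated_products_list = []          # the public list of certified products

def certify(result: EvaluationResult) -> bool:
    """A product is NIAP-certified only after an accredited laboratory has
    evaluated it and the validation body confirms the claims were met;
    it then appears on the validated products list."""
    if result.lab in ACCREDITED_LABS and result.claims_met:
        validated_products_list.append(result.product)
        return True
    return False

certify(EvaluationResult("ExampleFW", "Lab A", 4, True))
print(validated_products_list)  # ['ExampleFW']
```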
According to the Committee on National Security Systems—a forum for the discussion of policy issues that sets federal policy and promulgates direction, operational procedures, and guidance for the security of national security systems—the fact that a product appears on the validated products list does not by itself mean that it is secure. A product’s listing on any Common Criteria validated products list means that the product was evaluated against its security claims and that it has met those claims. Figure 1 outlines the NIAP evaluation process. In order to maintain the validity of an evaluation when a product is upgraded to its next version, a vendor can request either a re-evaluation of the entire new product version or validation of only the changes in the product. To request the latter, a vendor must participate in the NIAP Assurance Maintenance Program. To participate in this program, a vendor must submit a request that addresses how it plans to maintain the product and a report of what will be maintained. Vendors can select any one of the 10 accredited commercial testing laboratories to perform product evaluations. The vendor and testing laboratory negotiate evaluation costs, which can vary according to several factors (see fig. 2): the laboratory chosen and the assurance level the product is tested against; the scope of evaluation—the tendency of vendors to include elements in their security target that agencies may not require introduces additional costs; and the design of the product—if a product is designed so that its security functions are performed by a small number of modules, it may be possible to limit the portion of the product that must be examined. The National Voluntary Laboratory Accreditation Program (NVLAP) identifies NVLAP-accredited laboratories on its Web site; accreditation criteria are established in accordance with the U.S. Code of Federal Regulations (CFR, Title 15, Part 285) and NVLAP Procedures and General Requirements, and encompass the requirements of ISO/IEC 17025 and the relevant requirements of ISO 9002. In January 2000, as revised in June 2003, a federal policy was established that required the use of evaluated products for national security systems. Specifically, the Committee on National Security Systems established National Security Telecommunications and Information Systems Security Policy Number 11. The policy required, effective July 1, 2002, that all commercial off-the-shelf information assurance and information assurance-enabled IT products acquired for use on national security systems be evaluated and validated in accordance with one of the following: (1) the International Common Criteria for Information Technology Security Evaluation Recognition Arrangement, (2) the NIAP Common Criteria Evaluation and Validation Scheme, or (3) the NIST Federal Information Processing Standards Cryptographic Module Validation Program. The objective of the policy is to ensure that these products, which are acquired by the federal government, undergo a standardized evaluation validating that a product either performs as it claims or meets the user’s security requirements. The policy requires that the evaluation and validation of such products be conducted by accredited commercial laboratories or by the National Security Agency for government off-the-shelf products.
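Read as a procurement rule, the policy is a simple disjunctive check: a product qualifies if it was validated under any one of the three schemes. The sketch below illustrates this; the scheme codes and record format are invented shorthand, not part of the policy itself.

```python
# Hedged illustration of the NSTISSP No. 11 acquisition rule for national
# security systems; the scheme identifiers are hypothetical shorthand.
ACCEPTED_SCHEMES = {
    "CCRA",        # International Common Criteria Recognition Arrangement
    "NIAP_CCEVS",  # NIAP Common Criteria Evaluation and Validation Scheme
    "NIST_CMVP",   # NIST FIPS Cryptographic Module Validation Program
}

def acquisition_allowed(product: dict) -> bool:
    """Return True if the product's validation satisfies the policy."""
    return product.get("validation_scheme") in ACCEPTED_SCHEMES

# Example: a firewall validated under the NIAP scheme passes the check.
print(acquisition_allowed({"name": "ExampleFW", "validation_scheme": "NIAP_CCEVS"}))
```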
The policy does not require compliance for information assurance products acquired prior to July 1, 2002, and it includes a provision for deferred compliance, on a case-by-case basis, when evaluated information assurance products do not cover the full range of potential user applications or do not incorporate the most current technology. Moreover, while not a requirement, the federal policy includes provisions for departments and agencies that may wish to consider using the NIAP process for the acquisition and appropriate implementation of evaluated and validated products for non-national security systems. The use of commercial products that have been independently tested and evaluated is only a part of a security solution that contributes to the overall information assurance of a product. Other complementary controls are needed, including sound operating procedures, adequate information security training, overall system certification and accreditation, sound security policies, and well-designed system architectures. According to the Committee on National Security Systems, the protection of systems encompasses more than just acquiring the right product. The committee notes that once acquired, these products must be integrated properly and subjected to a system accreditation process, as discussed above, which will help to ensure the integrity of the information and systems to be protected. For federal agencies, such an overall security solution is spelled out by the Federal Information Security Management Act. The act requires federal agencies to protect and maintain the confidentiality, integrity, and availability of their information and information systems. Among other things, the act requires each agency (including agencies with national security systems) to develop, document, and implement an agencywide information security program to provide information security for the information and information systems that support the operations and assets of the agency, including those provided or managed by another agency, contractor, or other source. More specifically, the Federal Information Security Management Act stipulates that the head of each agency operating or exercising control of a national security system is responsible for providing information security protections commensurate with the risk and magnitude of harm that could result should a security breach occur. The act also stipulates that agency heads are responsible for implementing information security policies and practices as required by standards and guidelines for national security systems. The Department of Defense and the Director of Central Intelligence have authority under the act to develop policies, guidelines, and standards for national security systems. The Federal Information Security Management Act also requires NIST, among other things, to provide technical assistance to agencies; to evaluate private sector security policies and practices; to evaluate commercially available IT, as well as practices developed for national security systems; and to assess their potential application by agencies to strengthen information security for non-national security systems. While the NIAP evaluation process offers benefits to national security systems, its effectiveness has not been measured or documented, and considerable challenges to acquiring and using NIAP-evaluated products exist.
NIAP process participants—vendors, laboratories, federal agencies, and NIAP officials—identified several benefits of using the process for national security systems: independent testing and evaluation of IT products and accreditation of the performing laboratories, which can give agencies confidence that the products will perform as claimed; international recognition of evaluated products, which provides agencies broader product selection and reduces vendor burden; discovery of software flaws in product security features and functions, which can cause vendors to fix them; and improvements to vendor development processes, which help to improve the overall quality of current and future products. Independent testing and evaluation of commercial IT products, and accreditation of the laboratories that perform the tests and evaluations, can give agencies increased assurance that the products will perform as vendors claim. Independent testing is a best practice for assuring conformance to functional, performance, reliability, and interoperability specifications—especially for systems requiring elevated levels of security or trust. As discussed previously, NIAP requires vendors to obtain independent testing and evaluation of specific security features and functions that are built into their products. Agencies are able to use the results of validation reports to distinguish between competing products and thus make better-informed IT procurement decisions. Further, the Committee on National Security Systems encourages agencies to review the security target of a product and determine its appropriateness for the environment in which the product will operate. In our survey, 15 of 18 federal agencies reported that they have derived benefits from acquiring and using products evaluated by the NIAP process. Of these 15 agencies, 11 reported that the availability of evaluated products helped the agency make IT procurement decisions; 9 reported that the process provided their agency with thorough and accurate product documentation; and 1 reported that evaluated products provided a common method of performing a particular security service that is implemented in different types of security or security-enabled devices, potentially resulting in a greater degree of standardization of elements (such as audit entries). Moreover, the NIST-administered NVLAP reviews laboratories annually to ensure competence and compliance with standards. Accreditation is granted to laboratories following their successful completion of a process that includes an application submission and fee payment by the laboratory, an on-site assessment, participation in proficiency testing, resolution of any deficiencies identified during the process, and a technical evaluation. The issuance of a certificate formally signifies that a laboratory has demonstrated that it meets all NVLAP requirements and operates in accordance with the management and technical requirements of the relevant standards. However, accreditation does not imply any guarantee of laboratory performance or of test and calibration data; it is solely a finding of laboratory competence and compliance with standards. Figure 3 shows the laboratory accreditation process. Another benefit of the NIAP evaluation process is NIAP’s membership in the Arrangement on the Recognition of Common Criteria Certificates in the Field of IT Security.
As part of the goals of the arrangement, members can increase the availability of evaluated IT products and protection profiles for national use and eliminate duplicate evaluations of IT products and protection profiles, giving agencies a broader selection of evaluated products from which to choose. Agencies can acquire products that have been evaluated at evaluation assurance levels 1 through 4 from any of the countries that have an evaluation scheme. As of February 2006, there were 22 global signatories to the recognition arrangement and 247 evaluated products available. The recognition arrangement also reduces the burden on vendors by limiting the number of criteria to which their products must conform and the number of evaluations that a vendor needs to complete in order to sell a product internationally. Because NIAP evaluations (evaluation assurance levels 1 through 4) are accepted under the arrangement, vendors that go through the NIAP process can sell their evaluated products in any of the 22 member countries. Vendors can thus save time and money, since they do not need to complete multiple evaluations to sell their product in different countries. Another benefit of the NIAP process is that it uncovers flaws during product evaluations and can cause vendors to fix them. NIAP, vendor, and laboratory officials stated that the NIAP evaluation process has uncovered flaws and vulnerabilities in evaluated products. According to NIAP officials, software flaws are found in nearly all evaluated products, with an evaluation resulting in an average of two to three fixes. According to the four vendors included in our review, the NIAP evaluation process discovered flaws or vulnerabilities in their products or their product documentation. Also, officials from one of the laboratories included in our review stated that all of the 90 products they had evaluated had documentation flaws. Although vendors have the option of removing from the evaluation security features or functions in which flaws have been identified, any flaws in the remaining security features or functions must be fixed in order to successfully complete the product evaluation. As a result, agencies procuring NIAP-evaluated products have a higher level of assurance that the product’s security features and functions will perform as claimed in the validation report. Product evaluations can also influence vendors to make improvements to their development processes that raise the overall quality of their current and future products. To complete a successful evaluation, vendors submit to laboratories their development documentation, which describes various processes related to security, such as software configuration controls. Officials at six of the seven vendors we visited stated that product evaluations had a positive influence on their development process. According to one of the six vendors, improvements to the development process resulting from a completed product evaluation would likely transfer to the development processes of other products and help improve the overall quality of their products. Laboratory officials also stated that NIAP evaluations often result in vendors improving their software development process, because vendors adopt some of the methodologies used to pass evaluation, such as test methods and documentation, for their own quality assurance processes.
Additionally, we previously reported that vendors who are proactive and adopt effective development processes and practices can drastically reduce the number of flaws in their products. NIAP process participants—NIAP officials and selected vendors, laboratories, and federal agencies—identified the following challenges to acquiring and using NIAP-evaluated products: NIAP-evaluated products do not always meet agencies’ needs, which limits agencies’ acquisition and use of these products; a lack of vendor awareness of the NIAP evaluation process hinders the timely completion of the evaluation and validation of products; a reduction in the number of validators available to certify products could contribute to delays in validating products for agency use; and a lack of performance measures and difficulty in documenting the effectiveness of the NIAP process make it difficult to demonstrate the program’s usefulness, improvements made to products’ security features and functions, or improvements to vendors’ development processes. Collectively, these challenges hinder the effective use of the NIAP evaluation process by vendors and agencies. Meeting agency needs for NIAP-evaluated products for use in national security systems can be a challenge. According to agency responses to our survey, 10 of 18 agencies that purchased NIAP-evaluated products reported experiencing challenges in acquiring those products. Specifically, 10 agencies noted that products on the NIAP-evaluated product list were not the most current versions, and 7 agencies noted that products needed by their agency were not included on the NIAP-evaluated product list. Agencies also reported additional challenges in acquiring NIAP-evaluated products: choices for evaluated products are somewhat limited compared with the general product marketplace, and the length of time required for a product to complete the evaluation process can delay availability of the most up-to-date technology. However, opportunities exist to better match agency needs with the availability of NIAP-evaluated products. Agencies can write protection profiles to define the exact security parameter specifications that they need. For example, two of the vendors we visited stated that they had their products evaluated against the Controlled Access Protection Profile, which provides agencies with a set of security functional and assurance requirements for their IT products and also provides a level of protection against threats of inadvertent or casual attempts to breach the system security. Vendors can enter the evaluation process before their products are publicly released, which can allow consumers to acquire the most up-to-date technology. One vendor we visited had taken such a proactive approach. Agencies can use the NIAP-validated products list to identify products that meet their needs. Because the number of available NIAP-evaluated products is increasing, agencies now have a variety of products from which to choose. In January 2002, there were about 20 evaluated products. As of February 2006, there were 127 evaluated products and 142 products in evaluation. These evaluated products span 26 categories of information assurance and information assurance-enabled products, including operating systems and firewalls. As products continue to enter evaluation, agencies’ needs may be better met.
Vendors can, by participating in the NIAP Assurance Maintenance Program, maintain the validity of an evaluation when a product is upgraded to its next version by either requesting a re-evaluation of the entire new product version or validation of only the changes in the product. Vendors’ participation in this program may allow agencies to have the most recent products available to them. Agencies can increase their selection of products through the Common Criteria Recognition Arrangement—available on the Common Criteria portal Web site—which currently has 247 evaluated products available. The products listed on the Web site give agencies more choices of products evaluated at evaluation assurance levels 4 and below. Another challenge faced by the NIAP process is the lack of vendor awareness regarding the requirements of the evaluation process. For example, vendors who are new to the evaluation process are not aware of the extensive documentation requirements. Creating documentation to meet evaluation requirements can be an expensive and time-consuming process. According to laboratory officials, about six months is the average time for vendors to complete the required documentation before test and evaluation can begin. However, if vendors consistently maintain their documentation, subsequent evaluations can be faster and less expensive, since the vendor has previously produced the documentation and is already familiar with the process. Also, some vendors are not as active as others in the evaluation process, which can cause varying lengths of time for completing the evaluation. Vendors who are actively involved in the process, including fixing flaws, are usually able to complete it more quickly than those who are not. According to one laboratory, the more active a vendor is in the evaluation process, the faster and less expensive the evaluation will be for the vendor. As such, the amount of involvement by the vendor during the process and the timeliness with which it fixes discovered flaws affect the length of time the product is in evaluation. Furthermore, some vendors and laboratories do not have the same perception of the length of time required to perform the evaluation. According to laboratory officials, the length of time needed for conducting product evaluations varies depending on the type of product being evaluated and the evaluation assurance level (see fig. 4). Vendors are often not aware of these requirements and tend to underestimate the length of time required for evaluations. Vendors and laboratories also perceive the length of evaluations differently because they mark the start and end dates differently, as the sketch following this discussion illustrates. Vendors measure the length of an evaluation from the day they decide to go into evaluation to the day they receive their product certificate. Their measurement includes selecting and negotiating with a laboratory, preparing required documentation, and testing the security features and functions. Laboratories, on the other hand, consider the length of an evaluation to be from the day they sign a contract with the vendor to the day they complete testing. While Common Criteria user forums for program participants have been held, with NIAP participation, NIAP itself has not developed education and training workshops that focus on educating participants on specific requirements, such as the documentation requirements.
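To make the timing difference concrete, the following minimal sketch computes one evaluation's length under both definitions; the dates and milestone names are hypothetical, not drawn from any actual evaluation.

```python
from datetime import date

# Hypothetical milestones for a single product evaluation.
decided_to_evaluate = date(2004, 1, 15)  # vendor decides to pursue evaluation
contract_signed     = date(2004, 7, 1)   # laboratory engagement begins
testing_complete    = date(2005, 4, 1)   # laboratory finishes testing
certificate_issued  = date(2005, 6, 15)  # validation body issues certificate

# The vendor counts from its decision to the certificate; the laboratory
# counts from contract signing to the end of testing.
vendor_view = (certificate_issued - decided_to_evaluate).days  # 517 days
lab_view    = (testing_complete - contract_signed).days        # 274 days
print(f"vendor's view: {vendor_view} days; laboratory's view: {lab_view} days")
```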
Education and training workshops of this kind could help ensure that vendors and laboratories are aware of the NIAP process requirements and could contribute to the efficiency of product evaluations. NIAP officials acknowledge that such educational offerings could be beneficial. Over the last year, NIAP has seen a reduction in the number of qualified validators. NIAP officials stated that one of the most significant challenges the NIAP process faces is hiring and maintaining qualified personnel to validate products. In fiscal year 2005, the NIAP program lost approximately four government validators and six contractor validators. According to the NIAP Director, maintaining qualified personnel to perform validation tasks is difficult largely because many validators are nearing retirement age and the job is not an attractive position for recent college graduates. Validators have a complex job with tasks that span the entire evaluation process; they incrementally review the results of the various tests of functional and assurance requirements as they are completed by the laboratory. As a result, once validators are hired, it typically takes 12 to 24 months to train them to become proficient in performing validation tasks. If the NIAP program continues to see a reduction in validators, there is an increased risk that a backlog of products awaiting NIAP certification will develop, which could further slow the already lengthy evaluation process. The number of products entering evaluation is steadily increasing: in fiscal year 2002 there were approximately 20 products in evaluation, and as of February 2006, there were 142 products in evaluation. Additionally, approximately five to seven products enter evaluation each month. To address the widening gap between the number of products entering the process and the number of validators available to review products, NIAP intends to pursue legislation allowing it to recoup the costs of validations and hire additional staff. A best practice in public and private organizations is the use of performance measurements to gain insight into—and make adjustments to—the effectiveness and efficiency of programs, processes, and people. Performance measurement is a process of assessing progress toward achieving predetermined goals; it includes gathering information on the efficiency with which resources are transformed into goods and services, the quality of those outputs, and the effectiveness of government operations in terms of their specific contributions to program objectives. Establishing, updating, and collecting performance metrics to measure and track progress can assist organizations in determining whether they are fulfilling their vision and meeting their customer-focused strategic goals. The NIAP program, however, lacks such metrics and thus faces difficulty in documenting its effectiveness. The program has not collected and analyzed data on the findings, flaws, and fixes resulting from product tests and evaluations. NIAP officials pointed out that nondisclosure agreements between laboratories and vendors make it difficult to collect and document such data: laboratory information on findings, flaws, and fixes exists, but it has not been collected because of those agreements. Nondisclosure agreements are important for protecting vendors’ proprietary data from being released to the public and competitors, but aggregate reporting offers one way to reconcile these interests, as the sketch below suggests.
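A minimal sketch of such aggregate reporting follows. The per-evaluation record format is an assumption made for illustration, since NIAP's actual data holdings are not public; the point is that only totals leave the function, so no single vendor's results are identifiable in the output.

```python
from collections import Counter

def summarize_evaluations(records):
    """Roll per-evaluation results up into vendor-anonymous totals.

    Each record is assumed (hypothetically) to look like:
        {"vendor": "...", "findings": 3, "flaws": 2, "fixes": 2}
    """
    totals = Counter()
    for rec in records:
        totals["evaluations"] += 1
        totals["findings"] += rec["findings"]
        totals["flaws"] += rec["flaws"]
        totals["fixes"] += rec["fixes"]
    return dict(totals)  # aggregate counts only; vendor names are dropped

# Example with made-up data: two evaluations yield summary counts only.
sample = [
    {"vendor": "A", "findings": 4, "flaws": 3, "fixes": 3},
    {"vendor": "B", "findings": 2, "flaws": 2, "fixes": 2},
]
print(summarize_evaluations(sample))
```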
Releasing summary laboratory information on findings, flaws, and fixes in this manner, while still considering the requirements of nondisclosure agreements, could be beneficial in determining the effectiveness of the NIAP program. Without this type of information, NIAP will have difficulty demonstrating its effectiveness and will be challenged to know, and to demonstrate, whether the process is meeting its goals. While the National Security Telecommunications and Information Systems Security Policy Number 11 already allows agencies with non-national security systems to acquire NIAP-evaluated products, expanding the policy to mandate that such systems acquire NIAP-evaluated products may yield many of the same benefits and challenges experienced by current process participants and could further strain resources. For example, one identified benefit for national security systems—independent testing and evaluation of IT products—gives agencies confidence that validated features of a product, whether acquired for national or non-national security systems, will perform as claimed by the vendor. Similarly, one challenge—a reduction in the number of validators for certifying products—could contribute to delays in validating products, whether for national or non-national security systems. Further, expanding the requirement to mandate the policy for non-national security systems may exacerbate current resource constraints related to hiring and maintaining qualified personnel to validate products. Nevertheless, agencies with non-national security systems have in fact acquired NIAP-evaluated products. Specifically, 10 of the federal agencies we surveyed indicated that they have used the NIAP process to acquire evaluated products for non-national security systems, even though they are not required to do so. One agency is considering the use of NIAP-evaluated products during its product reviews and is also considering including NIAP-evaluated products as part of its procurement strategy. Moreover, agencies seeking information assurance for their non-national security systems, but that do not acquire NIAP-evaluated products, have guidance and standards available to them. Specifically, as required by the Federal Information Security Management Act, NIST has developed and issued standards and guidelines, including minimum information security requirements, for the acquisition and use of security-related IT products for non-national security systems. These standards and guidelines are to be complementary to those established for the protection of national security systems and the information contained in such systems. Further, NIST issued additional guidance to agencies for incorporating security into all phases of the system development life cycle as a framework for selecting and acquiring cost-effective security controls. In August 2000, NIST also issued guidance on security assurance for non-national security systems in NIST Special Publication 800-23: Guideline to Federal Organizations on Security Assurance and Acquisition/Use of Tested/Evaluated Products. While a range of controls are needed to protect national security systems against increasingly sophisticated cyber attacks, establishing effective policies and processes for acquiring products that have been validated by an independent party is important to the federal government’s ability to procure and deploy the right technologies.
Acquiring NIAP-evaluated products can increase the federal government's confidence that the security features and functions of its IT products and systems will perform as claimed. Despite the benefits of acquiring and using IT products that have gone through the rigorous tests and evaluations of NIAP, the program faces considerable challenges that hinder its effective use by vendors and agencies. These challenges include the difficulty in matching agencies' needs with the availability of NIAP-evaluated products, vendors' lack of awareness regarding the evaluation process, a reduction in the number of validators to certify products, and difficulty in measuring and documenting the effectiveness of the NIAP process. Until these challenges are addressed, they will continue to undermine the efficacy of NIAP. Regarding expanding the NIAP requirement to non-national security systems, pursuing this approach may further exacerbate current resource constraints. To help address these challenges and document the effectiveness of the NIAP evaluation process, we recommend that the Secretary of Defense direct the Director of the National Security Agency, in coordination with NIST under the provisions of the NIAP partnership, to take the following two actions:
1. Coordinate with vendors, laboratories, and various industry associations that have knowledge of the evaluation process to develop awareness training workshops for program participants.
2. Consider collecting, analyzing, and reporting metrics on the effectiveness of NIAP tests and evaluations. Such metrics could include summary information on the number of findings, flaws, and associated fixes.
In providing written comments on a draft of this report (reprinted in app. III), the Deputy Assistant Secretary of Defense (Deputy Chief Information Officer) partially agreed with one of our recommendations, agreed with the other, and described ongoing and planned efforts to address them. While the Deputy Assistant Secretary agreed with our recommendation to develop awareness training workshops for NIAP program participants, she stated that the NIAP must also live with the realities of the challenges that we identified in our report. The Deputy Assistant Secretary noted that, as our report highlights, the NIAP program is facing considerable challenges with resources and funding to sustain the current day-to-day running of the program and that it is not feasible for the NIAP office to increase its current efforts in developing and hosting the recommended training and education. Nonetheless, she also noted that the Secretary of Defense should direct the Director of the National Security Agency, in coordination with NIST under the provisions of the NIAP, to coordinate with the vendors, laboratories, and various industry associations that have knowledge of the evaluation process to develop awareness training workshops for program participants within the current constraints, and to work with the commercial laboratories, vendors, and others to identify ways that organizations outside of NIAP can further this initiative. We agree that NIAP should continue its efforts in awareness and education training, and we endorse increasing such efforts as resources permit. The Deputy Assistant Secretary agreed with our recommendation to collect, analyze, and report metrics on the effectiveness of NIAP tests and evaluations, and she stated that the NIAP has already started researching ways to institute metrics to help determine the effectiveness of the evaluation program.
She noted that the goal of collecting metrics is to demonstrate to the NIAP constituency that NIAP evaluations do provide value by improving the security of the evaluated products and by providing the end customer with assurance that these products perform their security functions as intended, even when faced with adverse conditions. The Department of Defense and the Department of Homeland Security also provided technical comments, which we considered and addressed in our report, as appropriate. We are sending copies of this report to the Departments of Commerce (National Institute of Standards and Technology), Defense, and Homeland Security; the Office of Management and Budget; the General Services Administration; and other interested parties. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-6244 or wilshuseng@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. Our objectives were to identify (1) the governmentwide benefits and challenges of the National Information Assurance Partnership (NIAP) evaluation process and (2) the potential benefits and challenges of expanding the requirement of NIAP to non-national security systems, including sensitive but unclassified systems. To determine the benefits and challenges for both objectives, we analyzed and reviewed a number of policy documents and reports from both industry and government. We also reviewed relevant federal policies relating to information security issues. To gain insight into the NIAP evaluation process, we met with software vendors and certification laboratories to discuss their experiences with NIAP and their applicable processes, and we reviewed their relevant documentation. We selected vendors based on broad or distinguishing product capabilities demonstrating a range of features, on brand recognition based on high ratings received in reviews conducted by information security magazines, and on the frequency with which vendors were mentioned in discussions with industry experts and in information security literature. The vendors selected represented different information technology (IT) market sectors, were considered leaders in their fields, and varied in size. To determine the industrywide perspective on NIAP, we met with two IT industry groups: the Information Technology Association of America and the Cyber Security Industry Alliance. We selected these industry groups because they represent a cross-section of the IT industry as a whole. To gain insight into the program's functions and usefulness to agencies, we spoke with government officials from the Department of Commerce (specifically the National Institute of Standards and Technology), the Department of Defense, the Department of Homeland Security, the General Services Administration, and the Office of Management and Budget. We also surveyed officials from the 24 federal agencies designated under the Chief Financial Officers Act of 1990 to determine their current use of NIAP-evaluated products, the perceived usefulness of the program, and the benefits and challenges associated with acquiring and using NIAP-evaluated products. For each agency survey, we identified the office of the chief information officer, notified that office of our work, and distributed the survey instrument to each office via an e-mail attachment.
In addition, we discussed the purpose and content of the survey instrument with agency officials when requested. All 24 agencies responded to our survey. We did not verify the accuracy of the agencies' responses; however, we reviewed supporting documentation that agencies provided to validate their responses. We contacted agency officials when necessary for follow-up information. We then analyzed the agencies' responses. Although this was not a sample survey and therefore involved no sampling errors, conducting any survey may introduce other kinds of errors. For example, difficulties in how a particular question is interpreted, in the sources of information available to respondents, or in how the data are entered into a database or analyzed can introduce unwanted variability into the survey results. We took steps in the development of the survey instrument, the data collection, and the data analysis to minimize these survey-related errors. For example, we developed the questionnaire in two stages. First, we had a survey specialist design the survey instrument in collaboration with subject-matter experts. Then, we pretested the instrument at two federal departments and internally at GAO to ensure that questions were relevant, clearly stated, and easy to answer. We conducted our work in Washington, D.C., from May 2005 through February 2006, in accordance with generally accepted government auditing standards. In addition to the individual named above, Jenniffer Wilson (Assistant Director), Neil Doherty, Jennifer Franks, Joel Grossman, Matthew Grote, Min Hyun, Anjalique Lawrence, J. Paul Nicholas, Karen Talley, and Amos Tevelow were key contributors to this report.
In 1997, the National Security Agency and the National Institute of Standards and Technology formed the National Information Assurance Partnership (NIAP) to boost federal agencies' and consumers' confidence in information security products manufactured by vendors. To facilitate this goal, NIAP developed a national program that requires accredited laboratories to independently evaluate and validate the security of these products for use in national security systems. These systems are those under control of the U.S. government that contain classified information or involve intelligence activities. GAO was asked to identify (1) the governmentwide benefits and challenges of the NIAP evaluation process on national security systems, and (2) the potential benefits and challenges of expanding the requirement of NIAP to non-national security systems, including sensitive but unclassified systems. While NIAP process participants--vendors, laboratories, and federal agencies--indicated that the process offers benefits for use in national security systems, its effectiveness has not been measured or documented, and considerable challenges to acquiring and using NIAP-evaluated products exist. Specific benefits included independent testing and evaluation of products and accreditation of the performing laboratories, the discovery and correction of product flaws, and improvements to vendor development processes. However, process participants also face several challenges, including difficulty in matching agencies' needs with the availability of NIAP-evaluated products, vendors' lack of awareness regarding the evaluation process, and a lack of performance measures and difficulty in documenting the effectiveness of the NIAP evaluation process. Collectively, these challenges hinder the effective use of the NIAP evaluation process by vendors and agencies. Expanding the requirement of the NIAP evaluation process to non-national security systems is likely to yield similar benefits and challenges as those experienced by current process participants. For example, a current benefit--independent testing and evaluation of IT products--gives agencies confidence that validated features of a product will perform as claimed by the vendor. However, federal policy already allows agencies with non-national security systems to consider acquiring NIAP-evaluated products for those systems, and requiring that they do so may further exacerbate current resource constraints related to the evaluation and validation of products. In the absence of such a requirement, agencies seeking information assurance (measures that defend and protect information and information systems by ensuring their confidentiality, integrity, authenticity, availability, and utility) for their non-national security systems have other federal guidance and standards available to them.
The DHS Privacy Office was established with the appointment of the first Chief Privacy Officer in April 2003. The Chief Privacy Officer is appointed by the Secretary and reports directly to the Secretary. The Chief Privacy Officer serves as the designated senior agency official for privacy, as has been required by the Office of Management and Budget (OMB) of all major departments and agencies since 2005. As a part of the DHS organizational structure, the Chief Privacy Officer has the ability to serve as a consultant on privacy issues to other departmental entities that may not have adequate expertise on privacy issues. There are also component-level and program-level privacy officers at the Transportation Security Administration (TSA), the U.S. Visitor and Immigrant Status Indicator Technology (US-VISIT) program, and U.S. Citizenship and Immigration Services. When the Privacy Office was initially established, it had 5 full-time employees, including the Chief Privacy Officer. Since then, the staff has expanded to 16 full-time employees. As of February 2007, the Privacy Office also had 9 full-time and 3 half-time contractor staff. The first Chief Privacy Officer served from April 2003 to September 2005, followed by an Acting Chief Privacy Officer who served through July 2006. In July 2006, the Secretary appointed a second permanent Chief Privacy Officer. The Privacy Office is responsible for ensuring that DHS complies with federal laws that govern the use of personal information by the federal government. Among these laws are the Homeland Security Act of 2002 (as amended by the Intelligence Reform and Terrorism Prevention Act of 2004), the Privacy Act of 1974, and the E-Gov Act of 2002. Based on these laws, the Privacy Office's major responsibilities can be summarized into four broad categories:
1. reviewing and approving PIAs,
2. integrating privacy considerations into DHS decision making,
3. reviewing and approving public notices required by the Privacy Act, and
4. preparing and issuing reports.
The Privacy Office is responsible for ensuring departmental compliance with the privacy provisions of the E-Gov Act. Specifically, section 208 of the E-Gov Act is designed to enhance protection of personally identifiable information in government information systems and information collections by requiring that agencies conduct PIAs. In addition, the Homeland Security Act requires the Chief Privacy Officer to conduct a PIA for proposed rules of the department on the privacy of personal information. According to OMB guidance, a PIA is an analysis of how information is handled: (1) to ensure that handling conforms to applicable legal, regulatory, and policy requirements regarding privacy; (2) to determine the risks and effects of collecting, maintaining, and disseminating personally identifiable information in an electronic information system; and (3) to examine and evaluate protections and alternative processes for handling information to mitigate potential risks to privacy. Agencies must conduct PIAs before they (1) develop or procure information technology that collects, maintains, or disseminates personally identifiable information or (2) initiate any new collections of personal information that will be collected, maintained, or disseminated using information technology—if the same questions are asked of 10 or more people.
To the extent that PIAs are made publicly available, they provide explanations to the public about such things as what information will be collected, why it is being collected, how it is to be used, and how the system and data will be maintained and protected.

Integrating privacy considerations into the DHS decision-making process

Several of the Privacy Office's statutory responsibilities involve ensuring that the major decisions and operations of the department do not have an adverse impact on privacy. Specifically, the Homeland Security Act requires that the Privacy Office assure that the use of technologies by the department sustains, and does not erode, privacy protections relating to the use, collection, and disclosure of personal information. The act further requires that the Privacy Office evaluate legislative and regulatory proposals involving the collection, use, and disclosure of personal information by the federal government. It also requires the office to coordinate with the DHS Officer for Civil Rights and Civil Liberties on those issues.

Reviewing and approving public notices required by the Privacy Act

The Privacy Office is required by the Homeland Security Act to assure that personal information contained in Privacy Act systems of records is handled in full compliance with fair information practices as set out in the Privacy Act of 1974. The Privacy Act places limitations on agencies' collection, disclosure, and use of personally identifiable information that is maintained in their systems of records. The act defines a record as any item, collection, or grouping of information about an individual that is maintained by an agency and contains that individual's name or other personal identifier, such as a Social Security number. It defines a "system of records" as a group of records under the control of any agency from which information is retrieved by the name of the individual or by an individual identifier. The Privacy Act requires agencies to notify the public, via a notice in the Federal Register, when they create or modify a system of records. This notice must include information such as the type of information collected, the types of individuals about whom information is collected, the intended "routine" uses of the information, and procedures that individuals can use to review and correct their personal information. The act also requires agencies to define—and limit themselves to—specific purposes for collecting the information.

Preparing and issuing reports

The Homeland Security Act requires the Privacy Office to prepare annual reports to Congress detailing the department's activities affecting privacy, including complaints of privacy violations and implementation of the Privacy Act of 1974. In addition to the reporting requirements under the Homeland Security Act, Congress has occasionally directed the Privacy Office to report on specific technologies and programs. For example, in the conference report for the DHS appropriations act for fiscal year 2005, Congress directed the Privacy Office to report on DHS's use of data mining technologies. The Intelligence Reform and Terrorism Prevention Act of 2004 also required the Chief Privacy Officer to submit a report to Congress on the impact on privacy and civil liberties of the DHS-maintained Automatic Selectee and No-Fly lists, which contain names of potential airline passengers who are to be selected for secondary screening or not allowed to board aircraft.
In addition, the Privacy Office can initiate its own investigations and produce reports under its Homeland Security Act authority to report on complaints of privacy violations and to assure that technologies sustain and do not erode privacy protections. One of the Privacy Office's primary responsibilities is to review and approve PIAs to ensure departmental compliance with the privacy provisions (section 208) of the E-Gov Act of 2002. The Privacy Office has established a PIA compliance framework to carry out this responsibility. The centerpiece of the Privacy Office's compliance framework is its written guidance on when a PIA must be conducted, how the associated analysis should be performed, and how the final document should be written. Although based on OMB's guidance, the Privacy Office's guidance goes further in several areas. For example, the guidance does not exempt national security systems and also clarifies that systems in the pilot testing phase are not exempt. The DHS guidance also provides more specific instructions than OMB's guidance on the level of detail required. For example, the DHS guidance requires a discussion of a system's data retention period; procedures for allowing individual access, redress, and correction of information; and technologies used in the system, such as biometrics or radio frequency identification (RFID). The Privacy Office has taken steps to continually improve its PIA guidance. Initially released in February 2004, the guidance has been updated each year since then. These updates have increased the emphasis on describing the privacy analysis that should take place in making system design decisions that affect privacy. For example, regarding information collection, the latest guidance requires program officials to explain how the collection supports the purpose(s) of the system or program and the mission of the organization. The guidance also reminds agencies that the information collected should be relevant and necessary to accomplish the stated purpose(s) and mission. To accompany its written guidance, the Privacy Office has also developed a PIA template and conducted a number of training sessions to further assist DHS personnel. Our analysis of published DHS PIAs shows significant quality improvements in those completed recently compared with those from 2 or 3 years ago. Overall, there is a greater emphasis on analysis of system development decisions that affect privacy, because the guidance now requires that such analysis be performed and described. For example, the most recent PIAs include assessments of planned uses of the system and information, plans for data retention, and the extent to which the information is to be shared outside of DHS. Earlier PIAs did not include any of these analyses. The emphasis on analysis should allow the public to more easily understand a system and its impact on privacy. Further, our analysis found that use of the template has resulted in a more standardized structure, format, and content, making the PIAs more easily understandable to the general reader. In addition to issuing written guidance, the Privacy Office has taken steps to integrate PIA development into the department's established operational processes. For example, the Privacy Office is using the OMB Exhibit 300 budget process as an opportunity to ensure that systems containing personal information are identified and that PIAs are conducted when needed.
OMB requires agencies to submit an Exhibit 300 Capital Asset Plan and Business Case for their major information technology systems in order to receive funding. The Exhibit 300 template asks whether a system has a PIA and if it is publicly available. Because the Privacy Office gives final departmental approval for all such assessments, it is able to use the Exhibit 300 process to ensure the assessments are completed. According to Privacy Office officials, the threat of losing funds has helped to encourage components to conduct PIAs. Integration of the PIA requirement into these management processes is beneficial in that it provides an opportunity to address privacy considerations during systems development, as envisioned by OMB's guidance. Because of concerns expressed by component officials that the Privacy Office's review process takes a long time and is difficult to understand, the office has made efforts to improve the process and make it more transparent to DHS components. Specifically, the office has established a five-stage review process. Under this process, a PIA must satisfy all the requirements of a given stage before it can progress to the next one. The review process is intended to take 5 to 6 weeks, with each stage taking 1 week. Figure 1 illustrates the stages of the review process. Through efforts such as the compliance framework, the Privacy Office has steadily increased the number of PIAs it has approved and published each year. Since 2004, PIA output by the Privacy Office has more than doubled. According to Privacy Office officials, the increase in output was aided by the development and implementation of the Privacy Office's structured guidance and review process. In addition, Privacy Office officials stated that as DHS components gain more experience, the output should continue to increase. Because the Privacy Office has focused departmental attention on the development and review process and established a structured framework for identifying systems that need PIAs, the number of identified DHS systems requiring a PIA has increased dramatically. According to its annual Federal Information Security Management Act reports, DHS identified 46 systems as requiring a PIA in fiscal year 2005 and 143 systems in fiscal year 2006. Based on the privacy threshold analysis process, the Privacy Office estimates that 188 systems will require a PIA in fiscal year 2007. Considering that only 25 were published in fiscal year 2006, it will likely be very difficult for DHS to expeditiously develop and issue PIAs for all of these systems because developing and approving them can be a lengthy process. According to estimates by Privacy Office officials, it takes approximately six months to develop and approve a PIA, but the office is working to reduce this time. The Privacy Office is examining several potential changes to the development process that would allow it to process an increased number of PIAs. One such option is to allow DHS components to quickly amend preexisting PIAs. An amendment would only need to contain information on changes to the system and would allow for quicker development and review. The Privacy Office is also considering developing standardized PIAs for commonly used types of systems or uses. For example, such an assessment may be developed for local area networks. Systems intended to collect or use information outside what is specified in the standardized PIA would need approval from the Privacy Office.
The Privacy Office has also taken steps to integrate privacy considerations into the DHS decision-making process. These actions are intended to address a number of statutory requirements, including that the Privacy Office assure that the use of technologies sustains, and does not erode, privacy protections; that it evaluate legislative and regulatory proposals involving the collection, use, and disclosure of personal information by the federal government; and that it coordinate with the DHS Officer for Civil Rights and Civil Liberties. For example, in 2004, the first Chief Privacy Officer established the DHS Data Privacy and Integrity Advisory Committee to advise her and the Secretary on issues within the department that affect individual privacy, as well as data integrity, interoperability, and other privacy-related issues. The committee has examined a variety of privacy issues, produced reports, and made recommendations. In December 2006, the committee adopted two reports: one on the use of RFID for identity verification and another on the use of commercial data. According to Privacy Office officials, the additional instructions on the use of commercial data contained in the May 2007 PIA guidance update were based, in part, on the advisory committee's report on commercial data. In addition to its reports, which are publicly available, the committee meets quarterly in Washington, D.C., and in other parts of the country where DHS programs operate. These meetings are open to the public, and transcripts of the meetings are posted on the Privacy Office's Web site. DHS officials from major programs and initiatives involving the use of personal data, such as US-VISIT, Secure Flight, and the Western Hemisphere Travel Initiative, have testified before the committee. Private sector officials have also testified on topics such as data integrity, identity authentication, and RFID. Because the committee is made up of experts from the private sector and the academic community, it brings an outside perspective to privacy issues through its reports and recommendations. In addition, because it was established as a federal advisory committee, its products and proceedings are publicly available and thus provide a public forum for the analysis of privacy issues that affect DHS operations. The Privacy Office has also taken steps to raise awareness of privacy issues by holding a series of public workshops. The first workshop, on the use of commercial data for homeland security, was held in September 2005. Panel participants consisted of representatives from academia, the private sector, and government. In April 2006, a second workshop addressed the concept of public notices and freedom of information frameworks. In June 2006, a workshop was held on the policy, legal, and operational frameworks for PIAs and privacy threshold analyses and included a tutorial for conducting PIAs. Hosting public workshops is beneficial in that it allows for communication between the Privacy Office and those who may be affected by DHS programs, including the privacy advocacy community and the general public. Another part of the Privacy Office's efforts to carry out its Homeland Security Act requirements is its participation in departmental policy development for initiatives that have a potential impact on privacy. The Privacy Office has been involved in policy discussions related to several major DHS initiatives and, according to department officials, the office has provided input on several privacy-related decisions.
The following are major initiatives in which the Privacy Office has participated.

Passenger name record negotiations with the European Union

United States law requires airlines operating flights to or from the United States to provide the Bureau of Customs and Border Protection (CBP) with certain passenger reservation information for purposes of combating terrorism and other serious criminal offenses. In May 2004, an international agreement on the processing of this information was signed by DHS and the European Union. Prior to the agreement, CBP established a set of terms for acquiring and protecting data on European Union citizens, referred to as the "Undertakings." In September 2005, under the direction of the first Chief Privacy Officer, the Privacy Office issued a report on CBP's compliance with the Undertakings in which it provided guidance on necessary compliance measures and also required certain remediation steps. For example, the Privacy Office required CBP to review and delete data outside the 34 data elements permitted by the agreement. According to the report, the deletion of these extraneous elements was completed in August 2005 and was verified by the Privacy Office. In October 2006, DHS and the European Union completed negotiations on a new interim agreement concerning the transfer and processing of passenger reservation information. The Director of International Privacy Policy within the Privacy Office participated in these negotiations along with others from DHS in the Policy Office, Office of General Counsel, and CBP.

The Western Hemisphere Travel Initiative is a joint effort between DHS and the Department of State to implement new documentation requirements for certain U.S. citizens and nonimmigrant aliens entering the United States. DHS and State have proposed the creation of a special identification card that would serve as an alternative to a traditional passport for use by U.S. citizens who cross land borders or travel by sea between the United States, Canada, Mexico, the Caribbean, or Bermuda. The card is to use a technology called vicinity RFID to transmit information on travelers to CBP officers at land and sea ports of entry. Advocacy groups have raised concerns about the proposed use of vicinity RFID because of privacy and security risks, due primarily to the ability to read information from these cards from distances of up to 20 feet. The Privacy Office was consulted on the choice of identification technology for the cards. According to the DHS Policy Office, Privacy Office input led to a decision not to store or transmit personally identifiable information on the RFID chip on the card. Instead, DHS is planning on transmitting a randomly generated identifier for individuals, which is to be used by DHS to retrieve information about the individual from a centralized database.

REAL ID Act of 2005

Among other things, the REAL ID Act requires DHS to consult with the Department of Transportation and the states in issuing regulations that set minimum standards for state-issued REAL ID drivers' licenses and identification cards to be accepted for official purposes after May 11, 2008. Advocacy groups have raised a number of privacy concerns about REAL ID, chiefly that it creates a de facto national ID that could be used in the future for privacy-infringing purposes and that it puts individuals at increased risk of identity theft.
The DHS Policy Office reported that it included Privacy Office officials, as well as officials from the Office of Civil Rights and Civil Liberties, in developing its implementing rule for REAL ID. The Privacy Office's participation in REAL ID also served to address its requirement to evaluate legislative and regulatory proposals concerning the collection, use, and disclosure of personal information by the federal government. According to its November 2006 annual report, the Privacy Office championed the need for privacy protections regarding the collection and use of the personal information that will be stored on the REAL ID drivers' licenses. Further, the office reported that it funded a contract to examine the creation of a state federation to implement the information sharing required by the act in a privacy-sensitive manner. As we have previously reported, DHS has used personal information obtained from commercial data providers for immigration, fraud detection, and border screening programs, but, like other agencies, it does not have policies in place concerning its uses of these data. Accordingly, we recommended that DHS, as well as other agencies, develop such policies. In response to the concerns raised in our report and by privacy advocacy groups, Privacy Office officials said they were drafting a departmentwide policy on the use of commercial data. Once drafted by the Privacy Office, this policy is to undergo a departmental review process (including review by the Policy Office, General Counsel, and Office of the Secretary), followed by a review by OMB prior to adoption. These examples demonstrate specific involvement of the Privacy Office in major DHS initiatives. However, Privacy Office input is only one factor that DHS officials consider in formulating decisions about major programs, and Privacy Office participation does not guarantee that privacy concerns will be fully addressed. For example, our previous work has highlighted problems in implementing privacy protections in specific DHS programs, including Secure Flight and the ADVISE program. Nevertheless, the Privacy Office's participation in policy decisions provides an opportunity for privacy concerns to be raised explicitly and considered in the development of DHS policies. The Privacy Office has also taken steps to address its mandate to coordinate with the DHS Officer for Civil Rights and Civil Liberties on programs, policies, and procedures that involve civil rights, civil liberties, and privacy considerations, and to ensure that Congress receives appropriate reports. The DHS Officer for Civil Rights and Civil Liberties cited three specific instances where the offices have collaborated. First, as stated previously, both offices have participated in the working group involved in drafting the implementing regulations for REAL ID. Second, the two offices coordinated in preparing the Privacy Office's report to Congress assessing the privacy and civil liberties impact of the No-Fly and Selectee lists used by DHS for passenger prescreening. Third, the two offices coordinated on providing input for the "One-Stop Redress" initiative, a joint effort between the Department of State and DHS to implement a streamlined redress center for travelers who have concerns about their treatment in the screening process. The DHS Privacy Office is responsible for reviewing and approving DHS system-of-records notices to ensure that the department complies with the Privacy Act of 1974.
Specifically, the Homeland Security Act requires the Privacy Office to "assure that personal information contained in Privacy Act systems of records is handled in full compliance with fair information practices as set out in the Privacy Act of 1974." The Privacy Act requires that federal agencies publish notices in the Federal Register on the establishment or revision of systems of records. These notices must describe the nature of a system of records and the information it maintains. Additionally, OMB has issued various guidance documents for implementing the Privacy Act. OMB Circular A-130, for example, outlines agency responsibilities for maintaining records on individuals and directs government agencies to conduct biennial reviews of each system-of-records notice to ensure that it accurately describes the system of records. The Privacy Office has taken steps to establish a departmental process for complying with the Privacy Act. It issued a management directive that outlines its own responsibilities as well as those of component-level officials. Under this policy, the Privacy Office is to act as the department's representative for matters relating to the Privacy Act. The Privacy Office is to issue and revise, as needed, departmental regulations implementing the Privacy Act and approve all system-of-records notices before they are published in the Federal Register. DHS components are responsible for drafting system-of-records notices and submitting them to the Privacy Office for review and approval. The management directive was in addition to system-of-records notice guidance published by the Privacy Office in August 2005. The guidance discusses the requirements of the Privacy Act and provides instructions on how to prepare system-of-records notices by listing key elements and explaining how they must be addressed. The guidance also lists common routine uses and provides standard language that DHS components may incorporate into their notices. As of February 2007, the Privacy Office had approved and published 56 system-of-records notices, including updates and revisions as well as new documents. However, the Privacy Office has not yet established a process for conducting a biennial review of system-of-records notices, as required by OMB. OMB Circular A-130 directs federal agencies to review their notices biennially to ensure that they accurately describe all systems of records. Where changes are needed, the agencies are to publish amended notices in the Federal Register. The establishment of DHS involved the consolidation of a number of preexisting agencies; thus, there are a substantial number of systems that are operating under preexisting, or "legacy," system-of-records notices—218, as of February 2007. These documents may not reflect changes that have occurred since they were prepared. For example, the system-of-records notice for the Treasury Enforcement and Communication System has not been updated to reflect changes in how personal information is used that have occurred since the system was taken over by DHS from the Department of the Treasury. The Privacy Office acknowledges that identifying, coordinating, and updating legacy system-of-records notices is the biggest challenge it faces in ensuring DHS compliance with the Privacy Act. Because it focused its initial efforts on PIAs and gave priority to DHS systems of records that were not covered by preexisting notices, the office did not give the same priority to performing a comprehensive review of existing notices.
According to Privacy Office officials, the office is encouraging DHS components to update legacy system-of-records notices and is developing new guidance intended to be more closely integrated with its PIA guidance. However, no significant reduction has yet been made in the number of legacy system-of-records notices that need to be updated. By not reviewing notices biennially, the department is not in compliance with OMB direction. Further, by not keeping its notices up to date, DHS hinders the public's ability to understand the nature of DHS systems of records and how their personal information is being used and protected. Inaccurate system-of-records notices may make it difficult for individuals to determine whether their information is being used in a way that is incompatible with the purpose for which it was originally collected. Section 222 of the Homeland Security Act requires that the Privacy Officer report annually to Congress on "activities of the Department that affect privacy, including complaints of privacy violations, implementation of the Privacy Act of 1974, internal controls, and other matters." The act does not prescribe a deadline for submission of these reports; however, the requirement to report "on an annual basis" suggests that each report should cover a 1-year period and that subsequent annual reports should be provided to Congress 1 year after the previous report was submitted. Congress has also required that the Privacy Office report on specific departmental activities and programs, including data mining and passenger prescreening programs. In addition, the first Chief Privacy Officer initiated several investigations and prepared reports on them to address requirements to report on complaints of privacy violations and to assure that technologies sustain and do not erode privacy protections. In addition to satisfying legal requirements, the issuance of timely public reports helps in adhering to the fair information practices, which the Privacy Office has pledged to support. Public reports address openness—the principle that the public should be informed about privacy policies and practices and that individuals should have a ready means of learning about the use of personal information—and the accountability principle—that individuals controlling the collection or use of personal information should be accountable for taking steps to ensure implementation of the fair information principles. The Privacy Office has not been timely in addressing its requirement to report annually to Congress, and in one case its reporting was incomplete. The Privacy Office's first annual report, issued in February 2005, covered the 14 months from April 2003 through June 2004. A second annual report, covering the next 12 months, was never issued. Instead, information about that period was combined with information about the following 12-month period, and a single report was issued in November 2006 covering the office's activities from July 2004 through July 2006. While this report generally addressed the content specified by the Homeland Security Act, it did not include the required description of complaints of privacy violations. Other reports produced by the Privacy Office have not met statutory deadlines or have been issued long after privacy concerns had been addressed. For example, although Congress required a report on the privacy and civil liberties effects of the No-Fly and Automatic Selectee Lists by June 2005, the report was not issued until April 2006, nearly a year late.
In addition, although required by December 2005, the Privacy Office's report on DHS data mining activities was not provided to Congress until July 2006 and was not made available to the public on the Privacy Office Web site until November 2006. In addition, the first Chief Privacy Officer initiated four investigations of specific programs and produced reports on these reviews. Although two of the four reports were issued in a relatively timely fashion, the other two reports were issued long after privacy concerns had been raised and addressed. For example, a report on the Multi-state Anti-Terrorism Information Exchange program, initiated in response to a complaint by the American Civil Liberties Union submitted in May 2004, was not issued until two and a half years later, long after the program had been terminated. As another example, although drafts of the recommendations contained in the Secure Flight report were shared with TSA staff as early as summer 2005, the report was not released until December 2006, nearly a year and a half later. According to Privacy Office officials, a number of factors contributed to the delayed release of its reports, including the time required to consult with affected DHS components as well as the departmental clearance process, which includes the Policy Office, the Office of General Counsel, and the Office of the Secretary. After that, drafts must be sent to OMB for further review. In addition, the Privacy Office did not establish schedules for completing these reports that accounted for the time needed for coordination with components or for departmental and OMB review. Regarding the omission of complaints of privacy violations in the latest annual report, Privacy Office officials noted that the report cites previous reports on Secure Flight and the Multi-state Anti-Terrorism Information Exchange program, which were initiated in response to alleged privacy violations, and that during the period in question there were no additional complaints of privacy violations. However, the report itself provides no specific statements about the status of privacy complaints; it does not state that there were no privacy complaints received. Late issuance of reports has a number of negative consequences beyond noncompliance with mandated deadlines. First, the value these reports are intended to provide is reduced when the information they contain is no longer timely or relevant. In addition, because these reports serve as a critical window into the operations of the Privacy Office and into DHS programs that make use of personal information, not issuing them in a timely fashion diminishes the office's credibility and can raise questions about the extent to which the office is receiving executive-level attention. For example, delays in releasing the most recent annual report led a number of privacy advocates to question whether the Privacy Office had adequate authority and executive-level support. Congress also voiced this concern in passing the Department of Homeland Security Appropriations Act of 2007, which states that none of the funds made available in the act may be used by any person other than the Privacy Officer to "alter, direct that changes be made to, delay, or prohibit the transmission to Congress" of its annual report. In addition, on January 5, 2007, legislation was introduced entitled the "Privacy Officer with Enhanced Rights Act of 2007."
This bill, among other things, would provide the Privacy Officer with the authority to report directly to Congress without prior comment or amendment by either OMB or DHS officials outside the Privacy Office. Until its reports are issued in a timely fashion, questions about the credibility and authority of the Privacy Office will likely remain. To ensure that Privacy Act notices reflect current DHS activities and to help the Privacy Office meet its obligations and issue reports in a timely manner, in our report we recommended that the Secretary of Homeland Security take the following four actions:
1. Designate full-time privacy officers at key DHS components, such as Customs and Border Protection, the U.S. Coast Guard, Immigration and Customs Enforcement, and the Federal Emergency Management Agency.
2. Implement a departmentwide process for the biennial review of system-of-records notices, as required by OMB.
3. Establish a schedule for the timely issuance of Privacy Office reports (including annual reports) that appropriately considers all aspects of report development, including departmental clearance.
4. Ensure that the Privacy Office's annual reports to Congress contain a specific discussion of complaints of privacy violations, as required by law.
Concerning our recommendation that it designate full-time privacy officers in key departmental components, DHS noted in comments on a draft of our report that the recommendation was consistent with a departmental management directive on compliance with the Privacy Act and stated that it would take the recommendation "under advisement." However, according to Privacy Office officials, as of July 2007, no such designations have been made. Until DHS appoints such officers, the Privacy Office will not benefit from their potential to help speed the processing of PIAs, nor will component programs be in a position to benefit from the privacy expertise these officials could provide. DHS concurred with the other three recommendations and noted actions initiated to address them. Specifically, regarding our recommendation that DHS implement a process for the biennial review of system-of-records notices required by OMB, DHS noted that it is systematically reviewing legacy system-of-records notices in order to issue updated notices on a schedule that gives priority to systems with the most sensitive personally identifiable information. DHS also noted that the Privacy Office is to issue an updated system-of-records notice guide by the end of fiscal year 2007. As of July 2007, DHS officials reported that 215 legacy system-of-records notices still needed to be reviewed and either revised or retired. Until DHS reviews and updates all of its legacy notices as required by federal guidance, it cannot assure the public that its notices reflect current uses and protections of personal information. Concerning our recommendations related to timely reporting, DHS stated that the Privacy Office will work with the components and programs affected by its reports to provide for full collaboration and coordination within DHS. Finally, regarding our recommendation that the Privacy Office's annual reports contain a specific discussion of privacy complaints, as required by law, DHS agreed that a consolidated reporting structure for privacy complaints within the annual report would assist in assuring Congress and the public that the Privacy Office is addressing the complaints it receives.
In summary, the DHS Privacy Office has made significant progress in implementing its statutory responsibilities under the Homeland Security Act; however, more work remains to be accomplished. The office has made great strides in implementing a process for developing PIAs, contributing to greater output over time and higher-quality assessments. The Privacy Office has also provided the opportunity for privacy to be considered at key stages in systems development by incorporating PIA requirements into existing management processes. The office faces continuing challenges in reducing its backlog of systems requiring PIAs, in ensuring that system-of-records notices are kept up to date, and in issuing reports in a timely fashion. Mr. Chairman, this concludes my testimony today. I would be happy to answer any questions you or other members of the subcommittee may have. If you have any questions concerning this testimony, please contact Linda Koontz, Director, Information Management, at (202) 512-6240, or koontzl@gao.gov. Other individuals who made key contributions include John de Ferrari, Nancy Glover, Anthony Molet, David Plocher, and Jamie Pressman. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Department of Homeland Security (DHS) Privacy Office was established with the appointment of the first Chief Privacy Officer in April 2003, as required by the Homeland Security Act of 2002. The Privacy Office's major responsibilities include: (1) reviewing and approving privacy impact assessments (PIA)--analyses of how personal information is managed in a federal system, (2) integrating privacy considerations into DHS decision making and ensuring compliance with the Privacy Act of 1974, and (3) preparing and issuing annual reports and reports on key privacy concerns. GAO was asked to testify on its recent report examining progress made by the DHS Privacy Office in carrying out its statutory responsibilities. GAO compared statutory requirements with Privacy Office processes, documents, and activities. The DHS Privacy Office has made significant progress in carrying out its statutory responsibilities under the Homeland Security Act and its related role in ensuring compliance with the Privacy Act of 1974 and E-Government Act of 2002, but more work remains to be accomplished. Specifically, the Privacy Office has established a compliance framework for conducting PIAs, which are required by the E-Gov Act. The framework includes formal written guidance, training sessions, and a process for identifying systems requiring such assessments. The framework has contributed to an increase in the quality and number of PIAs issued as well as the identification of many more affected systems. The resultant workload is likely to prove difficult to process in a timely manner. Designating privacy officers in certain DHS components could help speed processing of PIAs, but DHS has not yet taken action to make these designations. The Privacy Office has also taken actions to integrate privacy considerations into the DHS decision-making process by establishing an advisory committee, holding public workshops, and participating in policy development. However, limited progress has been made in one aspect of ensuring compliance with the Privacy Act--updating public notices for systems of records that were in existence prior to the creation of DHS. These notices should identify, among other things, the type of data collected, the types of individuals about whom information is collected, and the intended uses of the data. Until the notices are brought up-to-date, the department cannot assure the public that the notices reflect current uses and protections of personal information. Further, the Privacy Office has generally not been timely in issuing public reports. For example, a report on the Multi-state Anti-Terrorism Information Exchange program--a pilot project for law enforcement sharing of public records data--was not issued until long after the program had been terminated. Late issuance of reports has a number of negative consequences, including a potential reduction in the reports' value and erosion of the office's credibility.
The Federal Reserve System was created by the Federal Reserve Act in 1913 as the central bank of the United States to provide a safe and flexible banking and monetary system. The System is composed primarily of 12 FRBs with 25 branches (organized into 12 districts), the Federal Open Market Committee, and the Federal Reserve Board, which exercises broad supervisory powers over the FRBs. The primary functions of the Federal Reserve System are to (1) conduct the nation's monetary policy by influencing bank reserves and interest rates, (2) administer the nation's currency in circulation, (3) buy or sell foreign currencies to maintain stability in international currency markets, (4) provide financial services such as check clearing and electronic funds transfer to the public, financial institutions, and foreign official institutions, (5) regulate the foreign activities of all U.S. banks and the domestic activities of foreign banks, and (6) supervise bank holding companies and state-chartered banks that are members of the System. The FRBs also provide various financial services to the U.S. government, including the administration of Treasury securities. The FRBs' assets consist primarily of investments in U.S. Treasury and agency securities. As of December 31, 1994, the FRBs reported a securities portfolio balance of $379 billion (87 percent of total assets). These securities primarily consist of Treasury bills, Treasury notes, and Treasury bonds that the FRBs buy and sell when conducting monetary policy. The FRBs act as Treasury's fiscal agent by creating Treasury securities in electronic (book-entry) form upon authorization by the U.S. Treasury and administering ongoing principal and interest payments on these securities. Treasury securities are maintained on electronic recordkeeping systems operated and controlled by the FRBs. The U.S. Treasury maintains an independent record of total Treasury securities outstanding but not of individual ownership. The FRBs maintain records of securities held by depository institutions, by the central banks of other countries, and by the FRBs for their own account. These records do not indicate whether securities held by the depository institutions are for their own accounts or on behalf of their customers. The portion of these securities owned by the FRBs is maintained on recordkeeping systems that the New York FRB operates. A security's historical cost comprises the security's face value (par) and any difference between this face value and the security's purchase price. These differences are referred to as premiums when the purchase price is higher than the face value and as discounts when the price is less than the face value. These amounts are amortized over the life of the security to adjust interest income. Federal Reserve notes are the primary paper currency of the United States in circulation and the FRBs' largest liability. As of December 31, 1994, the FRBs reported a Federal Reserve note balance of $382 billion (89 percent of total liabilities). Notes are printed by the U.S. Treasury's Bureau of Engraving and Printing and shipped to the FRBs, which store them in their vaults until financial institutions withdraw them. Notes do not mature or expire and are liabilities of the FRBs until they are returned to the FRBs. The amount the FRBs report as their liabilities for outstanding notes is actually a running balance of all notes issued from inception that have not been returned to the FRBs.
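The securities and note balances described above reduce to simple arithmetic, and a short sketch may help make them concrete. The Python fragment below is a minimal illustration only: all figures are hypothetical, and the straight-line amortization shown is an assumption made for illustration, since this report does not specify the FRBs' actual amortization method.

```python
# Minimal illustration of the premium/discount and note-liability
# arithmetic described above. All figures are hypothetical; straight-line
# amortization is an assumption, not the FRBs' documented method.

def premium_or_discount(face_value, purchase_price):
    """Positive result is a premium (price above par); negative, a discount."""
    return purchase_price - face_value

def yearly_amortization(face_value, purchase_price, years_to_maturity):
    """Straight-line yearly adjustment to interest income over the security's life."""
    return (face_value - purchase_price) / years_to_maturity

# A $1,000 (par) security purchased for $1,050 with 5 years to maturity
# carries a $50 premium, amortized here at -$10 a year against interest income.
print(premium_or_discount(1_000.0, 1_050.0))       # 50.0 (a premium)
print(yearly_amortization(1_000.0, 1_050.0, 5))    # -10.0 per year

# The reported Federal Reserve note liability is a running balance:
# all notes issued since inception less all notes returned to FRB vaults.
issued_since_inception = 500e9   # hypothetical cumulative issuances
returned_to_frbs = 118e9         # hypothetical cumulative returns
print(f"Note liability: ${issued_since_inception - returned_to_frbs:,.0f}")
# Note liability: $382,000,000,000
```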
The Federal Reserve Act designates certain assets of each FRB as eligible collateral for the reported Federal Reserve note liability. The majority of the assets pledged as collateral are each FRB’s Treasury securities. In addition, the FRBs have entered into cross-collateralization agreements under whose terms the assets pledged as collateral to secure each FRB’s notes are also pledged to secure the notes of all the FRBs. Therefore, as long as total collateral assets held by the FRBs equal or exceed the FRBs’ total liabilities for notes, the note liability of each individual FRB is fully secured. To conduct our work, we (1) gained an understanding of relevant accounting and reporting policies and procedures by reviewing and analyzing documentation and interviewing key FRB and Board personnel, (2) reviewed documentation supporting selected significant balance sheet amounts originating at the Dallas FRB, and (3) tested the effectiveness of certain internal controls in place at the Dallas FRB and the Federal Reserve Automation Services (FRAS) in Richmond, Virginia, and Dallas, Texas. We conducted our work primarily at the Federal Reserve Banks of Dallas and New York; the Dallas FRB’s branches in Houston, San Antonio, and El Paso; the two FRAS sites mentioned above; and the Board of Governors of the Federal Reserve System in Washington, D.C., between July 1994 and November 1995 in accordance with generally accepted government auditing standards. We requested written comments on a draft of this report from the Chairman, Board of Governors of the Federal Reserve System. The Secretary of the Board provided us with written comments. These comments are discussed in the “Agency Comments and Our Evaluation” section and are reprinted in appendix I. Our work at the Dallas FRB, its three branches, and the Federal Reserve Automation Services identified internal control issues that we considered to be significant enough to warrant management’s attention. Our findings were detailed in separate reports to officials of the Dallas FRB and FRAS, as applicable. In these reports, we provided suggestions for improvements and documented the many corrective actions either taken, underway, or planned by Dallas FRB and FRAS officials. The issues we identified at the Dallas FRB include weaknesses in controls over financial reporting, those aspects of automated systems that were controlled in Dallas, check processing, and Federal Reserve note inventories. For example, (1) reconciliations of general ledger accounts and activity were not always based on independent records, (2) the automated systems did not prohibit access by all terminated employees, (3) accounting adjustments related to check processing activity were not appropriately reviewed, and (4) inventory counts of Federal Reserve notes at some branches were not always properly conducted and documented. The management of the Dallas FRB has already taken action on some of our suggestions to resolve these issues. We also identified weaknesses in general controls over the automated systems maintained and operated by FRAS and used by the Dallas FRB. These weaknesses involved controls over access to sensitive information and the computer center, changes to system software, testing the disaster recovery plan, and the use of special privileges on automated tasks. 
For example, (1) access to job management software was not restricted to authorized individuals, (2) access to the FRAS computer center was inappropriately granted to contractor personnel, (3) FRAS lacked policies and procedures for testing and certifying software changes prior to implementation, and (4) FRAS had not tested the communication network linking the Federal Reserve System. FRAS officials agreed with our suggestions for improvement and, in most cases, initiated corrective actions prior to the conclusion of our work. The FRBs used different practices to track new note issuances than they used to track the notes they held in their vaults, resulting in inconsistent note accounting and reporting. Furthermore, various changes to the Federal Reserve Act, the notes’ interchangeable nature, and the way in which the FRBs meet their note collateral requirements appear to have made the tracking of note issuances by identifier unnecessary. When new notes are issued, the FRB whose identifying marking appears on the note records a liability for the note amount. Notes that are held in each FRB’s vault, regardless of identifier, reduce this liability to arrive at the reported amount of notes outstanding. Consequently, for each FRB, the reported amount of notes outstanding does not accurately reflect the actual amount of outstanding notes bearing that FRB’s identifier. Various changes to the act have also diminished the importance of these FRB identifiers. Originally, the act required an identifier on each note to help ensure that each FRB satisfied statutory gold reserve requirements for its notes in circulation. However, these gold reserve requirements have since been repealed. Additionally, in response to changes in the act, notes in the vault are no longer sorted and recorded by identifier. Historically, the identifiers facilitated the FRBs’ sorting of notes to comply with other note-related provisions. For example, the act originally prohibited the FRBs from paying out notes with other FRBs’ identifiers to customers. To comply with the act, each FRB sorted notes received from customers and returned notes to the other FRBs, as appropriate. The Congress eliminated these provisions to reduce costs and inefficiencies in the FRBs’ note-related operations. Additionally, under the act’s original provisions, the FRBs were required to return all excessively worn notes to the Comptroller of the Currency for destruction. Each FRB was credited with the amount of its notes to be destroyed. To further reduce costs, the Congress amended the act to modify these requirements. As a result, unfit notes may be destroyed at any FRB and the Board of Governors then apportions the note destructions among the FRBs. The act allows the Board to determine the method by which note destructions will be apportioned. Other factors affecting notes further diminish the importance of using identifiers to associate each note with a specific FRB for accounting and reporting purposes. As the nation’s currency, all notes are accepted at any FRB and are used interchangeably, regardless of their identifiers. In addition, the FRBs comply with the act’s collateral requirements by pledging each FRB’s eligible assets as collateral to secure the notes of all the FRBs. Individual FRB note liabilities are less meaningful than the combined note liability because of the notes’ cross-collateralization. Thus, continuing to use specific note identifiers to record note liabilities appears to be unnecessary. 
The FRBs have responded to the inefficiencies involved in using identifiers to track notes by automating the note accounting and reporting process. This has eliminated much of the effort involved in tracking notes manually. However, the inconsistency between how the issuances of new notes and the contents of the vault are accounted for and reported has continued. In November 1994, the Board contracted with an independent accounting firm to audit the asset accounts allocated among the FRBs for calendar years 1994 through 1999. The contract also requires audits of the combined financial statements of the FRBs as of December 31 for each of the years from 1995 through 1999. During these years, the financial statements of each individual FRB will also be audited once based on the schedule shown in table 1. Under this contract, the combined financial statements will be audited more frequently than the individual statements. This audit approach is appropriate in light of the needs of users of the combined financial statements. The FRBs operate under agreements that specify that assets pledged as collateral by each FRB for its outstanding notes are available to secure the notes of all the FRBs. Accordingly, the combined assets of the FRBs are used to determine whether the notes are adequately collateralized, thus making this combined presentation the most meaningful. These audits of the FRBs’ combined financial statements will give the Federal Reserve the opportunity to make audited financial statements publicly available. These annual audits enhance the credibility of reported information and conform to the practices of the central banks of many other major industrialized nations. Although the Federal Reserve’s past annual reports have included the FRBs’ financial statements, these statements were not audited and lacked adequate disclosure of key information, such as significant accounting policies followed by the FRBs. In contrast, the central banks of France, Germany, the United Kingdom, and Canada issue publicly available annual reports that include audited financial statements and the independent auditors’ reports. Presently, there is no requirement that the combined financial statements of the FRBs be audited in accordance with generally accepted government auditing standards (GAGAS). Audits conducted under the contract will be performed in accordance with generally accepted auditing standards (GAAS). We believe that performing these audits under GAGAS would enhance the value of these audits. GAGAS audits incorporate the GAAS requirements but go further by requiring additional tests of internal controls and of compliance with laws and regulations, as well as reports on these matters. The unique role of the FRBs and the nature of records underlying reported balances of Treasury securities and notes preclude full reliance on traditional auditing procedures. For example, confirming account balances with independent parties is an effective audit procedure to substantiate reported balances. However, this procedure cannot be performed for the FRBs’ Treasury security investments and Federal Reserve note liabilities. As part of the functions it performs on behalf of Treasury, the New York FRB maintains the ownership records for Treasury securities, including those in the FRBs’ portfolio. However, the New York FRB also maintains the related accounting records for these securities. In contrast, Federal Reserve notes are held by parties independent of the FRBs.
However, records of specific note holders cannot be maintained because notes continuously circulate throughout the country and the world. Consequently, the FRBs’ ownership of Treasury securities and the amount of notes outstanding cannot be independently confirmed. The FRBs retain supporting documentation for the cost of securities transactions for about 2 years. As a result, verifying the entire historical cost of securities that have been in the FRBs’ portfolio for extended periods is difficult. However, by retaining support and detailed records for the price paid for new security purchases, the FRBs could eventually support the entire cost of the securities portfolio when the current holdings either are sold or mature. The portion of recorded cost that cannot be readily supported relates to security premiums and discounts. The recorded amounts of premiums and discounts were not significant to the FRBs’ total Treasury security account balance as of December 31, 1994. However, auditing the completeness of these recorded amounts is complicated by the lack of supporting documentation and records. Certain Federal Reserve note characteristics affect related accounting and further complicate audit efforts. For example, notes do not mature or expire. In some countries, such as the United Kingdom and France, after a new currency issue is placed in circulation, the old issue is no longer valid for trade, and the liability for the old currency is removed after an appropriate period. However, the United States does not invalidate old note issues when a new note issue is placed in circulation. All notes issued are recorded as liabilities until returned to the FRBs. Additionally, many notes are held by collectors or are held in foreign countries and may never be returned to the FRBs. Destructibility, another note characteristic, also affects the note balance and complicates the FRB audits. Since notes were first issued, they have been destroyed by fires, wars, and other accidents and natural disasters beyond the FRBs’ control. The value of notes destroyed in this manner in a single year is unlikely to be large relative to the balance. However, the cumulative effect of these destructions and of other notes that may not be returned to the FRBs is unknown. The existence of these factors is not disclosed in the FRBs’ financial statements. We commend the Board for taking the step to contract for external, independent financial statement audits over the next 5 years. We believe that the Board’s current commitment to auditing the FRBs’ combined financial statements should be sustained and become a permanent part of the Board’s operating practices. Presenting audited, combined FRB financial statements that contain appropriate disclosures will enhance the credibility of the Federal Reserve’s annual report and will help meet the needs of financial statement users, including the Congress and the public. Institutionalizing such annual, external independent audits will also place the Federal Reserve System on a par with the central banks of other major industrialized nations with respect to financial reporting practices. In conducting these audits, the FRBs’ external auditors will need to address the audit challenges posed by the FRBs’ unique roles. Recording note liabilities based on bank identifiers is an inefficient use of FRB resources, and reporting this liability under the current approach does not serve a meaningful purpose. 
Discontinuing the practice of tracking and recording each FRB’s note liability based on note identifiers would increase efficiency and provide a consistent basis for the note liabilities reported by the FRBs. To bring about consistency and improve the efficiency of Federal Reserve note accounting and reporting procedures, we recommend that, in conjunction with planning and implementing future changes to the automated systems used to account for and report notes, the Board of Governors of the Federal Reserve System consider (1) incorporating changes in the function of these systems to allow FRBs to account for and report notes without regard to the identifiers printed on the notes, (2) directing the FRBs to discontinue using specific FRB identifiers printed on notes as the basis for recording each FRB’s liability for Federal Reserve notes, (3) stopping the tracking of shipments by FRB identifiers, (4) directing each FRB to record its note liability based on the Federal Reserve notes it actually receives and holds without regard to FRB identifiers, and (5) apportioning note destructions among FRBs on an appropriate basis without regard to FRB identifiers. To enhance the combined financial statements as a vehicle for informing Federal Reserve management, the Congress, and the public about the operations of Federal Reserve Banks, we recommend that the Board of Governors of the Federal Reserve System (1) adopt a policy to institutionalize annual, external independent audits of the FRBs’ combined financial statements as a routine operating procedure, with these audits performed in accordance with GAGAS; (2) make the FRBs’ audited combined financial statements and independent auditor’s report publicly available upon issuance (for example, by including these documents in the Federal Reserve System’s annual report); and (3) include disclosures in the financial statements that appropriately describe the significant accounting policies followed, such as the basis for the reported note liability and the treatment of the notes held in the vault, and that provide the information typically included in financial statements of other central banks and private sector financial institutions. Regarding our recommendations to bring about consistency and improve the efficiency of Federal Reserve note accounting and reporting procedures, the Board acknowledged in a letter dated January 11, 1996, that changes to the Federal Reserve Act and Federal Reserve policies have blurred the distinction among Federal Reserve notes with different unique identifiers. The Board acknowledged that the accounting process for note destructions offers an opportunity for further efficiencies to be gained in this area. The Board stated it will give consideration to the accounting method used for Federal Reserve notes as the accounting and tracking systems associated with the notes are reviewed for possible redesign. Our other recommendations were intended to enhance the Federal Reserve Banks’ combined financial statements as a vehicle for informing Federal Reserve management, the Congress, and the public about the operations of the Federal Reserve Banks, and we believe implementing them would enhance management’s accountability. The Board stated it will give careful consideration to our recommendations concerning the use of external auditors, presentation of financial statements, and the application of auditing standards.
We are sending copies of this report to the Chairman of the Board of Governors of the Federal Reserve System; the Secretary of the Treasury; the Chairman of the House Committee on Banking and Financial Services; the Chairman and Ranking Minority Member of the Senate Committee on Banking, Housing, and Urban Affairs; and the Director of the Office of Management and Budget. Copies will be made available to others upon request. Please contact me at (202) 512-9406 if you or your staff have any questions. Major contributors to this report are listed in appendix II.
Pursuant to a congressional request, GAO reviewed several internal control issues at the Federal Reserve Bank (FRB) of Dallas and the Federal Reserve Automation Services' (FRAS) accounting procedures, focusing on: (1) Dallas FRB financial accounting and reporting and electronic data processing (EDP) control weaknesses; (2) the efficiency and consistency of Federal Reserve note accounting; and (3) auditing issues that need the attention of the Federal Reserve System's Board of Governors and its auditor. GAO found that: (1) at the Dallas FRB, its 3 branches, and FRAS, weaknesses exist in accounting records, asset accountability, and the use of automated systems; (2) Dallas FRB control weaknesses include failure to use independent records to verify and reconcile general ledger accounts and activity, limit access to FRB automated systems, review accounting adjustments related to check processing activity, and properly conduct and document Federal Reserve note inventories; (3) FRAS and Dallas FRB general EDP weaknesses include inadequate control over access to sensitive information, system software changes, disaster recovery plan testing, and the use of special privileges on automated tasks; (4) the Federal Reserve could improve the consistency and efficiency of its note accounting procedures by eliminating the use of the FRB identifier on each note for recording liabilities for notes in circulation; (5) the Board of Governors has contracted for annual independent external audits of the combined FRB asset accounts and financial statements over the next 5 years and one audit of each FRB during the same period to enhance the credibility of reported information; and (6) the auditor will face challenges in identifying the ownership and original cost of U.S. Treasury securities, confirming amounts held by note holders, and addressing the notes' unique characteristics of nonmaturity and destructibility.
Shortages of chemical and biological defense equipment are a long-standing problem. After the Persian Gulf Conflict, the Army changed its regulations in an attempt to ensure that early-deploying units would have sufficient equipment on hand upon deployment. This direction, contained in U.S. Forces Command Regulation 700-2, has not been universally implemented. At the time of our review, neither the Army’s more than five active divisions composing the crisis response force nor the early-deploying Army reserve units we visited had complied with the new stocking level requirements. All had shortages of critical equipment; three of the more than five active divisions had 50 percent or greater shortages of protective suits, and shortages of other critical items were as high as 84 percent, depending on the unit and the item. This equipment is normally procured with operation and maintenance funds. These shortages occurred primarily because unit commanders consistently diverted operation and maintenance funds to meet what they considered higher priority requirements, such as base operating costs, quality-of-life considerations, and costs associated with other-than-war deployments such as those to Haiti and Somalia. Relative to the DOD budget, the cost of purchasing this protective equipment is low. Early-deploying active divisions in the continental United States could meet current stocking requirements for an additional cost of about $15 million. However, unless funds are specifically designated for chemical and biological defense equipment, we do not believe unit commanders will spend operation and maintenance funds for this purpose. The shortages of on-hand stock are exacerbated by inadequate installation warehouse space for equipment storage, poor inventorying and reordering techniques, shelf-life limitations, and difficulty in maintaining appropriate protective clothing sizes. The Army is presently considering decreasing units’ stocking requirements to the levels needed to support only each early-deploying division’s ready brigade and relying on depots to provide the additional equipment needed on a “just-in-time” basis before deployment. Other approaches under consideration by the Army include funding these equipment purchases through procurement accounts and transferring responsibility for purchasing and storing this material on Army installations to the Defense Logistics Agency. New and improved equipment is needed to overcome some DOD defensive shortfalls, and DOD is having difficulty meeting all of its planned chemical and biological defense research goals. Efforts to improve the management of the materiel development and acquisition process have so far had limited results and will not attain their full effect until at least fiscal year 1998. In response to lessons learned in the Gulf War, Congress directed DOD to improve the coordination of chemical and biological doctrine, requirements, research, development, and acquisition among DOD and the military services. DOD has acted. During 1994 and 1995, it established the Joint Service Integration Group, to prioritize chemical and biological defense research efforts and develop a modernization plan, and the Joint Service Materiel Group, to develop research, development, acquisition, and logistics support plans. The activities of these two groups are overseen by a single DOD office—the Assistant Secretary of Defense (Atomic Energy) (Chemical and Biological Matters). While these groups have begun to implement the congressional requirements of P.L.
103-160, progress has been slower than expected. At the time of our review, the Joint Service Integration Group expected to produce during 1996 its proposed (1) list of chemical and biological defense research priorities and (2) joint service modernization plan and operational strategy. The Joint Service Materiel Group expects to deliver its proposed plan to guide chemical and biological defense research, development, and acquisition in October 1996. Consolidated research and modernization plans are important for avoiding duplication among the services and otherwise achieving the most effective use of limited resources. It is unclear whether or when DOD will approve these plans. However, DOD officials acknowledged that it will be fiscal year 1998 at the earliest, about 5 years after the law was passed, before DOD can begin formal budgetary implementation of these plans. DOD officials told us progress by these groups has been adversely affected by personnel shortages and collateral duties assigned to the staff. DOD efforts to field specific equipment and conduct research to address chemical and biological defense deficiencies have produced mixed results. On the positive side, DOD began to field the Biological Integrated Detection System in January 1996 and expects to complete the initial purchase of 38 systems by September 1996. However, DOD has not succeeded in fielding other needed equipment and systems designed to address critical battlefield deficiencies identified during the Persian Gulf Conflict and earlier. For example, work initiated in 1978 to develop an Automatic Chemical Agent Alarm to provide visual, audio, and command-communicated warnings of chemical agents remains incomplete. Because of service decisions to fund other priorities, DOD has approved and acquired only 103 of the more than 200 FOX mobile reconnaissance systems originally planned. Of the 11 chemical and biological defense research goals listed in DOD’s 1995 Annual Report to the Congress, DOD met 5 by their expected completion date of January 1996; the remaining 6 were not met by that date. For example, a DOD attempt to develop a less corrosive and less labor-intensive decontamination solution is now not expected to be completed until 2002. Chemical and biological defense training at all levels has been a constant problem for many years. For example, in 1986, DOD studies found that its forces were inadequately trained to conduct critical tasks. It took 6 months during the Persian Gulf Conflict to prepare forces in theater to defend against chemical and biological agents. However, these skills declined again after this conflict. A 1993 Army Chemical School study found that a combined arms force of infantry, artillery, and support units would have extreme difficulty performing its mission and suffer needless casualties if forced to operate in a chemical or biological environment because the force was only marginally trained. Army studies conducted from 1991 to 1995 showed serious weaknesses at all levels in chemical and biological defense skills. Our analysis of Army readiness evaluations, trend data, and lessons learned reports from this period also showed individuals, units, and commanders alike had problems performing basic tasks critical to surviving and operating in a chemical or biological environment. Despite DOD efforts—such as doctrinal changes and command directives—designed to improve training in defense against chemical and biological warfare since the Gulf War, U.S.
forces continue to experience serious weaknesses in (1) donning protective masks, (2) deploying detection equipment, (3) providing medical care, (4) planning for the evacuation of casualties, and (5) including chemical and biological issues in operational plans. The Marine Corps also continues to experience similar problems. In addition to individual service training problems, the ability of joint forces to operate in a contaminated environment is questionable. In 1995, only 10 percent of the joint exercises conducted by four major CINCs included training to defend against chemical and biological agents. None of this training included all 23 required chemical/biological training tasks, and the majority included less than half of these tasks. Furthermore, these CINCs plan to include chemical/biological training in only 15 percent of the joint exercises for 1996. This clearly demonstrates the lack of chemical and biological warfare training at the joint service level. There are two fundamental reasons for this. First, CINCs generally consider chemical and biological training and preparedness to be the responsibility of the individual services. Second, CINCs believe that chemical and biological defense training is a low priority relative to their other needs. We examined the ability of U.S. Army medical units that support early-deploying Army divisions to provide treatment to casualties in a chemically and biologically contaminated environment. We found that these units often lacked needed equipment and training. Medical units supporting early-deploying Army divisions we visited often lacked critical equipment needed to treat casualties in a chemically or biologically contaminated environment. For example, these units had only about 50 to 60 percent of their authorized patient treatment and decontamination kits. Some of the patient treatment kits on hand were missing critical items such as drugs used to treat casualties. Also, none of the units had any type of collective shelter to treat casualties in a contaminated environment. Army officials acknowledged that the inability to provide treatment in the forward area of battle would result in greater rates of injury and death. Old versions of collective shelters are unsuitable, unserviceable, and no longer in use; new shelters are not expected to be available until fiscal year 1997 at the earliest. Few Army physicians in the units we visited had received formal training on chemical and biological patient treatment beyond that provided by the Basic Medical Officer course. Further instruction on chemical and biological patient treatment is provided by the medical advanced course and the chemical and biological casualty management course. The latter course provides 6-1/2 days of classroom and field instruction needed to save lives, minimize injury, and conserve fighting strength in a chemical or biological warfare environment. During the Persian Gulf Conflict, this course was provided on an emergency basis to medical units already deployed to the Gulf. In 1995, 47 to 81 percent of Army physicians assigned to early-deploying units had not attended the medical advanced course, and 70 to 97 percent had not attended the casualty management course. Both the advanced and casualty management courses are optional, and according to Army medical officials, peacetime demands to provide care to service members and their dependents often prevented attendance. 
Also, the Army does not monitor those who attend the casualty management course, nor does it target this course toward those who need it most, such as those assigned to early-deploying units. DOD has inadequate stocks of vaccines for known threat agents, and an immunization policy established in 1993 that DOD so far has chosen not to implement. DOD’s program to vaccinate the force to protect it against biological agents will not be fully effective until these problems are resolved. Though DOD has identified which biological agents are critical threats and determined the amount of vaccines that should be stocked, we found that the amount of vaccines stocked remains insufficient to protect U.S. forces, as it was during the Persian Gulf Conflict. Problems also exist with regard to the vaccines available to DOD. Only a few biological agent vaccines have been approved by the Food and Drug Administration (FDA). Many remain in Investigational New Drug (IND) status. Although IND vaccines have long been safely administered to personnel working in DOD vaccine research and development programs, the FDA usually requires large-scale field trials in humans to demonstrate new drug safety and effectiveness before approval. DOD has not performed such field trials due to ethical and legal considerations. DOD officials said that they hoped to acquire a prime contractor during 1996 to subcontract vaccine production and do what is needed to obtain FDA approval of vaccines currently under investigation. Since the Persian Gulf Conflict, DOD has consolidated the funding and management of several biological warfare defense activities, including vaccines, under the new Joint Program Office for Biological Defense. In November 1993, DOD established a policy to stockpile sufficient biological agent vaccines and to inoculate service members assigned to high-threat areas or to early-deploying units before deployment. The JCS and other high-ranking DOD officials have not yet approved implementation of the immunization policy. The draft policy implementation plan has been completed and is currently under review within DOD. However, this issue is highly controversial within DOD, and whether the implementation plan will be approved and carried out is unclear. Until that happens, service members in high-threat areas or designated for early deployment in a crisis will not be protected by approved vaccines against biological agents. The primary cause for the deficiencies in chemical and biological defense preparedness is a lack of emphasis up and down the line of command in DOD. In the final analysis, it is a matter of commanders’ military judgment to decide the relative significance of risks and to apply resources to counter those risks that the commander finds most compelling. DOD has decided to concentrate on other priorities and consequently to accept a greater risk regarding preparedness for operations on a contaminated battlefield. Chemical and biological defense funding allocations are being targeted by the Joint Staff and DOD for reduction in their attempts to fund other, higher priority programs. DOD allocates less than 1 percent of its total budget to chemical and biological defense. Annual funding for this area has decreased by over 30 percent in constant dollars since fiscal year 1992, from approximately $750 million in that fiscal year to $504 million in 1995. This reduction has occurred in spite of the current U.S. intelligence assessment that the chemical and biological warfare threat to U.S.
forces is increasing and the importance of defending against the use of such agents in the changing worldwide military environment. Funding could decrease even further. On October 26, 1995, the Joint Requirements Oversight Council and the JCS Chairman proposed to the Office of the Secretary of Defense (OSD) a cut of $200 million for the next 5 years ($1 billion total) to the counterproliferation budget. The counterproliferation program element in the DOD budget includes funding for the joint nuclear, chemical, and biological defense program as well as vaccine procurement and other related counterproliferation support activities. If implemented, this cut would severely impair planned chemical and biological defense research and development efforts and reverse the progress that has been made in several areas, according to DOD sources. OSD supported only an $800 million cut over 5 years and sent the recommendation to the Secretary of Defense. On March 7, 1996, we were told that DOD was now considering a proposed funding reduction of $33 million. The battle staff chemical officer/chemical noncommissioned officers are a commander’s principal trainers and advisers on chemical and biological defense operations and equipment operations and maintenance. We found that chemical and biological officer staff positions are being eliminated and that, when the positions are filled, the staff officers occupying them are frequently assigned collateral tasks that reduce the time available to manage chemical and biological defense activities. At U.S. Army Forces Command and U.S. Army III Corps headquarters, for example, chemical staff positions are being reduced. Also, DOD officials told us that the Joint Service Integration and Joint Service Materiel Groups have made limited progress largely because not enough personnel are assigned to them and collateral duties are assigned to the staff. We also found that chemical officers assigned to a CINC’s staff were frequently tasked with duties not related to chemical and biological defense. The lower emphasis given to chemical and biological matters is also demonstrated by weaknesses in the methods used to monitor their status. DOD’s current system for reporting readiness to the Joint Staff is the Status of Resources and Training System (SORTS). We found that the effectiveness of SORTS for evaluating unit chemical and biological defense readiness is limited largely because (1) it allows commanders to be subjective in their evaluations, (2) it allows commanders to determine for themselves which equipment is critical, and (3) reporting remains optional at the division level. We also found that after-action and lessons-learned reports and operational readiness evaluations of chemical and biological training are flawed. At the U.S. Army Reserve Command there is no chemical or biological defense position. Consequently, the U.S. Army Reserve Command does not effectively monitor the chemical and biological defense status of reserve forces. The priority given to chemical and biological defense varied widely. Most CINCs assign chemical and biological defense a lower priority than other threats. Even though the Joint Staff has tasked CINCs to ensure that their forces are trained in certain joint chemical and biological defense tasks, the CINCs we visited considered such training a service responsibility. Several DOD officials said that U.S. forces still face a generally limited, although increasing, threat of chemical and biological warfare.
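The funding figures cited above can be checked directly from the reported amounts; both the percentage decline and the 5-year total are internally consistent:

\[
\frac{750 - 504}{750} \approx 0.33 \qquad \text{and} \qquad 5 \times \$200 \text{ million} = \$1 \text{ billion},
\]

matching the "over 30 percent" decline and the $1 billion total cited above (the constant-dollar basis is as stated in the testimony).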
At Army corps, division, and unit levels, the priority given to this area depended on the commander’s opinion of its relative importance. At one early-deploying division we visited, the commander had an aggressive system for chemical and biological training, monitoring, and reporting. At another, the commander had made a conscious decision to emphasize other areas, such as other-than-war deployments and quality-of-life considerations. As this unit was increasingly being asked to conduct operations other than war, the commander’s emphasis on the chemical and biological warfare threat declined. Officials at all levels said training in chemical and biological preparedness was not emphasized because of higher priority taskings, low levels of interest by higher headquarters, difficulty working in cumbersome and uncomfortable protective clothing and masks, the time-consuming nature of the training, and a heavy reliance on post-mobilization training and preparation. We have no means to determine whether increased emphasis on chemical and biological warfare defense is warranted at the expense of other priorities. This is a matter of military judgment by DOD and of funding priorities by DOD and the Congress. We anticipate that in our report due in April 1996, we will recommend that the Secretary of Defense reevaluate the low priority given to chemical and biological defense and consider adopting a single manager concept for the execution of the chemical and biological program given the increasing chemical and biological warfare threat and the continuing weakness in the military’s defense capability. Further, we anticipate recommending that the Secretary consider elevating the office currently responsible for oversight to its own Assistant Secretary of Defense level, rather than leaving it in its present position as part of the Office of the Assistant Secretary for Atomic Energy. We may make other recommendations concerning opportunities to improve the effectiveness of existing DOD chemical and biological activities. We would be pleased to respond to any questions you may have.
GAO discussed its assessment of U.S. forces' capability to fight and survive chemical and biological warfare. GAO noted that: (1) none of the Army's crisis-response or early-deployment units have complied with requirements for stocking equipment critical for fighting under chemical or biological warfare; (2) the Department of Defense (DOD) has established two joint service groups to prioritize chemical and biological defense research efforts, develop a modernization plan, and develop support plans; (3) although DOD has begun to field a biological agent detection system, it has not successfully fielded other needed equipment and systems to address critical battlefield deficiencies; (4) ground forces are inadequately trained to conduct critical tasks related to biological and chemical warfare, and there are serious weaknesses at all levels in chemical and biological defense skills; (5) medical units often lack the equipment and training needed to treat casualties resulting from chemical or biological contamination; (6) DOD has inadequate stocks of vaccines for known threat agents and has not implemented an immunization policy established in 1993; and (7) the primary cause for these deficiencies is a lack of emphasis along the DOD command chain, with DOD focusing its efforts and resources on other priorities.
HHS is the federal government's principal agency for protecting the health of Americans and provides essential human services, such as ensuring food and drug safety and assisting needy families. HHS disburses almost a quarter of all federal outlays and administers more grant dollars than all other federal agencies combined, providing more than $200 billion of over $350 billion in federal funds awarded to states and other entities in fiscal year 2002, the most recent year for which these data are available. For fiscal year 2004, HHS had a budget of $548 billion and over 66,000 employees. HHS comprises 11 agencies led by the Office of the Secretary covering a wide range of activities including conducting and sponsoring medical and social science research, guarding against the outbreak of infectious diseases, assuring the safety of food and drugs, and providing health care services and insurance. HHS is required by the CFO Act of 1990 to modernize its financial management systems and by the Federal Financial Management Improvement Act (FFMIA) of 1996 to have auditors—as part of an audit report on the agency’s annual financial statements—determine whether the agency’s financial management systems comply substantially with three requirements: (1) federal financial management systems requirements, (2) applicable federal accounting standards, and (3) the U.S. Government Standard General Ledger (SGL) at the transaction level. While HHS has received unqualified opinions on its financial statements at the consolidated departmental level since fiscal year 1999, the underlying financial systems that assist in the preparation of financial statements have not met all applicable requirements. For fiscal years 1997 through 2003, HHS auditors reported that the department’s systems did not substantially comply with federal financial management systems requirements, and for fiscal year 2003, they reported that the systems also lacked compliance with the SGL requirement. In describing the financial management problems in the fiscal year 2003 financial statement audit report, the HHS Inspector General (IG) stated that the department’s lack of an integrated financial system and internal control weaknesses made it difficult for HHS to prepare timely and reliable financial statements. The IG also noted that preparation of HHS financial statements required substantial “work arounds,” cumbersome reconciliations and consolidation processes, and significant adjustments to reconcile subsidiary records to reported balances on the financial statements. In June 2001, the Secretary of HHS directed the department to establish a unified accounting system that, when fully implemented, would replace five outdated accounting systems. HHS considers the UFMS program a business transformation effort with IT, business process improvement, and operations consolidation components. According to HHS, the program supports the Office of Management and Budget’s (OMB) requirements for each agency to implement and operate a single, integrated financial management system (required by OMB Circular No. A-127). HHS asserts that its approach will require it to institute a common set of business rules, data standards, and accounting policies and procedures, thereby significantly furthering the Secretary’s management objectives. Table 1 depicts the current accounting systems that will be replaced and the organizations currently served. In response to the Secretary’s direction, HHS began a project to improve its financial management operations. 
CMS and NIH had already initiated projects to replace their financial systems. Figure 1 illustrates the systems being replaced, the new configuration, and the approximate known implementation costs. As shown in figure 1, HHS plans to pursue a phased approach to achieving the Secretary’s vision. The first phase is to implement the system at CDC and, as of May 2004, CDC was expected to begin using the system for its operations starting in fiscal year 2005 (October 2004). FDA was expected to implement UFMS in May 2005, and the entities served by PSC were to be phased in from July 2005 through April 2007. After all of the individual component agency implementations have been completed, UFMS and HHS consolidated reporting will be deployed. This effort involves automating the department’s financial reporting capabilities and is expected to integrate into UFMS the NIH Business and Research Support System (NBRSS) and CMS’ Healthcare Integrated General Ledger Accounting System (HIGLAS), which are scheduled to be fully implemented in 2006 and 2007, respectively. The focus of our review was on the system implementation efforts associated with the HHS entities not covered by the NBRSS and HIGLAS efforts. As shown in figure 1, the costs for this financial management system improvement effort can be broken down into four broad areas: NIH, CMS, all other HHS entities, and a system to consolidate the results of HHS’ financial management operations. HHS estimates that it will spend about $713 million as follows: $110 million for its NIH efforts (NBRSS), $393 million to implement HIGLAS, and $210 million for remaining HHS organizations. HHS has not yet developed an estimate of the costs associated with integrating these efforts into the HHS unified financial management system envisioned in Secretary Thompson’s June 2001 directive. HHS selected a commercial-off-the-shelf (COTS) product, Oracle U.S. Federal Financials software (certified by the Program Management Office of the Joint Financial Management Improvement Program (JFMIP) for federal agencies’ use), as the system it would use to design and implement UFMS. The department has hired two primary contractors to help implement UFMS. In November 2001, HHS awarded KPMG Consulting (now BearingPoint) a contract as system integrator for assistance in planning, designing, and implementing UFMS. As the systems integrator, BearingPoint is expected to provide team members who are experienced in the enterprise resource planning (ERP) software and its installation, configuration, and customization and who bring expertise in software, hardware, business systems architecture, and business process transformation. HHS selected Titan Corporation to act as the project’s independent verification and validation (IV&V) contractor, tasked with determining the programmatic, management, and technical status of the UFMS project and recommending actions to mitigate any identified risks to project success. When fully implemented, UFMS is expected to permit the consolidation of financial data across all HHS component agencies to support timely and reliable departmentwide financial reporting. In addition, it is intended to integrate financial information from the department’s administrative systems, including travel management systems, property systems, logistics systems, acquisition and contracting systems, and grant management systems.
The department’s goals in the development and implementation of this integrated system are to achieve greater economies of scale; eliminate duplication; provide better service delivery; and help management monitor budgets, conduct operations, evaluate program performance, and make financial and programmatic decisions. Experience has shown that organizations that adopt and effectively implement best practices, referred to in systems development and implementation efforts as the disciplined processes, can reduce the risks associated with these projects to acceptable levels. Although HHS has adopted some of the best practices associated with managing projects such as UFMS, it has adopted other practices that significantly increase the risk to the project. Also, HHS has not yet effectively implemented several of the disciplined processes—requirements management, testing, project management and oversight, and risk management—necessary to reduce its risks to acceptable levels and has exposed the project to unnecessary risk that it will not achieve its cost, schedule, and performance objectives. The project has been able to obtain high-level sponsorship at HHS, with senior financial management officials and other HHS personnel routinely reviewing its progress. HHS officials maintain that the project is on schedule and that the functionality expected to be available for its first deployment, at CDC in October 2004, is well known and acceptable to its users. However, the IV&V contractor identified a number of serious deficiencies that are likely to affect HHS’ ability to successfully implement UFMS within its current budget and schedule while providing the functionality needed to achieve its goals. HHS management has been slow to take the recommended corrective actions necessary to address the findings and recommendations of its IV&V contractor. Further, it is not clear that the decision to proceed from one project milestone to the next is based on quantitative data that indicate tasks have been effectively completed. Rather, decisions to progress have been driven by the project’s schedule. With a focus on meeting schedule milestones and without quantitative data, HHS faces significant risk that UFMS will suffer the adverse impacts on its cost, schedule, and performance that have been experienced by projects with similar problems. Disciplined processes, which are fundamental to successful systems development and implementation efforts, have been shown to reduce to acceptable levels the risks associated with software development and acquisition. A disciplined software development and acquisition process can maximize the likelihood of achieving the intended results (performance) within established resources (costs) on schedule. Although there is no standard set of practices that will ever guarantee success, several organizations, such as SEI and IEEE, as well as individual experts, have identified and developed the types of policies, procedures, and practices that have been demonstrated to reduce development time and enhance effectiveness. The key to having a disciplined system development effort is to have disciplined processes in multiple areas, including project planning and management, requirements management, configuration management, risk management, quality assurance, and testing. Effective processes should be implemented in each of these areas throughout the project life cycle because change is constant.
Effectively implementing the disciplined processes necessary to reduce project risks to acceptable levels is hard to achieve because a project must effectively implement several best practices, and inadequate implementation of any one may significantly reduce or even eliminate the positive benefits of the others. Acquiring and implementing a new financial management system requires a methodology that starts with a clear definition of the organization's mission and strategic objectives and ends with a system that meets specific information needs. We have seen many system efforts fail because agencies started with a general need, such as improving financial management, but did not define in precise terms (1) the specific problems they were trying to solve, (2) what their operational needs were, and (3) what specific information requirements flowed from these operational needs. Instead, they plunged into the acquisition and implementation process in the belief that these specifics would somehow be defined along the way. The typical result was that systems were delivered well past anticipated milestones; failed to perform as expected; and, accordingly, were over budget because of required costly modifications. Figure 2 shows how organizations that do not effectively implement the disciplined processes lose the productive benefits of their efforts as a project continues through its development and implementation cycle. Although undisciplined projects show a great deal of productive work at the beginning of the project, the rework associated with defects begins to consume more and more resources. In response, processes are adopted in the hope of managing what later turns out to have been unproductive work. Generally, these processes are “too little, too late,” and rework begins to consume more and more resources because sufficient foundations for building the systems were not laid or were laid inadequately. Experience has shown that projects for which disciplined processes are not implemented at the beginning are forced to implement them later, when it takes more time and they are less effective. As shown in figure 2, a major consumer of project resources in undisciplined efforts is rework (also known as thrashing). Rework occurs when the original work has defects or is no longer needed because of changes in project direction. Disciplined organizations focus their efforts on reducing the amount of rework because it is expensive. Fixing a defect during the testing phase costs anywhere from 10 to 100 times the cost of fixing it during the design or requirements phase. As shown in figure 2, projects that are unable to successfully address their rework will eventually spend their efforts only on rework and the associated processes rather than on productive work. In other words, the project will continually find itself reworking items. Appendix II provides additional information on the disciplined processes. We found that HHS has not implemented effective disciplined processes in several key process areas that have been shown to form the foundation for project success or failure, including requirements management, testing, project management and oversight, and risk management.
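The rework economics described above can be made concrete with a simple, hypothetical cost model. The function, the 90 percent and 20 percent early-detection rates, and the 50x late-fix multiplier (a value inside the 10-to-100 range cited above) are all illustrative assumptions:

```python
# Hypothetical cost model illustrating why late defect discovery drives
# rework: fixing a requirements defect is assumed to cost 1 unit during
# the requirements phase and 10-100 units once it surfaces in testing.

def rework_cost(defects: int, caught_early: float, late_multiplier: float) -> float:
    """Total fix cost when a fraction of defects is caught early."""
    early = defects * caught_early * 1.0                    # cheap, early fixes
    late = defects * (1 - caught_early) * late_multiplier   # expensive, late fixes
    return early + late

# 100 requirements defects, with late fixes assumed to cost 50x:
print(rework_cost(100, caught_early=0.9, late_multiplier=50))  # 590.0  (disciplined)
print(rework_cost(100, caught_early=0.2, late_multiplier=50))  # 4020.0 (undisciplined)
```

Under these assumed numbers, the undisciplined project spends nearly seven times as much fixing the same defects, which is the dynamic figure 2 depicts.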
Problems with HHS’ requirements management practices include the lack of (1) a concept of operations to guide the development of requirements, (2) traceability of a requirement from the concept of operations through testing to ensure requirements were adequately addressed in the system, and (3) specificity in the requirements to minimize confusion in the implementation. These problems with requirements have resulted in a questionable foundation for the systems’ testing process. In addition, HHS has provided an extremely limited amount of time to address defects identified from system testing, which reflects an optimism not supported by other HHS testing efforts, including those performed to test the conversion of data from CDC’s legacy system to UFMS. This type of short time frame generally indicates that a project is being driven to meet predetermined milestones in the project schedule. While adherence to schedule goals is generally desirable, if corners are cut and there are not adequate quantitative data to assess the risks to the project of not implementing disciplined processes in these areas, the risk of project rework or failure appreciably rises. Ineffective implementation of these processes exposes a project to the unnecessary risk that costly rework will be required, which in turn will adversely affect the project’s cost and schedule, and can adversely affect the ultimate performance of the system. An effective risk management process can be used by an agency to understand the risks that it is undertaking when it does not implement an effective requirements management process. In contrast, HHS has implemented risk management procedures that close risks before it is clear that mitigating actions were effective. HHS has agreed to change these procedures so that the actions needed to address risks remain visible and at the forefront. While the executive sponsor for the UFMS project and other senior HHS officials have demonstrated commitment to the project, effective project management and oversight are needed to identify and resolve problems as soon as possible, when they are cheapest to fix. For example, HHS officials have struggled to address problems identified by the IV&V contractor in a timely manner. Moreover, HHS officials lack the quantitative data or metrics to effectively oversee the project. An effective project management and oversight process uses such data to understand matters such as (1) whether the project plan needs to be adjusted and (2) what oversight actions may be needed to ensure that the project meets its stated goals and complies with agency guidance. In contrast, with ineffective project oversight, management can only respond to problems as they arise. We found significant problems in HHS’ requirements management process. (See appendix III for a more detailed discussion.) We found that HHS had not (1) developed a concept of operations that can be used to guide its requirements development process, (2) maintained traceability between the various requirements documents to ensure consistency, and (3) developed requirements that were unambiguous. Because of these weaknesses, HHS does not have reasonable assurance that the UFMS project is free of significant requirement defects that will cause extensive rework. Requirements are the specifications that system developers and program managers use to design, develop, and acquire a system.
They need to be unambiguous, consistent with one another, verifiable, and directly traceable to higher-level business or functional requirements. It is critical that requirements flow directly from the organization's concept of operations, which describes how the organization's day-to-day operations (1) are being carried out and (2) will be carried out to meet mission needs. Examples of problems noted in our review include the following.

Requirements were not based on a concept of operations. HHS has prepared a number of documents that discuss various aspects of its vision for UFMS. However, these documents do not accomplish the principal objective associated with developing a concept of operations—specifying the high-level business processes that are expected to form the basis for requirements definition. One such document, issued April 30, 2004, discusses the use of shared service centers to perform financial management functions. This document was issued well after implementation efforts were under way and about 5 months before the expected deployment date of UFMS at CDC. As discussed in more detail in appendix III, the April 30 document does not clearly explain who will perform these functions and where and how these functions will be performed.

Requirements were not traceable. HHS developed a hierarchical approach to defining its requirements. HHS defined the high-level requirements that were used to identify the requirements that could not be satisfied by the COTS product. Once these high-level requirements were defined, a hierarchical requirements management process was developed which included (1) reviewing and updating the requirements through process design workshops, (2) establishing the initial baseline requirements, (3) performing a fit/gap analysis, (4) developing gap closure alternatives, and (5) creating the final baseline requirements. The key in using such a hierarchy is that each step of the process builds upon the previous step. However, this traceability was not maintained for the 74 requirements we reviewed. Therefore, HHS has little assurance that (1) requirements defined in the lower-level requirements documents are consistent with and adequately cover the higher-level requirements and (2) testing efforts based on lower-level requirements documents will adequately assess whether UFMS can meet the high-level requirements used to define the overall functionality expected from UFMS. Appendix III provides more details on problems we identified related to the traceability of requirements.

Requirements were not always specific. Many requirements reviewed were not sufficiently specific to reduce requirements-related defects to acceptable levels. For example, one inadequately defined requirement stated that the system "shall track actual amounts and verify commitments and obligations against the budget as revised, consistent with each budget distribution level." The "Define Budget Distributions" process area was expected to provide the additional specificity needed for this requirement. However, as of May 2004, this process document stated that the functionality was "To Be Determined." Until HHS provides additional information concerning this requirement, it will not be able to determine whether the system can meet the requirement. Items that will need to be defined include the number of budget distribution levels that must be supported and what it means to verify the commitments and obligations against the revised budget.
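The traceability HHS lost between its requirements documents, discussed above, is the kind of property that can be checked mechanically once requirements and tests are recorded with their sources. The following sketch is illustrative only; the requirement and test identifiers are hypothetical, not drawn from UFMS documents.

```python
# Illustrative only: a minimal requirements traceability check. Each
# lower-level requirement records the higher-level requirement it derives
# from and the test cases that verify it. Identifiers are hypothetical.

requirements = {
    "HL-01": {"parent": None,    "tests": []},          # high-level requirement
    "LL-01": {"parent": "HL-01", "tests": ["TC-101"]},  # traceable and testable
    "LL-02": {"parent": None,    "tests": ["TC-102"]},  # orphan: no higher-level source
    "LL-03": {"parent": "HL-01", "tests": []},          # untested: no verifying test case
}

def traceability_gaps(reqs):
    """Flag lower-level requirements lacking a source or a verifying test."""
    orphans = [r for r, v in reqs.items() if v["parent"] is None and r.startswith("LL")]
    untested = [r for r, v in reqs.items() if not v["tests"] and r.startswith("LL")]
    return orphans, untested

orphans, untested = traceability_gaps(requirements)
print("No higher-level source:", orphans)    # ['LL-02']
print("No verifying test case:", untested)   # ['LL-03']
```

A matrix maintained this way gives quantitative answers to the two questions raised above: whether lower-level requirements adequately cover the higher-level ones, and whether tests built from the lower-level documents actually exercise the functionality the high-level requirements demand.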
Appendix III includes more details on the problems related to the specificity of HHS' requirements. HHS officials plan to use traditional testing approaches, including demonstrations and validations, to show UFMS' compliance with HHS high-level requirements as well as the requirements contained in the various other requirements documents. However, the effectiveness of the testing process is directly related to the effectiveness of the requirements management process. HHS' IV&V contractor reported that as of April 2004, the UFMS test program had not been adequately planned to provide the foundation for a comprehensive and coordinated process for validating that UFMS has the functionality to meet the stated requirements. For example, the test planning documents reviewed by the IV&V contractor did not have the detail typically found in test plans. As of May 2004, the information necessary for evaluating future testing efforts had not been developed for the 44 requirements that we reviewed. Because of the weaknesses noted in the requirements management process, HHS does not yet have a firm foundation on which to base an effective testing program. Complete and thorough testing is essential to provide reasonable assurance that new or modified systems will provide the capabilities described in the requirements. Testing activities that can provide quantitative data on the ability of UFMS to meet HHS' needs are scheduled late in the implementation cycle. For example, system testing of the capabilities for the CDC implementation was planned to start in August 2004 and to be completed in a 6-week time frame before the system is expected to become operational there. This leaves HHS with little time to address any defects identified during the system testing process and to ensure that the corrective actions taken to address the defects do not introduce new defects. Because HHS has allotted little time for system testing and defect correction, problems not corrected before system launch will, in the worst case, result in system failure or will have to be addressed during operations, resulting in potentially costly and time-consuming rework. Testing is even more challenging for this system development because HHS had not fully developed the overall requirements traceability matrix needed to determine whether testing will address the requirements. HHS is placing a great deal of reliance on system testing to provide reasonable assurance of the functionality included in UFMS. Also, with system testing scheduled for August, HHS had not, as of May 2004, established an effective management framework for testing. For example, HHS had not (1) clearly defined the roles and responsibilities of the developers and testers, (2) developed acceptance criteria, and (3) strictly controlled the testing environment. As the IV&V contractor noted, if testing is not properly controlled and documented, there is no assurance that the system has been adequately tested and will perform as expected. Accordingly, HHS will need to develop such documents before conducting testing activities, such as developing test cases and executing the actual tests. Given the issues associated with HHS' requirements management process, even if HHS addresses these testing process weaknesses, evaluating UFMS based solely on testing will not ensure that CDC's and HHS' needs will be met. It is unlikely that the system testing phase will uncover all defects in the UFMS system.
In fact, even testing based on well-defined requirements and performed through the system test phase often catches less than 60 percent of a program's defects. In HHS' case, problems with its poorly defined requirements make creating test cases more challenging and increase the likelihood that the system test phase will fail to identify significant defects of the kind it would ordinarily catch. The remaining errors are found through other quality assurance practices, such as code inspections, or by end users after the software has been put into production. Thus, it will be important for HHS to implement a quality assurance program that is both rigorous and well structured. The ability of HHS to effectively address its data conversion and system interface challenges will also be critical to the ultimate success of UFMS. In its white paper on financial system data conversion, JFMIP identified data conversion as one of the critical tasks necessary to successfully implement a new financial system. Moreover, JFMIP stated that data conversion is one of the most frequently underestimated tasks. JFMIP also noted that if data conversion is done right, the new system has a much greater opportunity for success. On the other hand, converting data incorrectly or entering unreliable data from a legacy system has long-term repercussions. The adage "garbage in, garbage out" best describes the adverse impact. For example, the National Aeronautics and Space Administration (NASA) cited data conversion problems as a major reason that it was unable to prepare auditable financial statements from its new financial management system. HHS officials had initially expected to perform only two data conversion testing efforts, but decided that two additional efforts were needed after identifying 77 issues during the first data conversion test. While there is no standard number of data conversion tests, the key to successfully converting data from a legacy system to a new system is a conversion test that executes successfully with minimal errors. In addition, system interfaces had not been fully developed as expected for the conference room pilots held in March and April 2004. Proper implementation of the interfaces between UFMS and the other systems with which it exchanges data is essential for the successful deployment of UFMS. HHS had originally expected to perform two data conversion testing efforts (commonly referred to as mock conversions) prior to the system being implemented at CDC. In discussions with HHS officials, we noted that other agencies have found that many more mock conversions are required, but HHS officials told us that the project schedule did not allow for many more conversion efforts. However, according to HHS, more than 8 months of preparatory activities were completed before beginning the first mock conversion. HHS officials also told us that at least some of these data cleanup efforts had started about 3 years ago. As with other efforts on this project, the quantitative data necessary to determine whether HHS' expectations were realistic, such as the number of issues identified during a mock conversion, were not produced until late in the implementation cycle. In May 2004, HHS performed the first of its two planned mock conversions. On the basis of the results of this effort, HHS has now decided that it will need to perform two additional mock conversions before the October 2004 implementation at CDC.
As shown in the following examples of the problems found in the first mock conversion, data cleanup was not sufficient in at least some cases to support the data conversion efforts.

Employer identification numbers (EIN) assigned to customers caused problems because adequate data cleanup efforts had not yet been performed. For example, multiple customers had the same EIN, or an EIN on an invoice did not have a corresponding customer. In addition, over 1,300 vendors lacked the necessary banking information.

Problems related to data quality and conversion logic were found in the conversions of general ledger account balances. A primary cause of these problems was that the legacy system performed its closing activities by appropriation while UFMS performs them by program. On the basis of a review of these problems by the project team, one of the team's recommendations was that a substantial data cleanup effort in the legacy system be started to mitigate the problems identified in this mock conversion.

Overall, HHS identified 77 issues that applied to 10 of the 11 business activities covered by this mock conversion. Table 2 shows the types of actions HHS identified as necessary to address these issues. At the conclusion of the first mock conversion, the project team believed that most of the major conversion issues had been identified and that subsequent data conversion efforts would identify only issues requiring refinements to the solutions already developed. On the basis of the results of the first mock conversion, the team also agreed to perform two additional mock conversions. We also noted similar problems in HHS' efforts related to system interfaces. For example, one purpose of the March/April 2004 conference room pilot was to demonstrate several key system interfaces. However, a key feature of system interface efforts—error correction—was not available for demonstration because it had not yet been developed. At the conference room pilot, a user asked how the error correction process would work for transactions that were not processed correctly between two systems, and the user was told that the project team had not yet worked out how errors would be managed. Until HHS defines and implements this functionality, it will be unable to ensure that the processes used for exchanging data between UFMS and more than 30 CDC systems provide the necessary levels of data integrity. Properly implementing the interfaces will be critical to performing a realistic system test at CDC and to ensuring that UFMS will operate properly in production. Also, HHS expects UFMS to interface with about 110 systems when it is fully implemented. In our view, a major value of a risk management system is the increased visibility it provides over the scope of work and the resources needed to address the risks. HHS officials have developed a risk assessment and mitigation strategy and have implemented a process for managing UFMS risks that meets many of the risk management best practices. For example, they cited a program to identify risks to the project, such as staffing shortages and training deficiencies, and to have HHS management focus on those risks. Our review confirmed that HHS maintains a risk database and that these risks are available for review and discussion during project oversight meetings. However, we noted problems with the implementation of the risk management system. HHS routinely closed its identified risks on the premise that they had been identified and were being addressed.
As of March 2004, 13 of the 44 project risks identified by HHS were considered "closed," even though the actions taken to address the risks were still ongoing. For example, HHS had identified data conversion as a risk because the conversion might be more complex, costly, and time consuming than previously estimated. However, this risk was closed in February 2003 because the project plan included a data conversion strategy that UFMS officials considered adequate to mitigate the risk. HHS officials characterized this practice as intended to reduce the number of risks for discussion at biweekly meetings. Project officials defended this approach on the premise that if the mitigating actions were not achieving their desired results, the risk would be "reopened." After we discussed this with HHS officials, they agreed to revise their procedures to include a resolution column with more information on why a risk was closed. This change should improve management's ability to oversee the inventory of risks, their status, and the effectiveness of the mitigating strategies. According to HHS, the project has been able to obtain high-level sponsorship from senior financial management officials who routinely review its progress. This sponsorship has enabled the project to gain support from individuals critical to the implementation of UFMS at organizational units such as CDC. In addition, senior management officials have received periodic reports from a contractor hired to perform independent verification and validation that help identify issues needing management attention. Because of this strong support and oversight, HHS officials said they believed that the risks associated with the project had been reduced to acceptable levels and that the project can serve as a management model. While we agree that top management commitment and oversight together constitute one critical factor in determining a project's success, they are not in themselves sufficient to provide reasonable assurance of the project's success. As noted in our discussion of disciplined processes, the inadequate implementation of any one of the disciplined processes in systems development can significantly reduce or even eliminate the positive benefits of the others. It is therefore important to act promptly to address risks so as to minimize their impact. In this regard, in February 2003, HHS obtained the services of the current contractor to perform the IV&V function for the UFMS project. As of May 2004, according to the contractor, its staff had participated in hundreds of meetings at all levels within the project, provided written comments and recommendations on over 120 project documents, and produced 55 project status and assessment reports. Twice a month it produces a report that is sent directly to the Executive Sponsor of the UFMS project. These reports highlight the IV&V team's view of the overall status of the UFMS project, including a discussion of any impacts or potential impacts to the project with respect to cost, schedule, and performance, and a section on current IV&V concerns and associated recommendations. The IV&V contractor reported several project management and oversight weaknesses that increase the risks associated with this project and that were not promptly addressed. Examples include the following.

Personnel.
Although the contractor hired by HHS to perform IV&V services identified the lack of personnel as a major risk factor in June 2003, it took HHS and its system integrator over 6 months to substantially address this weakness. In February 2004, the IV&V contractor reported this issue as closed. In closing this issue, the IV&V contractor noted that the availability of adequate resources was an ongoing concern and that the issue might be reopened at a later date. Related human capital issues are discussed in a separate section of this report.

Critical path analysis. In August 2003, the IV&V contractor noted that an effective critical path analysis had not been developed. A critical path defines the series of tasks that must be finished on time for the entire project to finish on schedule; each task on the critical path is a critical task. As of April 2004, this weakness had not been effectively addressed. Until HHS develops an effective critical path analysis for this project, it does not have adequate assurance that it can understand the impact of various project events, such as delays in project deliverables. HHS' critical path report shows planned start and finish dates for various activities but does not show actual progress, so the impact of schedule slips cannot be analyzed. The IV&V contractor recommended that critical path analysis and discussion become a more prominent feature of UFMS project management to monitor the resources assigned to activities that are on the critical path.

Earned value management system. In August 2003, the IV&V contractor also noted that an effective earned value management system had not been implemented. Earned value management compares the value of work accomplished during a given period with the work scheduled for that period. By using the value of completed work as a basis for estimating the cost and time needed to complete the program, earned value can alert program managers to potential problems early in the program. For example, if a task is expected to take 100 hours to complete and is 50 percent complete, the earned value management system would compare the number of hours actually spent so far with the number of hours expected for the amount of work performed. In this example, if the actual hours spent equaled the 50 hours expected for the work performed, the earned value analysis would show that the project's resource usage was consistent with the estimate. (A worked sketch of this comparison, using hypothetical figures, appears later in this section.) As of April 2004, this weakness had not been effectively addressed. Without an effective earned value management system, HHS has little assurance that it knows the status of the various project deliverables in terms of progress and associated cost. In other words, an effective earned value management system would provide quantitative data on the status of a given project deliverable, such as a data conversion program. On the basis of this information, HHS management would be able to determine whether the progress of a task was within the expected parameters for completion. Management could then use this information to determine actions to take to mitigate risk and manage cost and schedule performance.

The following additional significant issues were considered open by the IV&V contractor as of April 2004.

Requirements management. The project had not produced an overall requirements traceability matrix identifying all the requirements and the manner in which each will be verified.
In addition, HHS had not implemented a consistent approach to defining and maintaining a set of "testable" requirements.

UFMS test program adequacy. The test program for UFMS had not been adequately defined, and the test documentation reviewed to date lacked the detail typically found in test plans developed in accordance with industry standards and best practices.

UFMS strategy documents. A number of key strategy documents that provide the foundation for system development and operations had not been completed as defined in the project schedule. These documents guide the development of the plans and procedures used to implement UFMS. Examples of the documents that were 2 or more months late include the UFMS Business Continuity Strategy, UFMS Lifecycle Test Strategy, Global Interface Strategy, and Global Conversion Strategy.

In addition, the IV&V contractor has presented other issues, concerns, and recommendations in its reports. For example, a May 2004 report noted that the IV&V contractor had expressed concerns about the adequacy of the project schedule and the status of some data conversion activities. Our review of the IV&V contractor's concerns found that they are consistent with those we identified in our review of UFMS. The ability to understand the impact of the weaknesses we and the IV&V contractor identified is limited because HHS has not effectively captured the types of quantitative data or metrics that can be used to assess the effectiveness of its management processes, such as data identifying and quantifying any weaknesses in its requirements management process. This information is necessary to understand the risk being assumed and whether the UFMS project will provide the desired functionality. HHS does not have a metrics measurement process that allows it to fully understand (1) its capability to manage the entire UFMS effort; (2) how its process problems will affect the UFMS cost, schedule, and performance objectives; and (3) the corrective actions needed to reduce the risks associated with the problems identified. Without such a process, HHS management can only focus on the project schedule and whether activities have occurred as planned, not whether the activities achieved their objectives. Experience has shown that such an approach leads to rework instead of real progress on the project. SEI has found that metrics identifying important events and trends are invaluable in guiding software organizations to informed decisions. Key SEI findings relating to metrics include the following.

The success of any software organization depends on its ability to make predictions and commitments relative to the products it produces.

Effective measurement processes help software groups succeed by enabling them to understand their capabilities so that they can develop achievable plans for producing and delivering products and services.

Measurements enable people to detect trends and to anticipate problems, thus providing better control of costs, reducing risks, improving quality, and ensuring that business objectives are achieved.

Defect tracking systems are one means of capturing quantitative data that can be used to evaluate project efforts. Although HHS has a system that captures the defects that have been reported, we found that the agency has not effectively implemented a process to ensure that defects are reported as soon as they are identified.
For example, we noted in the March/April 2004 conference room pilot that one of the users identified a process weakness related to grant accounting as a "showstopper." However, this weakness did not appear in the defect tracking system until about 1 month later. As a result, during this interval, the HHS defect tracking system did not accurately reflect the potential problems identified by the users, and HHS management was unable to determine (1) how well the system was working and (2) the amount of work necessary to correct the defects. Such information is critical when assessing a project's status. According to HHS officials, as of the end of our fieldwork, the UFMS project was on schedule. However, while the planned activities may have been performed, HHS has no quantifiable criteria for assessing progress, so it is unclear whether the activities were performed successfully or accomplished anything substantive. For example, one major milestone was to conduct a conference room pilot in March/April 2004. HHS held the conference room pilot in March/April 2004, and so it considered the milestone met. However, HHS did not define what constituted success for this event, such as the users identifying no significant defects in functionality. A discussion of the problems we identified with the March/April 2004 conference room pilot is included in appendix III and clearly demonstrates that the objective of this activity, to validate the prototype system and test interfaces, was not achieved. Therefore, by measuring progress on the basis that this conference room pilot was held, HHS has little assurance that the project is in fact on schedule and can provide the desired functionality. This approach increases the risk that HHS will be surprised by a major malfunction at a critical juncture in the project, such as when it conducts system testing or attempts to implement the system at CDC. Good metrics would enable HHS to assess the risk of moving forward on UFMS with a much greater degree of certainty and to proactively manage UFMS through disciplined processes rather than having to respond to problems as they arise. HHS' failure thus far to effectively implement the types of disciplined processes necessary to reduce risks to acceptable levels does not mean that the agency cannot put an effective process in place before the CDC implementation. However, HHS has little time to (1) address long-standing requirements management problems, (2) develop effective test cases from requirements that have not yet been defined at the level necessary to support effective testing efforts, and (3) develop and implement disciplined test management processes before it can begin its testing efforts. Furthermore, HHS will need to address its project management and oversight weaknesses so that officials can understand (1) the impact that the defects identified during system testing will have on the project's schedule and (2) the corrective actions needed to reduce the risks associated with the problems identified. Without effectively implementing disciplined processes and the metrics necessary to understand the effectiveness of the processes it has implemented, HHS is incurring unnecessary risks that the project will not meet its cost, schedule, and performance objectives. The kinds of problems we saw at HHS for the UFMS project have historically not boded well for successful system development at other federal agencies.
In 1999 we reported on a system at the Department of the Interior's Bureau of Indian Affairs (BIA) that had problems similar to those discussed in this report. As is the case at HHS, Interior's deficiencies in requirements management and other disciplined processes meant that Interior had no assurance that its newly acquired system would meet its specific performance, security, and data management needs or that it would be delivered on schedule. To reduce these risks, we recommended that Interior develop and implement an effective risk management plan and ensure that all project decisions were (1) based on objective data and demonstrated project accomplishments and (2) driven by events, not the schedule. In subsequent reviews we noted that, like HHS, Interior planned to use testing to demonstrate that the system could perform its intended functions. However, as we reported in September 2000, BIA did not follow sound practices in conducting its system and user acceptance tests for this system. Subsequently, in May 2004, the agency reported that only one function had been successfully implemented and that it was evaluating the capabilities and shortcomings of the system to determine whether any other components could be salvaged for interim use while it looked for a new system to provide the desired functionality. In reports on other agencies, we have also identified weaknesses in requirements management and testing that are similar to the problems we identified at HHS. Examples of problems that have resulted from undisciplined efforts include the following.

In April 2003, we reported that NASA had not implemented an effective requirements management process and that these requirements management problems adversely affected its testing activities. We also noted that because of the testing inadequacies, significant defects later surfaced in the production system. In May 2004, we reported that NASA's new financial management system, which was fully deployed in June 2003 as called for in the project schedule, still did not address many of the agency's most challenging external reporting issues, such as those related to property accounting and budgetary accounting.

Also in May 2004, we reported that the initial deployments of two major Department of Defense (DOD) systems did not operate as intended and, therefore, did not meet component-level needs. In large part, these operational problems occurred because DOD did not effectively implement the disciplined processes necessary to manage the development and implementation of the systems in the areas of requirements management and testing. DOD program officials have acknowledged that the initial deployments of these systems experienced problems attributable to requirements and testing.

The problems experienced by these other agencies are illustrative of the types of problems that can result when disciplined processes are not properly implemented. Whether HHS will experience such problems cannot be known until the agency obtains the quantitative data necessary to indicate whether the system will meet its needs. Accordingly, HHS will need to ensure that it adequately addresses the numerous weaknesses we and the IV&V contractor identified and reduces the risk to an acceptable level before implementing UFMS at CDC.
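As noted in the discussion of IV&V findings above, the earned value comparison HHS lacked reduces to a few standard calculations. The following sketch is illustrative only; it builds on the 100-hour task example from that discussion, and the planned-percent and actual-hour figures are hypothetical assumptions, not UFMS data.

```python
# Illustrative only: the earned value comparison described earlier, using the
# 100-hour task example. The planned-percent and actual-hour values below are
# hypothetical assumptions.

budget_at_completion = 100   # hours planned for the whole task
percent_complete = 0.50      # work actually performed so far
planned_percent = 0.60       # work scheduled to be done by now (assumed)
actual_hours = 65            # hours actually spent so far (assumed)

earned_value = budget_at_completion * percent_complete    # 50 hours of work earned
planned_value = budget_at_completion * planned_percent    # 60 hours scheduled

cpi = earned_value / actual_hours     # cost performance index (<1 means over cost)
spi = earned_value / planned_value    # schedule performance index (<1 means behind)

print(f"CPI = {cpi:.2f}")   # 0.77: each hour spent earned only 0.77 hours of work
print(f"SPI = {spi:.2f}")   # 0.83: the task is behind its planned schedule
```

Indices below 1.0 are exactly the kind of early quantitative warning that, as discussed above, would let management adjust the plan before a deliverable such as a data conversion program slips.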
As we discuss in the next section, compounding the risk to UFMS from not properly implementing disciplined processes is the fact that HHS is introducing UFMS into an environment with weaknesses in its departmentwide IT management practices. HHS has planned and developed UFMS using the agency's existing IT investment management processes. However, we have reported—and HHS has acknowledged—weaknesses in IT investment management, enterprise architecture, and information security. Such weaknesses increase the risk that UFMS will not achieve planned results within the estimated budget and schedule. In addition to weaknesses in disciplined processes in the development of UFMS, weaknesses in HHS' IT management processes also increase the risks associated with UFMS. HHS is modifying its IT investment management policies, developing an enterprise architecture, and responding to security weaknesses with several ongoing activities, but these changes may not be implemented in time to compensate for the increased risks. IT investment management provides for the continuous identification, selection, control, life-cycle management, and evaluation of IT investments. The Clinger-Cohen Act of 1996 lays out specific aspects of the process that agency heads are to implement in order to maximize the value of the agency's IT investments. In addition, OMB and GAO have issued guidance for agencies to use in implementing the Clinger-Cohen Act requirements for IT investment management. Our Information Technology Investment Management framework is a maturity model composed of five progressive stages of maturity that an agency can achieve in its IT investment management capabilities. These stages range from creating investment awareness to developing a complete investment portfolio to leveraging IT for strategic outcomes. The framework can be used both to assess the maturity of an agency's investment management processes and as a tool for organizational improvement. OMB Circular No. A-130, which implements the Clinger-Cohen Act, requires agencies to use enterprise architectures. A well-defined enterprise architecture provides a clear and comprehensive picture of the structure of any enterprise through models that describe, in business and technology terms, how the entity operates today and how it intends to operate in the future. It also includes a plan for transitioning to this future state. Enterprise architectures are integral to managing large-scale programs such as UFMS. Managed properly, an enterprise architecture can clarify and help optimize the interdependencies and relationships among an organization's business operations and the underlying IT infrastructure and applications that support them. Employed in concert with other important management controls, architectures can greatly increase the chances that an organization's operational and IT environments will be configured to optimize mission performance. To aid agencies in assessing and improving enterprise architecture management, we issued guidance establishing an enterprise architecture management maturity framework. That framework uses a five-stage maturity model outlining steps toward achieving a stable and mature process for managing the development, maintenance, and implementation of an enterprise architecture.
The reliability of operating environments, computerized data, and the systems that process, maintain, and report these data is a major concern to federal entities, such as HHS, that have distributed networks enabling multiple computer processing units to communicate with each other. Such a platform increases the risk of unauthorized access to computer resources and possible data alteration. Effective departmentwide information security controls will help reduce the risk of loss due to errors, fraud and other illegal acts, disasters, or incidents that cause systems to be unavailable. Inadequate security and controls can adversely affect the reliability of the operating environments in which UFMS and its applications operate. Without effective general controls, application controls may be rendered ineffective by circumvention or modification. For example, a control designed to preclude users from entering unreasonably large dollar amounts in a payment processing system can be an effective application control, but this control cannot be relied on if general controls permit unauthorized program modifications that allow certain payments to be exempted from it. UFMS is at increased risk because of previously reported weaknesses in the process that HHS uses to select and control its IT investments. In January 2004, we reported that there were serious weaknesses in HHS' IT investment management. Notably, HHS had not (1) established procedures for the development, documentation, and review of IT investments by its review boards or (2) documented policies and procedures for aligning and coordinating investment decision making among its investment management boards. In addition, HHS had not yet established selection criteria for project investments or a requirement that IT investments support work processes that have been simplified or redesigned. HHS is modifying several of its IT investment management policies, including its capital planning and investment control guidance and its governance policies, but as of May 12, 2004, these documents were not final or available for review. Until HHS addresses weaknesses in its selection and control processes, IT projects like UFMS will face an increased likelihood of not being completed on schedule and within estimated costs. In November 2003, we released a report noting the importance of leadership to agency progress on enterprise architecture efforts. We reported that federal agencies' progress toward effective enterprise architecture management was limited: of the five stages leading to a highly effective enterprise architecture program, 97 percent of the agencies surveyed were still in Stage 1—creating enterprise architecture awareness. In that report, we noted that HHS had reached Stage 2—building the enterprise architecture management foundation—by successfully satisfying all elements of that stage of the maturity framework. In addition, HHS had successfully addressed three of six elements of the Stage 3 maturity level—developing architecture products. HHS has laid that foundation by (1) assigning enterprise architecture management roles and responsibilities and (2) establishing plans for developing enterprise architecture products and for measuring program progress and product quality. Progressing through the next stage would involve defining the scope of the architecture and developing products describing the organization in terms of business, performance, information/data, service/application, and technology.
Once the scope is defined and products developed, Stage 3 organizations track and measure progress against plans; identify and address variances, as appropriate; and report on their progress. Although it has made progress, HHS has not yet established an enterprise architecture to guide and constrain its IT projects. In January 2004, HHS' acting chief architect told us that the department continues to work on implementing an enterprise architecture to guide its decision making. He also noted that HHS plans to make UFMS a critical component of the enterprise architecture now under development. However, most of the planning and development of the UFMS IT investment has occurred without the guidance of an established enterprise architecture. Our experience with other federal agencies has shown that projects developed without the constraints of an established enterprise architecture are at risk of being duplicative, not well integrated, unnecessarily costly to maintain and interface, and ineffective in supporting missions. HHS has recognized the need to improve information security throughout the department, including in key operating divisions, and has various initiatives under way; however, it has not yet fully implemented the key elements of a comprehensive security management program. Unresolved general control weaknesses at headquarters and in HHS' operating divisions span almost all areas of information system controls described in our Federal Information System Controls Audit Manual (FISCAM). These weaknesses, which involve entitywide security, access controls, system software, application software, and service continuity, are significant and pervasive. According to a recent IG report, the underlying cause of most of the weaknesses was that the department did not have an effective management structure in place to ensure that sensitive data and critical operations received adequate attention and that appropriate security controls were implemented to protect them. HHS has not sufficiently controlled network access, appropriately limited mainframe access, or fully implemented a comprehensive program to monitor access. Weaknesses in other information security controls, including physical security, further increased the risk to HHS' information systems. As a result, sensitive data—including information related to the privacy of U.S. citizens, payroll and financial transactions, proprietary information, and mission-critical data—were at increased risk of unauthorized disclosure, modification, or loss, possibly without detection. Overall, the IG concluded that the weaknesses left the department vulnerable to unauthorized access to and disclosure of sensitive information, malicious changes that could interrupt data processing or destroy data files, improper payments, and disruption of critical operations. Extensive information security planning for UFMS was based on requirements and applicable guidance set forth in the Federal Information Security Management Act, OMB Circular No. A-130 Appendix III (Security of Federal Automated Information Resources), National Institute of Standards and Technology guidance, and our FISCAM. However, that planning was done without complete information from the department and operating divisions. HHS has not conducted a comprehensive, departmentwide assessment of information security general controls. Further, information security general controls at four operating divisions have not been recently assessed.
UFMS officials told us they did not know which operating divisions had conducted or contracted for a review of their individual information security environments. Without departmentwide and operating-division-specific assessments, HHS increases its risk that information security general control weaknesses will not be identified and therefore will not be subject to departmentwide resolution or mitigation by UFMS controls. According to HHS officials, some operating divisions that have been assessed recently have not provided UFMS with current information on the status of the outstanding weaknesses in their operating environments. UFMS officials told us that they do not have assurance of the reliability of the control environment of these operating divisions. Without information on control weaknesses in the operating divisions, UFMS management has not been in a position to develop mitigating controls that could compensate for departmentwide weaknesses. As a result, UFMS planning for security cannot provide reasonable assurance that the system is protected from loss due to errors, fraud and other illegal acts, disasters, and incidents that cause systems to be unavailable. Serious understaffing and incomplete workforce planning have plagued the UFMS project. Human capital management for the UFMS project includes organizational planning, staff acquisition, and team development. It is essential that an agency take the necessary steps to ensure that it has the human capital capacity to design, implement, and operate a financial management system. However, the UFMS project has experienced staff shortages as high as 40 percent of the federal positions that HHS believed were needed to implement UFMS. Although the staff shortage has been alleviated to a great extent, the impact of such a significant shortfall lingers. Further, HHS has not yet fully developed key workforce planning tools, such as the CDC skills gap analysis, to help transform its workforce so that it can effectively use UFMS. It is important that agencies incorporate strategic workforce planning by (1) aligning an organization’s human capital program with its current and emerging mission and programmatic goals and (2) developing long-term strategies for acquiring, developing, and retaining an organization’s total workforce to meet the needs of the future. This incorporates a range of activities from identifying and defining roles and responsibilities to identifying team members to developing individual competencies that enhance performance. Human capital planning should be considered for all stages of the system implementation. According to JFMIP’s Building the Work Force Capacity to Successfully Implement Financial Systems, the roles needed on an implementation team are consistent across financial system implementation projects and include a project manager, systems integrator, functional experts, information technology manager, and IT analysts. Many of these project roles require the dedication of full-time staff for one or more of the project’s phases. HHS has identified the lack of resources as a risk to the project and acquired the staff to fill some of the roles needed for a systems implementation project. The project has a project manager, systems integrator, and some functional experts. However, on the basis of our review of the HHS Organization and Staffing Plan and the most recent program management office organization chart, many positions were not filled as planned. 
For example, as reported in the IV&V contractor's September 2003 report, some key personnel filled multiple positions, and their actual available time was inadequate to perform the allocated tasks, a situation commonly referred to as staff being overallocated on the project. As a result, some personnel were overworked, which, according to the IV&V contractor, could lead to poor morale. The UFMS organization chart also showed that the UFMS project team was understaffed and that several integral positions were vacant or filled with part-time detailees. As of January 2004, 19 of the 47 positions in the UFMS Program Management Office identified as needed for the project were not filled. The vacant positions included key positions such as the enterprise architect and the purchasing, testing, and configuration management leads. While HHS and the systems integrator have taken measures to acquire additional human resources for the implementation of UFMS, scarce resources have already led to several key deliverables falling significantly behind schedule, as discussed in the section on disciplined processes, and could significantly jeopardize the project's success. Without adequate resources to staff the project, the project schedule could be negatively affected, project controls and accountability could be diminished, and the successful implementation of UFMS could be compromised. Strategic workforce planning is essential for achieving the mission and goals of the UFMS project. As we have reported, there are five key principles that strategic workforce planning should address:

Involve top management, employees, and other stakeholders in developing, communicating, and implementing the strategic workforce plan.

Determine the critical skills and competencies that will be needed to achieve current and future programmatic results.

Develop strategies that are tailored to address gaps in the number, deployment, and alignment of human capital approaches for enabling and sustaining the contributions of all critical skills and competencies.

Build the capability needed to address administrative, educational, and other requirements important to support workforce planning strategies.

Monitor and evaluate the agency's progress toward its human capital goals and the contribution that human capital results have made toward achieving programmatic results.

HHS has taken first steps to address three of these five key principles. To address the first key principle, HHS' top management first communicated the agency's goal to implement a unified financial management system in June 2001 and has continued to communicate the agency's vision. HHS has developed an Organizational Change Management Plan and, according to the UFMS project's Statement of Work, will seek to ensure that sufficient efforts are made to address communications, human resources, and training requirements. To meet the second principle, identifying the needed skills and competencies, HHS developed a Global Organization Impact Analysis in March 2003 and subsequently prepared an analysis for CDC that identified workforce and training implications associated with the major changes that will occur in its financial management business processes. However, more work remains.
Although a Global/CDC Pilot Competency Report was prepared that focuses on preparing and equipping the workforce to function effectively in the new environment, none of the other operating divisions scheduled to implement UFMS had prepared a competency report as of May 2004. To effectively address the third principle, developing strategies to address the gaps in human capital, HHS must first identify the skills and competencies needed. HHS has plans to conduct a skills gap analysis on a site-specific basis. However, as of May 2004, the CDC skills gap analysis had not been completed. CDC officials maintain that they intend to wait until after the system is implemented to assess the changes in individuals' workloads and make decisions on staffing changes. In addition, HHS is currently developing a global Workforce Transition Strategy, which the other operating divisions will use as a model in developing their own strategies. According to HHS officials, HHS has also prepared a global training strategy. Training plans are to be developed on a site-specific basis using the global strategy as a model. Although CDC has a tentative schedule for planned training, as of May 2004 the CDC training plan was not complete. As we have previously reported, having staff with the appropriate skills is key to achieving financial management improvements, and managing an organization's employees is essential to achieving results. HHS already faces challenges in implementing its financial management system because of the lack of adequate resources. By not identifying staff with the requisite skills to implement such a system and by not identifying and filling gaps in needed skills, HHS has reduced its chances of successfully implementing and operating UFMS. HHS has not followed key disciplined processes necessary to reduce the risks associated with implementing UFMS to acceptable levels. These problems are similar to those encountered by other agencies that have found themselves under strong pressure to skip steps in their haste to get systems up and running and produce results. If HHS continues on this path, it runs a higher risk than necessary of finding, as many others have already discovered, that the system may be more costly to operate, take more time and effort to perform needed functions, be more disruptive to the work of the agency, and not achieve the intended improvements. Ideally, HHS should not continue with its current approach for UFMS. However, if HHS decides for operational reasons to continue its plan to deploy UFMS at CDC in October 2004, then as a precursor to deployment at CDC, there are several key steps that must be taken to mitigate the significant risk related to this deployment. To begin, HHS must determine the system capabilities that are necessary for the CDC deployment and identify the relevant requirements related to those capabilities. The associated requirements will have to be unambiguous, adequately express how the system will work, be traceable from their origin through implementation, and be sufficiently tested to confirm that the system meets those functional needs. Validating data conversion efforts and systems interfaces will also be critical to the successful launch of UFMS. HHS will need to ensure that its decision to proceed with the October 2004 initial deployment of UFMS is driven by the successful completion of at least these key events, demonstrated with quantitative data, rather than by the schedule. HHS should not deploy UFMS at CDC until these critical steps are complete.
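Validating data conversion, one of the key steps named above, is typically done with automated checks run against the extracted legacy data before each mock conversion. The following sketch is illustrative only; it targets the kinds of problems found in HHS' first mock conversion (duplicate EINs, invoices without a matching customer, and vendors lacking banking information), but the record layouts and values are hypothetical.

```python
# Illustrative only: automated pre-conversion checks for the kinds of legacy
# data problems found in HHS' first mock conversion. Record layouts and
# values are hypothetical assumptions.

from collections import Counter

customers = [{"id": "C1", "ein": "12-3456789"}, {"id": "C2", "ein": "12-3456789"}]
invoices = [{"number": "INV-9", "customer_id": "C3"}]   # C3 does not exist
vendors = [{"id": "V1", "bank_routing": None}, {"id": "V2", "bank_routing": "021000021"}]

# Duplicate EINs: more than one customer sharing the same number.
ein_counts = Counter(c["ein"] for c in customers)
duplicate_eins = [ein for ein, n in ein_counts.items() if n > 1]

# Orphan invoices: an invoice whose EIN/customer has no matching customer record.
known_customers = {c["id"] for c in customers}
orphan_invoices = [i["number"] for i in invoices if i["customer_id"] not in known_customers]

# Vendors missing the banking information needed for payment processing.
vendors_missing_bank = [v["id"] for v in vendors if not v["bank_routing"]]

print("Duplicate EINs:", duplicate_eins)                       # ['12-3456789']
print("Invoices without a customer:", orphan_invoices)         # ['INV-9']
print("Vendors lacking banking data:", vendors_missing_bank)   # ['V1']
```

Counts produced by checks like these, run before and after each cleanup pass, are the sort of quantitative evidence that would let HHS demonstrate, rather than assume, that the legacy data are ready for conversion.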
Before proceeding further with the UFMS implementation beyond CDC, HHS should pause to assess whether an appropriate foundation is in place so that UFMS will achieve its ultimate goals of a unified accounting system that institutes common business rules, data standards, and accounting policies and procedures. From our perspective, HHS does not have a fully developed view of how UFMS will operate because it moved forward with the project before ensuring that certain key elements, such as a concept of operations and an enterprise architecture, were completed. Without assurances that it is moving ahead with a solid foundation and a fully developed and strongly administered plan for bringing the entire UFMS project under the disciplined processes of requirements management, testing, risk management, and the use of quantitative measures to manage the project, HHS risks not achieving its goal of a common accounting system that produces data for management decision making and financial reporting, and risks perpetuating its long-standing accounting system weaknesses with substantial workarounds to address any needed capabilities that have not been built into the system. Because we have recently issued reports providing HHS with recommendations to address weaknesses in its IT investment management and enterprise architecture processes, we are not making additional recommendations in this report related to those two disciplines other than to reiterate the importance of taking action on our prior recommendations. It will be important that HHS continue with its ongoing initiatives to strengthen these two areas. Also, HHS has not yet secured its information systems environment sufficiently to offer an adequate basis for incorporating appropriate security features into UFMS as it is being developed. Finally, addressing the human capital and staffing shortages that have also increased risks related to UFMS is paramount to achieving the agency's objectives for this project. To help reduce risks associated with deployment of UFMS at CDC to acceptable levels, we recommend that the Secretary of Health and Human Services direct the Assistant Secretary for Budget, Technology, and Finance to require that the UFMS program staff take the following nine actions:

Determine the system capabilities that are necessary for the CDC deployment.

Identify the relevant requirements related to the desired system capabilities for the CDC deployment.

Clarify, where necessary, any requirements to ensure they (1) fully describe the capability to be delivered, (2) include the source of the requirement, and (3) are unambiguously stated to allow for quantitative evaluation.

Maintain traceability of the CDC-related requirements from their origin through implementation.

Use a testing process that employs effective requirements to obtain the quantitative measures necessary to understand the assumed risks.

Validate that data conversion efforts produce reliable data for use in UFMS.

Verify that systems interfaces function properly so that data exchanges between systems are adequate to satisfy system needs.

Measure progress based on quantitative data rather than the occurrence of events.

If these actions are not completed, delay deployment of UFMS at CDC.
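The recommendation above to measure progress with quantitative data rather than the occurrence of events can be grounded in the defect tracking data HHS already collects. The following sketch is illustrative only; the defect records and the gate threshold are hypothetical assumptions, not actual UFMS data or criteria.

```python
# Illustrative only: turning defect tracking data into a quantitative
# milestone criterion, rather than treating a milestone as met simply
# because an event occurred. Records and threshold are hypothetical.

from datetime import date

defects = [
    {"id": 1, "severity": "critical", "opened": date(2004, 4, 2), "closed": None},
    {"id": 2, "severity": "major",    "opened": date(2004, 4, 5), "closed": date(2004, 4, 20)},
    {"id": 3, "severity": "minor",    "opened": date(2004, 4, 9), "closed": date(2004, 4, 12)},
]

def milestone_ready(defect_log, max_open_critical=0):
    """Pass the gate only when open critical defects are at or below the threshold."""
    open_critical = [d for d in defect_log
                     if d["severity"] == "critical" and d["closed"] is None]
    return len(open_critical) <= max_open_critical, open_critical

ready, blockers = milestone_ready(defects)
print("Milestone criteria met:", ready)                    # False
print("Blocking defects:", [d["id"] for d in blockers])    # [1]
```

Under a gate like this, the March/April 2004 conference room pilot would have been judged against measurable criteria, such as no open critical defects, rather than counted as complete merely because it was held.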
Before proceeding with further implementation of UFMS after deployment at CDC, we recommend that the Secretary of Health and Human Services direct the Assistant Secretary for Budget, Technology, and Finance to require that the UFMS program staff take the following 14 actions:

Develop and effectively implement a plan for how HHS will implement the disciplined processes necessary to reduce the risks associated with this effort to acceptable levels. This plan should include the processes, such as those identified by SEI and IEEE, that will be implemented and the resources, such as staffing and funding, needed to implement them.

Develop a concept of operations in accordance with recognized industry standards such as those promulgated by IEEE. The concept of operations should apply to all HHS entities that will be required to use UFMS. It should contain a high-level description of the operations that must be performed, who must perform them, and where and how the operations will be carried out, and it should be consistent with the current vision for the HHS information system enterprise architecture.

Implement a requirements management process that develops requirements consistent with the concept of operations and calls for each of the resulting requirements to have the attributes of good requirements: (1) fully describing the functionality to be delivered, (2) including the source of the requirement, and (3) stating the requirement in unambiguous terms that allow for quantitative evaluation.

Maintain traceability of requirements among the various implementation phases from origin through implementation.

Confirm that requirements are effectively used for (1) determining the functionality that will be available in UFMS at a given location, (2) implementing the required functionality, (3) supporting an effective testing process to evaluate whether UFMS is ready for deployment, (4) validating that data conversion efforts produce reliable data for use in UFMS, and (5) verifying that systems interfaces function properly so that data exchanges between systems are adequate to satisfy each system's needs.

Develop and implement a testing process that uses adequate requirements as a basis for testing a given system function.

Formalize risk management procedures to ensure that (1) all risks currently applicable to the UFMS project are identified and (2) a risk is closed only after it is no longer applicable, rather than once management has developed a mitigation strategy.

Develop and implement a program that will identify the quantitative metrics needed to evaluate project performance and risks.

Use quantitative measures to assess progress and compliance with disciplined processes.

To help ensure that HHS reduces risks in the agencywide IT environment associated with its implementation of UFMS, we recommend that the Secretary of Health and Human Services direct the Assistant Secretary for Budget, Technology, and Finance to require that the following seven actions be taken by the IT program management staff, as appropriate:

Conduct assessments of information security general controls at the operating divisions that have not been recently assessed.

Establish a comprehensive program to monitor access, including controls over access to the mainframe and the network.

Verify that the UFMS project management staff has all applicable information needed to fully ensure a comprehensive security management program for UFMS.
Specifically, this would include identifying and assessing the reported concerns for all HHS entities regarding the key general control areas of the information security management process: (1) entitywide security planning, (2) access controls, (3) system software controls, (4) segregation of duties, and (5) application development and change controls.

To help improve the human capital initiatives associated with the UFMS project, we recommend that the Secretary of Health and Human Services direct the Assistant Secretary for Budget, Technology, and Finance to require that the following four actions be taken by the UFMS program management staff:

- Assess the key positions needed for effective project management and confirm that those positions have the human resources needed.
- If needed, solicit the assistance of the Assistant Secretary for Budget, Technology, and Finance to fill key positions in a timely manner.
- Finalize the critical human capital strategies and plans related to UFMS: (1) a skills gap analysis, (2) a workforce transition strategy, and (3) training plans.

In written comments on a draft of this report, HHS described the actions it had taken to date to develop UFMS, including some actions related to our recommendations, which, if effectively implemented, should reduce project risk. HHS disagreed with our conclusion that a lack of disciplined processes is placing the UFMS program at risk, stating that its processes have been clear and rigorously executed. HHS characterized the risk in its approach as the result not of a lack of disciplined processes but of an aggressive project schedule. HHS stated that it made a decision early in the program to phase in the deployment of the system to obtain what it referred to as incremental benefits, and said that a core set of requirements will be available for the October 2004 release at CDC. HHS added that if a system functional capability becomes high risk for the pilot implementation at CDC, it could be deferred to a subsequent release without affecting the overall implementation. HHS did not provide examples of the functional capabilities that could be deferred under such a scenario, but we understand that at least some of the grant accounting functionality being deployed at CDC is less than originally envisioned when we performed our review, less than 6 months before the scheduled CDC implementation date. HHS stated that it had reached every major milestone to date within the planned time frames and budget for almost 3 years while managing to mitigate the cost, schedule, and technical risks. The agency considers this a testament to UFMS management disciplines, notwithstanding known needed improvements. From our perspective, this project demonstrates the classic symptoms of a schedule-driven effort in which key processes have been omitted or cut short, thereby unnecessarily increasing risk. This is a multiyear project, and it is important that the project adhere to disciplined processes that represent best practices. We have no problem whatsoever with a phased approach and view it as a sound decision for this project. There is no doubt that a phased approach can help reduce risks. However, we do not agree that a phased approach adequately mitigates risk in a project of this magnitude, given the other problems we identified.
As discussed in our report and highlighted in the following sections that further evaluate HHS’ comments on our draft report, we identified a number of problems with HHS’ methodology, including problems in requirements management, testing, project management and oversight, and IT management, that are at the heart of our concern. Also, we are not saying that HHS is not following any disciplined processes, and in this report we have recognized certain HHS actions that we believe represent best practices that reduce risk. We are saying that HHS has not reduced its risk to an acceptable level because a number of key disciplined processes were not yet in place or were not effectively implemented. We focused our 34 recommendations on tangible actions that HHS can take to adequately mitigate risk. Risk on a project such as this can never be eliminated, but risk can be much better managed than what we observed for this project. With respect to HHS’ comment that all milestones have been met, as we discussed in detail in this report, we caution that because HHS has insufficient quantifiable criteria for assessing the quality of its progress and the impact of identified defects, it does not have the information it needs to determine whether the milestones have been substantively accomplished and the nature and extent of resources needed to resolve remaining defects. A best practice is having quantitative metrics and a disciplined process for continually measuring and monitoring results. We stand firmly behind our findings that HHS had not reduced project risk to an acceptable level because it had not adequately adhered to disciplined processes called for in its stated implementation methodology. We are somewhat encouraged by the planned actions outlined in HHS’ comment letter and the fact that it has now decided to delay initial implementation by at least 2 weeks to address known problems and has indicated it may delay the initial implementation further as needed. Only time will tell how well this project turns out, as the initial implementation at CDC represents just the first phase. Our hope is that the disciplined processes discussed in our report and addressed in our recommendations will be followed and that risks of a project of this magnitude and importance will be reduced to an acceptable level. If the past is prologue, taking the time to adhere to disciplined processes will pay dividends in the long term. HHS stated that the underlying premise of our report is that there is one correct way to perform an implementation for a project such as UFMS and that this methodology, commonly referred to as the waterfall methodology, is inappropriate for a COTS-based system. Our report does not call for the use of this or any other specific methodology. Instead, we have emphasized the importance of following disciplined processes in the development and implementation of large and complex information management systems, including financial management systems such as UFMS. As we have reiterated throughout this report, we view disciplined processes as the key to successfully carrying out a system development and implementation program whatever the methodology. In the case of HHS’ COTS-based system development program, we did not question the methodology, but have concerns about HHS’ ability to successfully implement its methodology. 
For example, as explained in our report and reiterated in HHS' comments, before a COTS software package is selected for implementation, requirements need to be more flexible and less specific than for custom-developed software because no off-the-shelf product is likely to satisfy all of the detailed requirements of a large, complex organization such as HHS. Once the product is selected, however, a disciplined approach to COTS implementation demands that requirements be defined at a level of specificity that allows the software to be configured to fit the system under development and to be implemented to meet the organization's needs. In discussing the HHS methodology, our report is consistent with how HHS described its methodology in its comments. As we noted in the report, the methodology selected by HHS requires (1) reviewing and updating the requirements through process design workshops, (2) establishing the initial baseline requirements, (3) performing a fit/gap analysis, (4) developing gap closure alternatives, and (5) creating the final baseline requirements. However, as noted in our report, HHS was unable to successfully implement its methodology for the majority of the requirements we reviewed. For example, one inadequately defined requirement was linked to the budget distributions process. However, the documentation for this process, which should have provided the additional specificity needed to understand how the system was to be configured, stated that the process was "To Be Determined." In its comments, HHS stated that in July 2002 it had developed a "target business model" that is equivalent to a concept of operations for guiding its development efforts. The document HHS referenced, which we reviewed during our audit along with several other requirement-related documents HHS had provided, did not have all the elements associated with a concept of operations document as defined by IEEE. For example, the document did not address the modes of operation; user classes and how they should interact; operational policies and constraints; costs of systems operations; performance characteristics, such as speed, throughput, volume, or frequency; quality attributes, such as availability, reliability, supportability, and expandability; and provisions for safety, security, and privacy. The document also did not address a number of other critical issues associated with the project, such as the use of shared services. We also noted that some HHS officials who had reviewed this document stated that it did not resolve a number of issues that needed to be addressed. For example, HHS reviewers raised questions about who was responsible for several core functions. When we performed our review, these types of questions remained unanswered, although HHS said in its comments on our draft report that it is taking steps to address these concerns and has now made certain decisions regarding shared services. In addition, HHS' comment letter stated that it has developed a requirements database that could be used to track the requirements and that its requirements management process used two broad categories: Program Management Office of JFMIP requirements and agency-specific requirements. HHS also stated that the requirements process has fully defined and documented the expected behavior of UFMS and that the agency-specific requirements it had identified had been developed in accordance with industry best practices. HHS noted that it has also developed a requirements traceability verification matrix since our review.
The result, according to HHS, has been a requirements management process that provides fully traceable requirements that are fully tested by the implementation team. Developing and effectively implementing the kinds of processes described in HHS' comments are positive steps that would reduce the risks associated with requirements-related defects. However, since these key processes, which were called for in our report and during meetings held with HHS during our review, were developed and implemented after our work was complete, we are unable to determine whether HHS has yet fully addressed the weaknesses we observed. As noted in our report, we found numerous requirements that did not contain the necessary specificity to support a good testing program. We also note that the HHS comments refer to these processes being used for "testable" requirements but do not provide information on how many of the 2,130 requirements contained in its requirements database were considered testable and, therefore, subject to this improved process. While HHS stated in its comment letter that it has implemented a more disciplined system testing process, its comments also raised concerns about the thoroughness of the testing. HHS noted that it has selected an application certified by the Program Management Office of JFMIP and that "80% of the requirements have been met out of the box functionality." Accordingly, HHS stated that it has, by design, tested these requirements with less rigor than the agency-specific requirements. As noted in HHS' comments, its requirements management database contains 2,130 requirements that include requirements issued by the Program Management Office of JFMIP. However, according to the Program Management Office of JFMIP, its testing efforts encompass about 331 requirements, or only about 16 percent of HHS' stated requirements. Compounding this limitation, while the Program Management Office of JFMIP test results can be helpful, that office has consistently made clear to agencies that its tests are not intended to take the place of agency-level tests. The Program Management Office of JFMIP tests are conducted in a controlled environment that is not intended to represent the operating environment of a specific agency. As the Program Management Office of JFMIP points out on its Web site, agencies need to (1) test the installed, configured system to ensure continued compliance with the governmentwide core requirements and any agency-specific requirements, (2) assess the suitability of an application for the agency's operating environment, and (3) assess the COTS computing performance in the agency's environment for response time and transaction throughput capacity. For example, addressing this last point regarding transaction throughput capacity has proven problematic for some agencies that implemented a COTS package. The system could have properly processed a type of transaction, which is what the test requires in order for the package to be certified. However, the system may require a number of separate processing steps to accomplish the task. Those steps may be acceptable at an agency that has a relatively low volume of this type of transaction but may prove problematic for an agency with a high volume. As noted in the HHS comments, HHS had not yet developed the test scripts and other documentation that would have enabled us to assess the adequacy of its system testing activities at the time of our review. Therefore, we cannot conclude whether its system testing activities will provide reasonable assurance of detecting the majority of the defects.
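The transaction throughput point lends itself to simple arithmetic. The sketch below uses hypothetical step counts, timings, and volumes, none drawn from any JFMIP test or HHS system, to show how a package that correctly processes a transaction type can still be unworkable for a high-volume agency:

```python
# Hypothetical illustration: a functionally correct transaction type can
# still be operationally problematic at high volume.

SECONDS_PER_STEP = 2.0       # assumed average time per processing step
STEPS_PER_TRANSACTION = 7    # assumed separate steps the package requires

def daily_processing_hours(transactions_per_day: int) -> float:
    """Estimate the hours per day needed to push one transaction type
    through every required processing step."""
    seconds = transactions_per_day * STEPS_PER_TRANSACTION * SECONDS_PER_STEP
    return seconds / 3600.0

# A low-volume agency absorbs the extra steps easily...
print(f"Low volume (100/day): {daily_processing_hours(100):.1f} hours")
# ...while the same design may overwhelm a high-volume agency.
print(f"High volume (50,000/day): {daily_processing_hours(50_000):.1f} hours")
```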
HHS noted that it had conducted preliminary testing, referred to as conference room pilots, in August 2003 and in March and April 2004 and that these activities were attended by finance, business, and program staff members from across HHS, who will be the ultimate users of the new system. As noted in our report, our review of the conference room pilot conducted in March and April 2004 found significant weaknesses in the processes being used. This was the last conference room pilot scheduled before the pilot deployment at CDC. We found that some of the stated requirements in a given conference room pilot test script were not tested and that identified defects were not promptly recorded. This is consistent with observations made by HHS' IV&V contractor on the August 2003 conference room pilots. Furthermore, we observed that when users asked about needed functionality, they were told that the functionality would be developed later. Therefore, we are encouraged by the statement in HHS' comment letter that it will implement a disciplined system testing process. In our report, we also noted that the system testing activities were scheduled late in the first phase of the UFMS implementation process, leaving little time for HHS to address any defects identified during system testing and to ensure that the corrective actions taken to address the defects do not introduce new defects. HHS agreed that system testing would ideally come earlier in the process and noted that although the testing process is being performed late because of an aggressive schedule, it believes that, given its level of scrutiny, its testing plan will identify the majority of the defects in the system. We view this as adding to project risk. However, we are encouraged that in its comments on our draft report, HHS said it was analyzing system integration test results prior to deploying the system at CDC and that this assessment may result in revising the current software release strategy. In its comments, HHS stated that its combined use of software tools, including TeamPlay from Primavera, provides management with information for monitoring the project's critical path and the earned value of completed work and that this action was taken in October 2003 after an August 2003 report from its IV&V contractor. As with other process areas, the key to reducing risks to acceptable levels is not only the tool that is used but, more importantly, the effective implementation of that tool. In other words, simply selecting an industry standard practice or tool does not guarantee success. As noted in a May 2004 IV&V report, as of April 2004 the IV&V contractor was still raising concerns about HHS' ability to perform critical path and earned value analysis. HHS acknowledged in its comments on our draft report that it continues to work on improving the information provided in the critical path reports and is executing a plan to implement the remainder of the IV&V suggestions. As we discussed previously in this report, without an effective critical path analysis and an earned value management system, HHS does not have adequate assurance that it can understand the impact of various project events, such as delays in project deliverables, and that it knows the status of the various project deliverables in the context of progress and associated cost.
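To illustrate the kind of information an earned value management system supplies, the sketch below applies the standard earned value formulas to hypothetical figures; the amounts are illustrative and are not UFMS project data:

```python
# Standard earned value measures applied to hypothetical project figures.
planned_value = 10_000_000  # budgeted cost of work scheduled to date
earned_value = 8_500_000    # budgeted cost of work actually performed
actual_cost = 9_800_000     # actual cost of the work performed

cost_variance = earned_value - actual_cost        # negative: over cost
schedule_variance = earned_value - planned_value  # negative: behind schedule
cpi = earned_value / actual_cost                  # cost performance index
spi = earned_value / planned_value                # schedule performance index

print(f"Cost variance:     {cost_variance:>12,.0f}")
print(f"Schedule variance: {schedule_variance:>12,.0f}")
print(f"CPI = {cpi:.2f}, SPI = {spi:.2f}")  # values below 1.0 signal trouble
```

Indices such as these are only as reliable as the underlying schedule and cost inputs, which is why the IV&V contractor's continuing concerns about critical path and earned value analysis are significant.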
We continue to believe that management needs this information to determine actions to take to mitigate risk and manage cost and schedule performance. HHS also stated that all of the needed improvements in its project execution were identified and documented prior to and during our review by its IV&V contractor and that improvements continue to be implemented. Our report clearly identifies areas of mutual concern by us and the IV&V contractor as well as areas where our work uncovered additional issues. Regardless of who identified the problems, we remain concerned that HHS has been slow to act upon the weaknesses identified by the IV&V contractor and has not yet clearly identified actions planned to address our recommendations. Our report provides examples where it has taken HHS months to address the findings made by its IV&V contractor. Regarding quantitative measures, HHS agreed that quantitative measures are crucial to UFMS success and stated that it has struck an adequate balance between the number of measures used to assess UFMS progress and the effort and costs required to develop and maintain the measures. HHS described several measures related to its defect-tracking processes that are associated with its system testing efforts. We agree with HHS that the measures listed in its comment letter are critical to assessing system stability and readiness, but HHS’ comments did not indicate whether it is also capturing metrics on items that can help it understand the risks associated with the processes it is implementing, such as with its requirements management process. For example, HHS stated that system testing had not identified any requirements problems, which indicated the requirements were defined thoroughly. However, system testing is normally not designed to capture requirements problems since, as noted in HHS’ comment letter, testing is structured to determine whether the system is meeting requirements that have been documented. Therefore, it is not clear whether HHS has fully developed a metric process that will address its needs throughout the phased deployments. Regarding human capital, HHS said that it faces its share of challenges in obtaining full-time federal staff due to the temporary nature of an implementation project and the agency’s objective to staff a highly competent program team and not a permanent federal bureaucracy. We recognize that HHS and the systems integrator it has under contract to assist with the project have taken measures to acquire additional staff for the implementation of UFMS. We also recognize the challenge in finding people with the needed skills. Our concern is that the UFMS project has experienced staff shortages as high as 40 percent of the federal positions that HHS believed were needed to implement UFMS. This shortage of staff resources led to several key deliverables being significantly behind schedule. Also, while HHS said that CDC has the vast majority of its required positions filled, we found that many of the positions for this operating division were filled with staff from the program management office for the project, which affects the work that should be done to manage and oversee the project. As stated in our report, without adequate staff resources, the project schedule can be negatively affected, project controls and accountability can be diminished, and the successful implementation of UFMS may be compromised. 
With respect to IT management, including investment management, enterprise architecture, and information security, HHS elaborated on further activities taken to address weaknesses that we had pointed out in our draft report. In its comments, HHS referenced a Web site that provides its IT investment policy dated January 2001, which we had already reviewed and which agency officials stated was in the process of being updated. In January 2004, we recommended 10 actions the department should take to improve its IT investment management process. One action called for HHS to revise the department's IT investment management policy to include (1) how this process relates to other agency processes, (2) an identification of external and environmental factors, (3) a description of the relationship between the process and the department's enterprise architecture, and (4) the use of independent verification and validation reviews, when appropriate. HHS concurred with our recommendations. Further, although HHS' comments indicated that we made a recommendation related to enterprise architecture, as we stated in our conclusions, we did not make recommendations about enterprise architecture in this report. We agree with HHS that progress has been made in its information security management. However, HHS did not address the potential impact that outstanding departmentwide information security control weaknesses could have on the reliability and integrity of the new financial management system. HHS will need to ensure effective information security controls departmentwide for UFMS operations. In its response to a draft of this report, HHS stated that the timing of our review of UFMS was not optimal and required significant staff time for meetings and preparation, document requests, and communications. In HHS' opinion, GAO involvement was in itself a significant contributor to project schedule risk. In our view, we conducted this engagement in a professional, constructive manner in which we worked proactively with HHS to provide timely observations on the implementation of UFMS. The timing of our review was aimed at providing input early in the process so that HHS could act to address weaknesses and reduce the risk of implementing a system that does not meet needs and expectations and requires costly rework and workarounds to operate. We have found in our reviews of other agencies' system implementation efforts that effective implementation of disciplined processes can reduce risks that have an adverse impact on the cost, timeliness, and performance of a project. Through early recognition and resolution of the weaknesses identified, HHS can optimize its opportunities to reduce the risks that UFMS will not fully meet one or more of its cost, schedule, and performance objectives. Further, in performing our review, we made every effort to reduce inconvenience to HHS. For example, HHS asked us, and we agreed, to postpone our initial meetings with HHS staff until after the completion of HHS' fiscal year 2003 financial statement audit. We also followed HHS' protocols in scheduling meetings and requested documentation that should have been readily available at this stage of the UFMS project. HHS' adoption of several of our recommendations evidences the added value of our review, and implementation of all 34 of our recommendations will add even greater value to the project. As agreed with your offices, unless you announce the contents of this report earlier, we will not distribute it until 30 days after its date.
At that time, we will send copies to the Chairman and Ranking Minority Member, Senate Committee on Governmental Affairs, and other interested congressional committees. We are also sending copies to the Secretary of Health and Human Services and the Director of the Office of Management and Budget. Copies will also be made available to others upon request. The report will also be available at no charge on GAO's Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact Sally E. Thompson, Director, Financial Management and Assurance, who may be reached at (202) 512-9450 or by e-mail at thompsons@gao.gov, or Keith A. Rhodes, Chief Technologist, Applied Research and Methods, who may be reached at (202) 512-6412 or by e-mail at rhodesk@gao.gov. Staff contacts and other key contributors to this report are listed in appendix V. Our review of the Department of Health and Human Services' (HHS) ongoing effort to develop and implement a unified accounting system focused on one of three concurrent but separate projects: the ongoing implementation of the Unified Financial Management System (UFMS) at the Centers for Disease Control and Prevention (CDC), the Food and Drug Administration, and HHS' Program Support Center (PSC). This project is being carried out in a phased approach. HHS is currently implementing UFMS at CDC, where it is scheduled to go live in October 2004. The other two projects are the Centers for Medicare and Medicaid Services' (CMS) implementation of the Healthcare Integrated General Ledger Accounting System to replace the Financial Accounting Control System and the National Institutes of Health's (NIH) implementation of the NIH Business and Research Support System to replace the Central Accounting System. To assess HHS' implementation of disciplined processes, we reviewed industry standards and best practices from the Institute of Electrical and Electronics Engineers (IEEE), the Software Engineering Institute (SEI), the Project Management Institute, and the Joint Financial Management Improvement Program (JFMIP), as well as GAO executive guides and prior GAO reports. We reviewed and analyzed UFMS planning documents related to project management, testing, data conversion, requirements management, risk management, and configuration management. We also reviewed minutes from key meetings, such as the Information Technology Investment Review Board meetings, Risk Management meetings, and Planning and Development Committee meetings. In addition, we reviewed reports issued by the independent verification and validation (IV&V) contractor and interviewed the systems integrator to clarify the status of issues discussed in the reports.
To assess whether HHS had established and implemented disciplined processes related to requirements management, we reviewed strategy and planning documents, including its Financial Shared Services Study Concept of Operation, dated April 30, 2004; reviewed HHS' procedures for defining its requirements management framework and compared these procedures to its current practices; reviewed guidance published by IEEE and SEI and publications by experts to determine the attributes that should be used in developing good requirements; selected over 70 requirements and performed an in-depth review and analysis to determine whether they could be traced between the various process documents; attended the second conference room pilot (the session held in Rockville, Maryland) to evaluate whether the test scripts demonstrated the functionality of the listed requirements; and reviewed IV&V contractor reports to obtain the contractor's perspective on HHS' requirements management processes. To assess the risk management process, we reviewed the 44 risks documented in the PMOnline risk management tool to determine the current status of each risk and to assess its mitigation plan. We interviewed agency officials to obtain explanations for the status of the risks. We analyzed the project schedule and IV&V status reports to assess the probability of HHS meeting its projected completion dates for development, implementation, and testing. To assess information technology (IT) management practices, we reviewed prior GAO reports on governmentwide investment management and enterprise architecture. We also reviewed and analyzed relevant IT policies and plans and HHS documentation on the IT investment management processes. To assess information security practices, we relied on prior years' audit work performed in this area. We reviewed pertinent HHS security policies and procedures and reviewed HHS' efforts to minimize potential and actual risks and exposures. To determine whether HHS had the human resources capacity to successfully design, implement, and operate the financial management system, we reviewed JFMIP's Core Competencies for Project Managers Implementing Financial Systems in the Federal Government, Building the Work Force Capacity to Successfully Implement Financial Systems, and Core Competencies in Financial Management for Information Technology Personnel Implementing Financial Systems in the Federal Government, as well as prior GAO reports related to strategic workforce planning. We analyzed the UFMS program management office organization chart and obtained related information on project staffing. We also interviewed HHS officials and the IV&V contractor to discuss staffing resource issues. For all of these areas, we interviewed HHS, UFMS, IV&V, and systems integrator officials to discuss the status of the project and their roles in it. On April 26, 2004, and May 12, 2004, we briefed HHS management on our findings so that action could be taken to reduce risks associated with the UFMS project. We performed our work at HHS headquarters in Washington, D.C.; at the UFMS site in Rockville, Maryland; and at CDC offices in Atlanta, Georgia. Our work was performed from September 2003 through May 2004 in accordance with U.S. generally accepted government auditing standards. We did not review the prior implementation of Oracle at NIH or the ongoing implementation of Oracle at CMS. We requested comments on a draft of this report from the Secretary of Health and Human Services or his designee.
Written comments from the Department of Health and Human Services are reprinted in appendix IV and evaluated in the "Agency Comments and Our Evaluation" section.

Disciplined processes have been shown to reduce the risks associated with software development and acquisition efforts to acceptable levels and are fundamental to successful systems acquisition. A disciplined software development and acquisition process can maximize the likelihood of achieving the intended results (performance) within established resources (costs) on schedule. Although a standard set of practices that will guarantee success does not exist, several organizations, such as SEI and IEEE, and individual experts have identified and developed the types of policies, procedures, and practices that have been demonstrated to reduce development time and enhance effectiveness. The key to having a disciplined system development effort is to have disciplined processes in multiple areas, including requirements management, testing, project planning and oversight, and risk management. Requirements are the specifications that system developers and program managers use to design, develop, and acquire a system. They need to be carefully defined, consistent with one another, verifiable, and directly traceable to higher-level business or functional requirements. It is critical that they flow directly from the organization's concept of operations (how the organization's day-to-day operations are or will be carried out to meet mission needs). According to IEEE, a leader in defining the best practices for such efforts, good requirements have several characteristics, including the following:

- The requirements fully describe the software functionality to be delivered. Functionality is a defined objective or characteristic action of a system or component. For example, for grants management, key functionality includes knowing (1) the funds obligated to a grantee for a specific purpose, (2) the cost incurred by the grantee, and (3) the funds provided in accordance with federal accounting standards.
- The requirements are stated in clear terms that allow for quantitative evaluation. Specifically, all readers of a requirement should arrive at a single, consistent interpretation of it.
- Traceability among the various requirements documents is maintained. Requirements for projects can be expressed at various levels depending on user needs. They range from agencywide business requirements to increasingly detailed functional requirements that eventually permit the software project managers and other technicians to design and build the required functionality in the new system. Adequate traceability ensures that a requirement in one document is consistent with and linked to applicable requirements in another document.
- The requirements document contains all of the requirements identified by the customer, as well as those needed for the definition of the system.

Studies have shown that problems associated with requirements definition are key factors in software projects that do not meet their cost, schedule, and performance goals. Examples include the following:

- A 1988 study found that getting a requirement right in the first place costs 50 to 200 times less than waiting until after the system is implemented to get it right.
- A 1994 survey of more than 8,000 software projects found that the top three reasons that projects were delivered late, over budget, and with less functionality than desired all had to do with requirements management.
- A 1994 study found that the average project experiences about a 25 percent increase in requirements over its lifetime, which translates into at least a 25 percent increase in the schedule.
- A 1997 study noted that between 40 and 60 percent of all defects found in a software project could be traced back to errors made during the requirements development stage.

Testing is the process of executing a program with the intent of finding errors. Because requirements provide the foundation for system testing, specificity and traceability defects in system requirements preclude an entity from implementing a disciplined testing process. That is, requirements must be complete, clear, and well documented to design and implement an effective testing program. Absent this, an organization is taking a significant risk that substantial defects will not be detected until after the system is implemented. As shown in figure 3, there is a direct relationship between requirements and testing. Although the actual testing occurs late in the development cycle, disciplined test planning activities can help reduce requirements-related defects. For example, developing conceptual test cases based on the requirements derived from the concept of operations and functional requirements stages can identify errors, omissions, and ambiguities long before any code is written or a system is configured. Disciplined organizations also recognize that planning the testing activities in coordination with the requirements development process has major benefits. Beyond well-defined requirements, disciplined testing efforts for projects such as UFMS have several characteristics, including the following:

- Testers assume that the program has errors. Such testers are likely to find a greater percentage of the defects present in the system. This is commonly called the "testing mindset."
- Test plans and scripts clearly define what the expected results should be when the test case is properly executed and the program does not have a defect that would be detected by the test case. This helps to ensure that defects are not mistakenly accepted.
- Processes ensure that test results are thoroughly inspected.
- Test cases include exposing the system to invalid and unexpected conditions as well as to valid and expected conditions. This is commonly referred to as boundary condition testing.
- Testing processes determine whether a program has unwanted side effects. For example, a process should update the proper records correctly but should not delete other records.
- Statistics on the defects identified during testing are systematically gathered, tracked, and analyzed.

Although these processes may appear obvious, they are often overlooked in testing activities.
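Several of these characteristics can be shown in a few lines of code. The sketch below, a minimal example using Python's standard unittest module and a hypothetical posting routine rather than actual UFMS code, fixes expected results in advance, exercises invalid and boundary conditions, and checks for unwanted side effects:

```python
import unittest

def post_payment(ledger: dict, account: str, amount: float) -> None:
    """Hypothetical posting routine: increase one account balance only."""
    if amount <= 0:
        raise ValueError("amount must be positive")
    ledger[account] = ledger.get(account, 0.0) + amount

class PostPaymentTests(unittest.TestCase):
    def setUp(self):
        self.ledger = {"1010": 100.0, "2010": 50.0}

    def test_valid_posting_matches_expected_result(self):
        # The expected result is defined before the test runs, so a
        # defect cannot be mistakenly accepted as correct output.
        post_payment(self.ledger, "1010", 25.0)
        self.assertEqual(self.ledger["1010"], 125.0)

    def test_boundary_and_invalid_conditions_are_rejected(self):
        # Boundary condition testing: invalid and unexpected inputs.
        with self.assertRaises(ValueError):
            post_payment(self.ledger, "1010", 0.0)
        with self.assertRaises(ValueError):
            post_payment(self.ledger, "1010", -5.0)

    def test_no_unwanted_side_effects(self):
        # Updating one record must not alter or delete other records.
        post_payment(self.ledger, "1010", 25.0)
        self.assertEqual(self.ledger["2010"], 50.0)

if __name__ == "__main__":
    unittest.main()
```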
Project planning is the process used to establish reasonable plans for carrying out and managing the software project. This includes (1) developing estimates of the resources needed for the work to be performed, (2) establishing the necessary commitments, and (3) defining the plan necessary to perform the work. Effective planning is needed to identify and resolve problems as soon as possible, when it is cheapest to fix them. According to one author, the average project spends about 80 percent of its time on unplanned rework, that is, fixing mistakes that were made earlier in the project. Recognizing that mistakes will be made in a project is an important part of planning. According to this author, successful system development activities are designed so that the project team makes a carefully planned series of small mistakes to avoid making large, unplanned mistakes. For example, spending the time to adequately analyze three design alternatives before selecting one results in time spent analyzing two alternatives that were not selected. However, discovering that a design is inadequate after development can result in code that must be rewritten two times, at a cost greater than analyzing the three alternatives in the first place. This same author notes that a good rule of thumb is that each hour a developer spends reviewing project requirements and architecture saves 3 to 10 hours later in the project. Project oversight can also be a valuable contributor to successful projects. Agency management can perform oversight functions, such as conducting project reviews and participating in key meetings, to help ensure that the project will meet the agency's needs. Management can also use IV&V reviews to provide it with assessments of the project's software deliverables and processes. Although independent of the developer, IV&V is an integral part of the overall development program and helps management mitigate risks. Risk and opportunity are inextricably related. Although developing software is a risky endeavor, risk management processes should be used to manage the project's risks to acceptable levels by taking the actions necessary to mitigate the adverse effects of significant risks before they threaten the project's success. If a project does not effectively manage its risks, then the risks will manage the project. Risk management is a set of activities for identifying, analyzing, planning, tracking, and controlling risks. Risk management starts with identifying the risks before they can become problems. If this step is not performed well, then the entire risk management process may become a useless exercise, since one cannot manage something that one does not know anything about. As with the other disciplined processes, risk management is designed to eliminate the effects of undesirable events at the earliest possible stage to avoid the costly consequences of rework. After the risks are identified, they need to be analyzed so that they can be better understood and decisions can be made about what actions, if any, will be taken to address them. Basically, this step includes activities such as evaluating the impact on the project if the risk does occur, determining the probability of the event occurring, and prioritizing the risk against the other risks. Once the risks are analyzed, a risk management plan is developed that outlines the information known about the risks and the actions, if any, that will be taken to mitigate them. Risk monitoring is a continuous process because both the risks and the actions planned to address identified risks need to be monitored to ensure that the risks are being properly controlled and that new risks are identified as early as possible. If the actions envisioned in the plan are not adequate, then additional controls are needed to correct the deficiencies identified.
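A risk register embodying these steps can be quite simple. The sketch below uses hypothetical risks, probabilities, and impact scores (not entries from HHS' PMOnline tool); note that a risk is closed only when it no longer applies, not when a mitigation plan has been written:

```python
from dataclasses import dataclass

@dataclass
class Risk:
    description: str
    probability: float       # likelihood of occurrence, 0.0 to 1.0
    impact: int              # consequence if it occurs, 1 (low) to 5 (high)
    mitigation: str = ""
    applicable: bool = True  # close only when no longer applicable, not
                             # merely when a mitigation strategy exists

    @property
    def exposure(self) -> float:
        """Simple prioritization score: probability times impact."""
        return self.probability * self.impact

register = [
    Risk("Key interface specification undefined", 0.7, 5, "Escalate to owner"),
    Risk("Data conversion rules incomplete", 0.5, 4, "Run pilot conversion"),
    Risk("Vacancies on the testing team", 0.4, 3, "Detail additional staff"),
]

# Analyze and prioritize: review open risks in order of exposure.
for risk in sorted(register, key=lambda r: r.exposure, reverse=True):
    if risk.applicable:
        print(f"{risk.exposure:4.1f}  {risk.description}  [{risk.mitigation}]")
```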
HHS has not implemented an effective requirements management process to reduce requirements-related defects to acceptable levels or to support an effective testing process. In reviewing HHS' requirements management process, we found that (1) the requirements were not based on a concept of operations, which should provide the framework for the requirements development process; (2) traceability was not maintained between the various requirements documents; and (3) the requirements contained in those documents do not provide the necessary specificity. Because of these weaknesses, HHS does not have reasonable assurance that it has reduced its requirements-related defects to acceptable levels. Furthermore, the requirements management problems we noted also prevent HHS from developing an effective testing process until they are adequately addressed. Although HHS has performed some functions that are similar to testing, commonly referred to as conference room pilots, to help it determine whether the system will meet its needs, these efforts have not provided the quantitative data needed to provide reasonable assurance that the system can provide the needed capability. Therefore, HHS is depending on system testing, which is not expected to start until less than 2 months before system implementation, to provide it with the quantitative data needed to determine whether the system will meet its needs. Requirements for UFMS were not based on a concept of operations. The concept of operations, which contains a high-level description of the operations that must be performed, who must perform them, and where and how the operations will be carried out, provides the foundation on which requirements definitions and the rest of the systems planning process are built. Normally, a concept of operations is one of the first documents to be produced during a disciplined development effort. According to IEEE standards, a concept of operations is a user-oriented document that describes the characteristics of a proposed system from the users' viewpoint. Its development is a particularly critical step at HHS because of the organizational complexity of its financial management activities and the estimated 110 other systems HHS expects to interface with UFMS. In response to our requests for a UFMS concept of operations, HHS officials provided its Financial Shared Services Study Concept of Operation, dated April 30, 2004, which studied several approaches for HHS management to consider for implementing shared services. While making a decision on whether to operate in a shared services environment is important because it will dictate such items as hardware, network, and software needs, this study lacks many of the essential elements of a concept of operations document that can be used to fully inform users about the business processes that will be used by UFMS. Without this information, the document cannot serve as the foundation for HHS' requirements management processes. HHS management has stated that it plans to establish centers of excellence for UFMS and has identified four functions as candidates to begin shared services: UFMS operations and maintenance, customer service (call center), vendor payments, and e-travel. HHS management also decided that establishing a center of excellence for operations and maintenance should begin right away. Basically, this center of excellence will perform such UFMS operations and maintenance functions as maintaining the data tables in the UFMS database, managing various periodic closings, and performing various user maintenance functions as well as some security functions.
While HHS officials advised us that they had selected PSC to operate the operations and maintenance center of excellence, there is limited time to establish the center before UFMS’ planned deployment date at CDC. In addition, HHS has still not identified (1) who will operate the other centers of excellence and the location(s) performing these functions and (2) how these functions will be performed. To address these open issues, HHS has asked several HHS operating divisions to submit business plans for operating a center of excellence. We also analyzed various other strategy and planning documents that are expected to be used in developing UFMS. Like the Financial Shared Services Study Concept of Operation, none of these other documents individually or in their totality addressed all of the key elements of a concept of operations. For example, operational policies and constraints have not been addressed. Moreover, profiles of user classes describing each class of user, including responsibilities, education, background, skill level, activities, and modes of interaction with the current system, have not been developed. In fact, as of May 2004, HHS has been unable to get agreement on all the standard processes that it will use. For example, when HHS attempted to develop a standard way of recording grant-related information, the project team members were unable to get agreement between the various operating divisions on how to develop crosscutting codes that would have to be maintained at the departmental level. Part of the process of developing a concept of operations for an organization includes describing how its day-to-day operations will be carried out to meet mission needs. The project team tasked with developing and implementing a UFMS common accounting system attempted to develop standardized processes that would be used for the UFMS project. They held meetings with several different operating divisions to reach agreement on how the processes should be structured. Unfortunately, an agreement between the various parties could not be reached, and the decision on how these processes would be defined was deferred for further discussion for at least 6 months. Since standardized processes could not be agreed upon at the outset, additional requirements definition and validation activities must be conducted later in the development cycle when they are more costly to implement. In addition, process modifications will affect all users, including those who have been trained in and perform financial management functions using the original process. These users may have to undergo additional training and modify their existing understanding of how the system performs a given function. Because HHS has not developed a complete concept of operations, requirements definition efforts have not had the benefit of documentation that fully depicts how HHS’ financial system will operate, and so HHS cannot ensure that all requirements for the system’s operations have been defined. Without well-defined requirements, HHS cannot be certain that the level of functionality that will be provided by UFMS is understood by the project team and users and that the resulting system will provide the expected functionality. HHS has adopted an approach to requirements development that its officials believe is suited to the acquisition and development of commercial off-the-shelf software (COTS). 
HHS officials have stated that the requirements management process that we reviewed was adopted based on their belief that, for COTS development, they do not need to fully define the UFMS requirements because UFMS is not a traditional system development effort. Therefore, they adopted the following approach:

- Define high-level requirements that could be used to guide the selection and implementation of the system.
- Understand how the COTS-based system meets the high-level requirements defined for UFMS and how HHS must (1) modify its existing processes to match the COTS processes or (2) identify the areas or gaps requiring custom solutions.
- Develop specific requirements for the areas that require custom solutions and document those requirements in the requirements repository tool as derived requirements.

HHS used a hierarchical approach to develop the specific requirements from the high-level requirements used to acquire the system. These high-level requirements and the related supporting documentation were expected to help HHS identify the requirements that could not be satisfied by the COTS product. This approach includes using the high-level requirements to (1) update the requirements through process design workshops, which generated business processes; (2) establish initial baseline requirements; (3) perform a fit/gap analysis; (4) develop gap closure alternatives; and (5) create the final baseline requirements. The key advantage in using such a hierarchy is that each step of the process builds upon the previous one. However, unidentified defects in one step migrate to the subsequent steps, where they are more costly to fix and thereby increase the risk that the project will experience adverse effects on its schedule, cost, and performance objectives. HHS recognized that the high-level requirements associated with the COTS processes are "by definition, insufficient to adequately define the required behavior of the COTS based system." However, HHS has stated that UFMS will be able to demonstrate compliance with these requirements, as well as the requirements derived from high-level requirements associated with its custom development, through traditional testing approaches, including demonstrations and validations. We agree with HHS' position that requirement statements for COTS products need to be more flexible and less specific before a product is selected because of the low probability that any off-the-shelf product will satisfy the detailed requirements of an organization like HHS. As HHS has noted, COTS products are designed to meet the needs of the marketplace, not a specific organization. However, once the product is selected, requirements must be defined at a level that allows the software to be configured to fit the system under development and implemented to meet the organization's needs. As noted elsewhere, on the basis of the requirements we reviewed, HHS had not accomplished this objective. Furthermore, we identified numerous instances in which a documented requirement used to design and test the system was not traceable forward to the business processes and therefore could not build upon the next step in moving through the hierarchy; this forward linkage is commonly referred to as traceability. In addition, the requirements (1) lacked the specific information necessary to understand the required functionality that was to be provided and (2) did not describe how to determine quantitatively, through testing or other analysis, whether the system would meet HHS' needs.
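To illustrate the attributes and traceability links at issue, the sketch below shows one way a requirement record can carry a source, an unambiguous acceptance criterion, and forward trace links; the identifiers, criterion, and links are hypothetical and borrow the grants management functionality described earlier only as example content:

```python
from dataclasses import dataclass

@dataclass
class Requirement:
    req_id: str                # hypothetical identifier
    text: str                  # fully describes the functionality to deliver
    source: str                # origin of the requirement
    acceptance_criterion: str  # unambiguous, quantitatively evaluable
    traces_to_processes: list  # forward links to business process documents
    traces_to_tests: list      # forward links to test cases

req = Requirement(
    req_id="GRANTS-017",
    text="Report, for each grant, the funds obligated to the grantee for a "
         "specific purpose, the costs incurred by the grantee, and the "
         "funds provided.",
    source="High-level grants management requirement",
    acceptance_criterion="For a sample of 25 grants, reported obligations, "
                         "costs, and disbursements match source records.",
    traces_to_processes=["Process: Grants Accounting (hypothetical)"],
    traces_to_tests=["TC-GRANTS-017-01"],
)

def untraced(requirements):
    """Flag requirements lacking a forward link to a business process or
    to at least one test case, so traceability gaps surface early."""
    return [r.req_id for r in requirements
            if not r.traces_to_processes or not r.traces_to_tests]

print(untraced([req]))  # an empty list means every requirement traces forward
```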
One example of HHS' failure to adequately define a requirement and maintain traceability through the various documents is an HHS requirement regarding general ledger entries. The high-level requirement stated that the system "shall define, generate, and post compound general ledger debit and credit entries for a single transaction." The system was also expected to "accommodate at least 10 debit and credit pairs," but this information was not included in the process document for the Create Recurring Journals process, to which the requirement was tied. Therefore, someone implementing this functionality from this process document would not know the number of debit and credit pairs that must be supported. Furthermore, in April 2004, HHS conducted a demonstration for the users to validate that this functionality had been implemented. Although the demonstration documentation stated that this requirement would be covered, none of the steps in the test scripts actually demonstrated (1) how the system would process a general ledger entry consisting of 10 debit and credit pairs or (2) examples of transactions that would require such entries. Since HHS has neither demonstrated the functionality nor defined what entries need to be supported, HHS does not yet have reasonable assurance that the system can address this requirement.
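A demonstration of this requirement would, at a minimum, construct a single transaction containing at least 10 debit and credit pairs and verify that the entry balances. The sketch below shows the idea with hypothetical accounts and amounts:

```python
# Minimal sketch of the demonstration the test scripts lacked: post one
# transaction with 10 debit/credit pairs and verify that it balances.
# Account numbers and amounts are hypothetical.

pairs = [(f"14{i:02d}", f"21{i:02d}", 100.0 * (i + 1)) for i in range(10)]

entry = []  # one compound entry: two lines per debit/credit pair
for debit_acct, credit_acct, amount in pairs:
    entry.append({"account": debit_acct, "debit": amount, "credit": 0.0})
    entry.append({"account": credit_acct, "debit": 0.0, "credit": amount})

total_debits = sum(line["debit"] for line in entry)
total_credits = sum(line["credit"] for line in entry)

assert len(pairs) >= 10, "requirement: at least 10 debit and credit pairs"
assert total_debits == total_credits, "compound entry must balance"
print(f"{len(pairs)} pairs posted as one transaction; "
      f"debits = credits = {total_debits:,.2f}")
```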
HHS expects that UFMS will be able to demonstrate compliance with the HHS high-level requirements, as well as the derived requirements associated with its custom development, through traditional testing approaches, including demonstrations and validations. However, we found that as of May 2004, the necessary information to evaluate future testing efforts had not been developed for many of the requirements that we reviewed. HHS has conducted two conference room pilots that were to help determine and validate that the UFMS design and configuration meet HHS functional requirements. Such demonstrations, properly implemented, could be used to reduce the risks associated with the requirements management process weaknesses we identified. However, based on our review, the conference room pilots did not (1) significantly reduce the risks associated with the requirements management processes discussed above or (2) provide HHS with reasonable assurance that the functionality needed by its users had been implemented in UFMS. The first conference room pilot, held in August 2003, was designed to (1) demonstrate the functionality present in the COTS system that HHS believed could be used without modification and (2) identify any gaps in the functionality provided by the base system. The second conference room pilot, in March and April 2004, was conducted to demonstrate the functionality present in the system that should be available for the October 2004 implementation at CDC. This demonstration was expected to show that the gaps in functionality identified in the first conference room pilot had been addressed. Problems with these demonstrations include the following:

- The IV&V contractor noted that some of the test scripts involved a number of requirements that were only partially addressed or not addressed at all. The IV&V contractor also expressed concern that HHS would not be mapping the requirements designated as "fits" to test cases until system testing. According to the IV&V contractor, if some of the "fits" turn out to be "gaps" as a result of system testing, HHS may not have enough time to provide a solution without compromising the project schedule.
- In our observations of the second conference room pilot, held in March and April 2004, we noted several cases in which the users were told that the system's approach to a given issue had not yet been defined but that the issue would be resolved before the system was deployed. One such issue was the process for handling erroneous transactions received from other systems. For example, procedures to correct errors in the processing of voucher batches had not been fully defined as of the demonstration. HHS officials stated that this would be addressed after the second conference room pilot.
- During the demonstration, it was unclear how the five-digit object class codes used in the system will migrate to interfacing systems. We observed that four-digit object class codes from certain grant systems were cross-walked to five-digit object class codes when interfaced with the Oracle system. However, it was not clear how the data would be converted back to four-digit object class codes to flow back to the grant systems.
- The scripts used for the second conference room pilot did not maintain traceability to the associated requirements.

In discussing our observations on the March and April 2004 conference room pilot, HHS officials stated that the conference room pilots were not a phase of formal testing but rather a structured working session (the first conference room pilot) and a demonstration (the second conference room pilot). However, they stated that the system test in August 2004, less than 2 months before the system is implemented at CDC, would verify that UFMS satisfies all requirements and design constraints. Staff members who made key contributions to this report were Linda Elmore, Amanda Gill, Rosa Harris, Maxine Hattery, Lisa Knight, Michael LaForge, W. Stephen Lowrey, Meg Mills, David Powner, Gina Ross, Norma Samuel, Yvonne Sanchez, Sandra Silzer, and William Thompson.
In June 2001, the Secretary of HHS directed the department to establish a unified accounting system that, when fully implemented, would replace five outdated accounting systems. GAO was asked to review HHS' ongoing effort to develop and implement the Unified Financial Management System (UFMS) and to focus on whether the agency has (1) effectively implemented disciplined processes; (2) implemented effective information technology (IT) investment management, enterprise architecture, and information security management; and (3) taken actions to ensure that the agency has the human capital needed to successfully design, implement, and operate UFMS.

HHS has not followed key disciplined processes necessary to reduce the risks associated with implementing UFMS to acceptable levels. While development of a core financial system can never be risk free, effective implementation of disciplined processes can reduce those risks to acceptable levels. The problems that have been identified in such key areas as requirements management (including developing a concept of operations), testing, data conversion, systems interfaces, and risk management, compounded by incomplete IT management practices, information security weaknesses, and problematic human capital practices, significantly increase the risks that UFMS will not fully meet one or more of its cost, schedule, and performance objectives. With initial deployment of UFMS at the Centers for Disease Control and Prevention (CDC) scheduled for October 2004, HHS has not developed the quantitative measures needed to evaluate the impact of the many process weaknesses identified by GAO and others on its project efforts. Without well-defined requirements that are traceable from origin to implementation, HHS cannot be assured that the system will provide the functionality needed or that testing will identify significant defects before rollout, when they are less costly to correct. The agency has not developed the necessary framework for testing requirements, and its schedule leaves little time for correcting process weaknesses and identified defects. HHS has focused on meeting its predetermined milestones in the project schedule to the detriment of disciplined processes. If HHS continues on this path, it risks not achieving its goal of a common accounting system that produces data for management decision making and financial reporting, and it risks perpetuating its long-standing accounting system weaknesses with substantial workarounds to address needed capabilities that have not been built into the system. Accordingly, GAO believes these issues need to be addressed prior to deployment at CDC.

Beyond the risks associated with this specific system development, HHS has departmental weaknesses in IT investment management, enterprise architecture, and information security. Because of the risks related to operating UFMS in an environment with flawed information security controls, HHS needs to take action to ensure that UFMS benefits from strong information security controls. HHS is modifying its IT investment management policies, developing an enterprise architecture, and responding to security weaknesses with several ongoing activities, but substantial progress in these areas is needed to prevent increased risks to cost, schedule, and performance objectives for UFMS. In human capital, many positions were not filled as planned and strategic workforce planning was not timely.
HHS has taken the first steps to address these issues; however, ongoing staff shortages have played a role in several key deliverables being significantly behind schedule.
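To make the compound general ledger entry requirement discussed earlier concrete, the following is a minimal sketch, in Python, of a balanced compound entry that accommodates at least 10 debit and credit pairs. The data structures, account codes, and the interpretation of "pairs" as matched counts of debit and credit lines are hypothetical illustrations, not HHS's or UFMS's actual design.

```python
from dataclasses import dataclass

@dataclass
class LedgerLine:
    account: str       # general ledger account code (hypothetical numbering)
    debit: float = 0.0
    credit: float = 0.0

def validate_compound_entry(lines: list[LedgerLine], min_pairs: int = 10) -> None:
    """Check the two properties implied by the requirement: the entry must
    balance, and it must accommodate at least `min_pairs` debit and credit
    pairs (here read as matched counts of debit and credit lines)."""
    total_debits = sum(l.debit for l in lines)
    total_credits = sum(l.credit for l in lines)
    if round(total_debits - total_credits, 2) != 0:
        raise ValueError("entry does not balance")
    pairs = min(sum(1 for l in lines if l.debit), sum(1 for l in lines if l.credit))
    if pairs < min_pairs:
        raise ValueError(f"only {pairs} debit/credit pairs; need {min_pairs}")

# A single transaction posting 10 debit lines offset by 10 credit lines.
entry = [LedgerLine(account=f"4610.{i:02d}", debit=100.0) for i in range(10)] + \
        [LedgerLine(account=f"2110.{i:02d}", credit=100.0) for i in range(10)]
validate_compound_entry(entry)  # passes: balanced, 10 pairs
```

A test script demonstrating the requirement would, at a minimum, post an entry like this one and show it recorded correctly; as noted above, the April 2004 demonstration did neither.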
EPA’s 2009–2013 grants management plan contained five strategic goals that addressed weaknesses we identified in our 2006 grants management report related to (1) demonstrating achievement of environmental results; (2) fostering a high-quality grants management workforce; (3) enhancing the management process for grants policies and procedures; (4) standardizing and streamlining EPA’s grants management processes; and (5) leveraging technology to strengthen decision making and increase public awareness. Across the five strategic goals, the plan outlined 17 performance goals with targeted levels of performance or timeframes to hold agency officials accountable for achieving results (see table 1).

EPA is currently developing a 5-year plan for 2016–2020. Similar to its 2009–2013 plan, EPA’s draft plan also addresses five strategic goals: (1) maintaining an effective grants management policy, (2) streamlining grants management procedures, (3) fostering a high-quality grants management workforce, (4) ensuring transparency and demonstrating results, and (5) evaluating grants management performance (see app. III).

As we have previously found, in developing new initiatives, agencies can benefit from following leading practices for strategic plans. In 1993, GPRA was enacted to improve federal program effectiveness and public accountability, among other purposes, and established a system for agencies to set goals for program performance and to measure results. The statutory framework for performance management in the federal government was updated with the GPRA Modernization Act of 2010 (GPRAMA). OMB’s Circular A-11 provides guidance to agencies on how to prepare these plans in accordance with GPRA and GPRAMA requirements. We found that these requirements also can serve as leading practices at lower levels within federal agencies, such as planning for individual divisions, programs, or initiatives. We identified 17 leading practices related to strategic planning and selected 4 based on their applicability to (1) program-level strategic planning, (2) the content of the plan rather than the planning process, and (3) grants management. See table 2 for selected leading practices in federal strategic planning.

Of the 17 performance goals in its 2009–2013 grants management plan, EPA fully met 2, partially met 6, and did not meet 1, according to our review of EPA grants management performance data. EPA did not measure its progress for the remaining 8 performance goals, according to OGD responses to our standard set of questions.

EPA made the most progress toward achieving the performance goals under its strategic goals of standardizing and streamlining the grants business process and enhancing the management process for grants, according to our review of EPA data, planning documents, and OGD responses to a standard set of questions. Specifically, under its strategic goal of standardizing and streamlining the grants business process, EPA fully met 2 of its 6 performance goals—share of grants awarded in a timely manner and share of eligible dollars and awards that were competitively awarded. EPA partially met 3 other performance goals under this strategic goal—share of expired grants closed out in a timely manner, reduction in the amount of unexpended expired funds, and share of applications processed in a timely manner—because it met its performance goal for many, but not all, of the years from 2009 through 2013.
In addition, EPA partially met 2 of its 3 performance goals—reviewing guidance for consistency and completing a comprehensive guidance manual—under its strategic goal of enhancing the management process for grants policies and procedures. EPA partially met these performance goals because it completed the activities under both goals but missed the deadline in its grants management plan for these goals by 2 years and 4 years, respectively. Finally, EPA partially met its goal of migrating its IT system to a governmentwide grants management system. Specifically, EPA analyzed system alternatives but did not implement a new system by the March 2012 deadline in the GMP.

In addition, EPA did not meet 1 performance goal—preparing a long-term training plan—under the strategic goal of fostering a high-quality workforce. According to OGD officials, OGD determined that it did not have available resources to develop a long-term training plan. However, OGD reported that it took steps to mitigate any negative effect by implementing new training tools, such as webinars, online training, and lectures, to meet the agency’s needs and provide more efficient and flexible methods for a changing training environment.

For the remaining 8 performance goals, which span all five strategic goals, EPA did not measure its performance. Specifically, EPA did not measure its performance for 2 performance goals under its strategic goal of demonstrating achievement of environmental results, 2 of the 3 performance goals under its strategic goal of fostering a high-quality grants management workforce, 1 of the 3 performance goals under its strategic goal of enhancing the management process for grants policies and procedures, 1 of the 6 performance goals under its strategic goal of standardizing and streamlining the grants business process, and 2 of the 3 performance goals under its strategic goal of leveraging technology to strengthen decision making and increase public awareness. Table 3 shows the status of EPA’s 17 performance goals and our assessment of whether EPA met, partially met, or did not meet each goal.

EPA officials provided five reasons why, for 15 of the 17 performance goals, the agency either did not measure (8), partially met (6), or did not meet (1) the goals. These five reasons were redirected resources, process delays, IT constraints, budget constraints, and errors requiring rework (see app. II for more detail).

Redirected resources. According to OGD responses to our standard set of questions, of the 8 performance goals not measured, EPA did not measure 5 because it redirected some of its grants management resources to managing American Recovery and Reinvestment Act of 2009 (ARRA) funds. Under ARRA, EPA more than doubled its grants awards from $3.7 billion in fiscal year 2008 to $9.8 billion in 2009. Although ARRA provided EPA with additional funds to manage these grants, the EPA OIG found that the additional workload for ARRA activities impacted non-ARRA work. According to OGD responses to a standard set of questions, for all 5 performance goals, EPA had to redirect resources from implementing its grants management plan to meet additional requirements under ARRA. For example, EPA typically monitors key aspects of grants annually, but for ARRA grants, EPA required routine monitoring every 90 days to support ARRA quarterly reporting requirements and required more in-depth monitoring twice a year, according to an agency assessment of EPA’s ARRA activities.
Of the 5 performance goals that EPA did not measure, 2 addressed increasing the share of state workplans and progress reports consistent with EPA’s environmental results directives (OGD responses show that budget constraints were also a factor); 2 addressed increasing staff satisfaction with EPA’s performance appraisal system and available IT tools; and 1 addressed increasing the share of grants management staff with performance plans that include grants management (OGD responses show that budget constraints were also a factor).

Process delays. For 4 performance goals that EPA either partially met (3) or did not measure (1), OGD responses to our standard set of questions and supporting documents indicate that the activities took longer than expected. For example, 1 performance goal was for EPA to migrate its IT system to a governmentwide grants management system by March 31, 2012. OGD responses and documented analysis of EPA’s IT systems show that the agency partially met this performance goal because the systems EPA initially identified either did not meet its needs or were too expensive. As a result, the agency had to identify and analyze alternative systems, which led to EPA missing its 2012 deadline. Delays in approving and revising policies also caused EPA to partially meet 2 performance goals related to its strategic goal of enhancing the management process for grants policies and procedures, according to OGD responses to our standard set of questions. For the performance goal that EPA did not measure, OGD responses state that the agency did not measure its performance goal for improving training timeliness because approving policies took longer than expected.

IT constraints. According to OGD responses to our standard set of questions, EPA did not measure 2 of its performance goals and partially met 1 because of IT constraints. Specifically, EPA did not measure the share of grants that receive routine monitoring annually because its IT system could not sufficiently track the variation in due dates for individual awards. In addition, EPA did not measure its performance goal to increase the share of certain state and tribal grants offered via Grants.gov because the website could not handle the increased demand during ARRA implementation, according to OGD responses and supporting documents. EPA partially met its performance goal to reduce unexpended expired funds because EPA officials said that their IT system could not produce accurate data for the end of the fiscal year in 2013.

Budget constraints. EPA partially met 1 performance goal and did not meet 1 goal due to budget constraints, according to OGD responses to our standard set of questions and supporting documents. Specifically, EPA partially met its goal to improve the share of expired grants closed in a timely manner because it redirected resources from closing grants due to furloughs associated with sequestration, according to OGD responses and supporting documents. EPA did not meet its performance goal to develop a long-term training plan because the agency determined that it did not have the resources to do so, according to OGD responses and budget data. For example, OGD’s workforce decreased from 79 full-time equivalent staff in 2009 to 71 full-time equivalent staff in 2013.
Errors requiring rework. EPA officials said that EPA partially met its performance goal to increase the share of applications processed in a timely manner because errors in processing application packages required the agency to rework several packages. According to EPA officials and agency workload analyses, in some cases, these errors resulted from a large workload for staff who also managed ARRA grants. In other cases, staff who had a small grants management workload made errors because they were not familiar with policy and IT requirements. These officials said that the complexity of some grant projects was also a factor.

For five performance goals, EPA and OGD reported that not meeting or not measuring the goals did not affect EPA’s grants management activities because the agency either mitigated the potential negative effect of missing the performance goal or the negative effect was minimal, according to our analysis of OGD responses and supporting documents (see app. II). For example, EPA reported that it mitigated the potential effect of missing EPA’s performance goal to increase the share of grants management staff whose performance plans include grants management. Specifically, EPA determined that it could build staff’s grants management activities into their performance plans by providing staff managers with (1) additional performance guidance and (2) individual and agency-level grants management performance data for comparison, according to OGD responses and EPA performance guidance documents. For another performance goal, to mitigate the effect of not measuring staff satisfaction with an IT application, EPA addressed the low ratings of the IT application’s operation in its 2010 baseline survey by adding more user-friendly features, such as a web-based reporting tool, according to OGD responses to our standard set of questions and an internal memorandum. For its performance goal of developing a long-term training plan, EPA mitigated the negative effects of missing its goal by implementing new training tools, such as webinars, online training, and lectures, to meet the agency’s training needs and provide more efficient and flexible methods for a changing training environment, according to OGD responses and training documents. EPA’s actions are consistent with our March 2004 guidance for assessing training, which states that agencies should modify their efforts to fit their unique circumstances and conditions.

For the other two performance goals, the effect of not measuring them was minimal, according to our review of OGD responses to our standard set of questions. Specifically, EPA did not measure employee satisfaction with its performance appraisal system, which likely had a minimal effect on grants management activities. Similarly, for its performance goal to increase the share of expired grants closed out in a timely manner, EPA missed the target in 2013 by less than 1 percent, which also likely had a minimal effect on grants management.

However, for 10 performance goals, our review of OGD responses to our standard set of questions and supporting documents found negative effects of the agency not measuring or partially meeting them, as follows:

Limited the agency’s data on compliance. For three performance goals, not measuring them led to the absence of agencywide data on compliance with directives intended to ensure that grant funds achieve the desired results of protecting human health and the environment, according to OGD responses to our standard set of questions and supporting documents.
Specifically, because of ARRA demands and budget constraints, EPA did not measure its two performance goals to increase the share of grant recipients’ workplans and progress reports consistent with EPA’s environmental results directive, according to OGD responses and agency planning documents. Additionally, although EPA collects real-time data on compliance with routine monitoring requirements, because of IT constraints, EPA did not measure its performance goal on the share of grants that receive routine monitoring annually on a cumulative basis, according to OGD responses. As a result, EPA does not have a complete picture of its compliance with certain directives—directives that are designed to ensure that funds are used appropriately and achieve the desired results. EPA planning documents indicate the agency plans to review state workplans and progress reports in fiscal year 2017, and EPA officials said that they are looking into capturing annual data on compliance with routine monitoring requirements.

Inefficient processes. For two performance goals, not measuring or partially meeting them led to less efficient processes remaining in place, according to OGD responses to our standard set of questions and supporting documents. For example, EPA did not establish a baseline or measure its performance goal on the share of certain state and tribal grants offered via Grants.gov, according to OGD responses. Consequently, some applicants continued to apply for these grants by e-mail or on paper, according to OGD responses and policy documents. Additionally, EPA partially met its performance goal to migrate its IT system to a governmentwide grants management system, which led to EPA’s continued use of an IT system that EPA IT analyses state is aging, inefficient, and, in some cases, requires data entry in multiple databases to document a single action. According to these analyses, these inefficient processes result in a greater workload for a grants management workforce that is already strained.

Limited access to information. For two performance goals, partially meeting them led to limited access for grants managers to accurate information on grants management directives, according to OGD responses to our standard set of questions. First, EPA partially met its performance goal to develop comprehensive guidance (i.e., a single manual containing all current guidance on grants management policies) because it did not do so by its 2009 deadline. Specifically, agency documents show that EPA’s comprehensive guidance manual was last substantially updated in 1988, and sections of the manual were out of date. Although EPA issued a series of policy updates on EPA’s website, these updates were not part of the manual. Agency officials said that they communicated changes as they happened through EPA’s internal website; however, OGD responses recognize that not having comprehensive guidance negatively affected grants managers’ timeliness in processing grants. In 2013, EPA integrated and updated all of its guidance online, providing grants managers access to up-to-date information on changing grants requirements, according to OGD responses and an internal memorandum. Second, EPA partially met its performance goal to review all of its guidance for consistency by 2011 and 2012, because it did not complete its review until 2013. As part of its review, EPA policy documents show that the agency identified 25 cases in which its policy updates were incorrect and included obsolete or redundant policies.
Therefore, until EPA completed its review, identified errors, and corrected its guidance, grants managers did not have readily available, accurate information.

Delayed process for awarding grants. In another instance, partially meeting its performance goal for increasing the share of applications processed in a timely manner increased the amount of time it took for EPA to provide funding to recipients, according to OGD responses to our standard set of questions and performance data. As a result, from 2010 to 2013, EPA did not process grant application packages (commitment notices) within its 60-day target for an additional 10 percent of EPA applications, according to EPA performance data.

Delayed training and policy implementation. According to OGD responses to our standard set of questions and EPA policy and training documents, EPA did not measure its performance for improving the timeliness of training and, in some instances, EPA provided training to grants management officials for a new policy after the policy was already in effect. This was not consistent with EPA’s grants management plan, which called for EPA to offer training on new policies at least 4 weeks prior to implementing them. Although EPA officials said that the negative effect in these cases was minimal because the grant workload in October was low, without training prior to these policies’ effective dates, EPA does not have reasonable assurance that grants management officials applied the policies consistently from the dates that they went into effect. In addition, EPA policy documents show that, in some cases, EPA delayed the implementation of new policies designed to simplify and streamline the grants process for recipients to accommodate the training schedule. Because of these delays, recipients could not immediately benefit from these policy improvements as originally planned, according to OGD responses to our standard set of questions and supporting documents.

Inefficient use of grant funds. EPA could not confirm whether it met its performance goal for reducing unexpended expired funds in 2013 and was $900,000 short of meeting its goal in early September 2013. According to EPA’s OIG, unexpended expired funds are missed opportunities for EPA and grant recipients to efficiently fund projects and efforts that meet EPA’s mission of protecting human health and the environment.

As of May 2015, EPA’s November 2014 version of its draft 2016–2020 grants management plan partially follows four selected leading practices for federal strategic planning that we identified from prior GAO work and OMB guidance (see table 4). EPA officials said that they designed their 2016–2020 plan to be more high-level than the 2009–2013 plan so that it would be more flexible and adaptive to changing circumstances, such as legislation that changes EPA priorities. For example, unlike some of the objectives in the 2009–2013 plan, EPA does not prescribe how the agency should meet the objectives in its draft 2016–2020 plan, to give the agency discretion in choosing the most efficient implementation method, according to EPA officials and our analysis of the draft plan. As shown in table 4, our analysis of EPA’s draft plan indicates that, as of May 2015, it partially incorporated four leading planning practices relevant to grants management:
Define the mission and goals. EPA’s draft plan partially follows this leading practice in that it defines five strategic goals, which explain the grants management program’s purpose and the results that the agency intends to achieve. However, as of May 2015, the agency does not yet link these goals to an overarching mission statement. According to leading strategic management practices, a mission statement explains why the program exists, what it does, and how. We have previously found that a mission statement forms the foundation for a coordinated, balanced set of strategic goals and performance measures. Agency officials said that EPA has not yet incorporated a mission statement into its draft plan because the agency is awaiting stakeholder agreement on the underlying framework but that it plans to do so. Ensuring that its plan has a mission statement could help EPA better establish a framework to effectively guide the agency’s overall vision for grants management.

Define strategies and identify resources needed to achieve goals. EPA’s draft plan partially follows this leading practice because it includes strategic objectives, but as of May 2015, the plan does not yet define strategies that address management challenges, include milestones for significant activities, or identify the resources necessary for the agency to achieve its strategic goals. We have previously found that it is particularly important for agencies to define strategies that address management challenges that may threaten their ability to meet long-term goals and to include a description of the budgetary and human resources, actions, and time frames needed to meet these goals. EPA officials said that they would discuss the resources needed to achieve their goals but had not considered including a discussion of resources in their draft plan. By also including in its draft plan strategies for addressing the management challenges facing the agency and the resources needed to achieve its goals, EPA could better ensure that its staffing and funding are sufficient to achieve those goals. The agency could also better prepare for future changes in workload or funding—problems that we found had constrained the agency in the past.

Ensure leadership accountability. EPA’s draft plan partially follows this leading practice in that one of the strategic goals is dedicated to evaluating the agency’s performance at managing its grants, which incorporates a degree of accountability into the plan. According to leading federal strategic planning practices, successful organizations use formal and informal practices to hold managers accountable and create incentives for working to achieve the agency’s goals. However, as of May 2015, the other four goals in the draft plan do not yet include mechanisms to hold EPA managers accountable for achieving the agency’s goals. For example, in 2012, we found that EPA had ensured leadership accountability for its environmental justice strategic plan by giving senior administrators lead responsibility for implementing the plan and incorporating relevant environmental justice measures in its annual national program guidance. By including mechanisms to hold managers accountable for the other four strategic goals, EPA will be better positioned to ensure that the grants program achieves its goals.
Develop and use performance measures. EPA’s draft plan partially follows this leading practice in that it has 11 performance measures but, as of May 2015, only one performance measure has a measurable or numeric target associated with it. According to leading practices, performance measures gauge the agency’s progress toward its mission and strategic goals. They provide information on which the agency can base decisions and create incentives that influence organizational and individual behavior. We have previously found that one of the key attributes of successful performance measures is a measurable target and that such measurable targets can challenge the agency to improve its results. According to agency officials, EPA is planning to develop more measurable targets as part of an annual priority planning process but, at the time of this report, that effort was not yet complete.

As of May 2015, the draft 2016–2020 plan provides a road map that builds on EPA’s progress standardizing and streamlining the grants management process since 2009 and may help the agency continue to work toward the goals set out in the 2009–2013 plan. We have previously found that a primary purpose of federal strategic planning is to improve federal agency management. By more fully following leading practices for federal strategic planning, EPA could have better assurance that it has established a framework to effectively manage and assess efforts to accomplish its grants management strategic and performance goals, without reducing the plan’s flexibility, and that framework may help the agency address its long-standing grants management weaknesses.

EPA has made some progress monitoring its compliance with seven selected postaward grants management directives—such as those dealing with compliance, review, and monitoring and with achieving environmental results from EPA grants—agencywide, but it continues to face two key challenges. Specifically, since our 2006 report, OGD has begun monitoring more grants management directives agencywide through its IT systems, such as tracking unexpended grant funds and grantees’ timely submission of reports. However, two key challenges hamper EPA’s efforts to monitor directives agencywide: (1) most of its regional offices rely on paper files and (2) its IT systems have limited reporting and analytical capabilities.

Since 2006, OGD has developed the ability to monitor EPA’s compliance with certain requirements in its grants management directives electronically. For example, EPA has been monitoring administrative activities of grant recipients, unexpended grant funds, and whether grant recipients have submitted their final reports on time. As part of this monitoring, OGD tracks the number of grants for which program officers and grants specialists completed routine annual monitoring, as well as the percentage of grants that received such monitoring against the agency’s performance goal. OGD tracks this information in real time, which provides a snapshot of routine annual monitoring activities. Additionally, OGD monitors the number, dollar amount, and percentage of total unspent grant funds for headquarters and all EPA regions. OGD tracks the percentage of grantees’ final technical reports received in a given fiscal year and compares it to the agency’s performance goal. OGD officials also electronically review and verify certain administrative monitoring actions, such as the percentage of closed-out grants for a given calendar year.

However, EPA faces two key challenges in its agencywide monitoring efforts.
First, 8 of its 10 regional offices document their compliance with grants management directives in paper files, according to EPA officials. One additional regional office uses electronic record-keeping for interagency agreements but not for grants. Monitoring these offices’ compliance with grants management directives generally requires paper file reviews, which agency officials described as resource-intensive. As a result, EPA officials told us that they deferred some planned compliance reviews due to budget constraints. According to EPA officials, the agency recognizes that paper records are outmoded and plans to transition to electronic records management, but the officials did not provide a timetable for completing this transition. EPA officials stated that EPA headquarters is currently using an electronic grant records system, three regions have agreed to develop electronic grant records systems, and EPA is encouraging the remaining regions to adopt such systems; however, according to OGD officials, the remaining regions are unable to do so due to budget constraints.

Second, we found that limitations in OGD’s IT systems’ reporting and analysis capabilities mean that the systems do not produce comprehensive, agencywide summary information for most of the directive requirements we reviewed. This prevents managers from comparing actual performance to expected results agencywide and analyzing significant differences, consistent with the federal standards for internal control. In 2009 and 2011, EPA deployed two web-based reporting systems to pull certain information from its databases for analysis; however, as of June 2015, EPA used these tools to monitor only 8 percent (17 of 212) of its requirements. OGD officials stated that they have the capability to use their current web-based tools more broadly, but they have not done so. According to OGD officials, their process for determining which requirements to track agencywide using these web-based tools is to follow the measures in their grants management plan, as well as to take into account the results of OIG and GAO audits.

In addition to the limited agencywide information, OGD’s IT systems require staff to manually review information entered into the database to ensure its accuracy and completeness for most of the requirements (117 of 212) in the seven management directives we reviewed. However, such manual reviews are not consistent with federal standards for internal control, which call for control activities specific to information systems, including computerized edit checks built into the system to review the format, existence, and reasonableness of data (e.g., accuracy and completeness).
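For illustration, the following is a minimal sketch of the kind of computerized edit checks the internal control standards describe: format, existence, and reasonableness tests applied as a record enters the system. The field names, ID pattern, and dollar bounds are hypothetical placeholders, not EPA's actual schema or rules.

```python
import re

def edit_check(record: dict) -> list[str]:
    """Return a list of edit-check failures for one grant record,
    covering existence, format, and reasonableness of the data."""
    errors = []
    # Existence: required fields must be present and non-empty.
    for required in ("grant_id", "award_amount", "closeout_date"):
        if not record.get(required):
            errors.append(f"missing field: {required}")
    # Format: e.g., a grant ID pattern of two letters and six digits.
    if record.get("grant_id") and not re.fullmatch(r"[A-Z]{2}\d{6}", record["grant_id"]):
        errors.append("grant_id has invalid format")
    # Reasonableness: award amounts must fall in a plausible range.
    amount = record.get("award_amount")
    if amount is not None and not (0 < amount < 1_000_000_000):
        errors.append("award_amount outside reasonable range")
    return errors

print(edit_check({"grant_id": "XA12345", "award_amount": -5}))
# ['missing field: closeout_date', 'grant_id has invalid format',
#  'award_amount outside reasonable range']
```

Checks like these run automatically on entry, replacing the manual reviews that staff must now perform for most of the requirements we examined.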
According to EPA officials, the agency currently plans to adopt an updated grants management system by 2017, and it is transitioning to the new system in phases that correspond to the grants lifecycle. The agency’s 2016 draft grants management plan incorporates IT system improvements suggested in a 2014 study of EPA’s grants management process. As part of its plans for an updated grants management system, EPA has included establishing a single official electronic file to house all grant information for each individual grant. However, EPA has planned to update its grants management IT system since its 2009 plan but has not yet done so. According to EPA officials, analyzing IT alternatives took longer than they expected and they had to reprioritize their grants management activities due to the additional workload required under ARRA. Nonetheless, agency officials said that the IT system that they plan to implement will save the agency $27 million.

In the meantime, however, EPA has limited information on its agencywide compliance with certain grants management directives intended to provide internal controls over how funds are used and how results are obtained. Better monitoring of agencywide compliance with these directives through electronic record-keeping and using its existing web-based tools more effectively could help EPA better meet federal standards for internal control and help ensure that funds reach grantees quickly, are used appropriately, and achieve the desired results of protecting human health and the environment.

EPA has developed several strategies for addressing its past challenges managing the several billion dollars it distributes each year in grants to help protect human health and the environment. EPA incorporated these strategies into its 2009–2013 plan and made some progress toward achieving its strategic goals. To build on its progress, EPA has developed a draft 2016–2020 grants management plan that, as of May 2015, partially follows several leading strategic planning practices but does not yet include certain key elements, such as defining strategies that address management challenges that may threaten EPA’s ability to meet long-term goals and identifying the resources, actions, and time frames needed to meet these goals. We recognize that EPA is designing its draft 2016–2020 plan to be more flexible and adaptable to changing circumstances to address some of the constraints that prevented the agency from meeting all of its 2009–2013 plan goals. Nonetheless, our past work shows that incorporating more leading practices into the final plan could provide EPA with reasonable assurance that it has established a framework to effectively guide and assess efforts to accomplish its grants management goals, without reducing its flexibility. Doing so could also help EPA address long-standing grants management weaknesses, such as tracking environmental results. Since the draft plan is still under development, EPA has the opportunity to incorporate more of these selected leading practices into the final plan.

EPA has made progress monitoring its compliance with certain grants management directives agencywide, yet the two key challenges it faces—dependence on paper files and limitations in its IT systems—continue to hamper its ability to monitor certain requirements agencywide. EPA plans to transition to electronic records management for all 10 of its regional offices, but it does not have a timetable for doing so, and some regional offices have not implemented electronic records management due to budget constraints. EPA also currently plans to adopt an updated grants management system by 2017, and it has incorporated addressing potential IT improvements as part of its draft 2016–2020 plan. However, EPA has had similar plans to improve its IT system since 2009 but has not done so because the systems the agency initially identified either did not meet its needs or were too expensive, resulting in a need to identify and analyze alternative systems. In the meantime, EPA has made limited use of its existing web-based tools for analyzing and reporting agencywide compliance, in part because it has focused its analytical efforts on the measures in its grants management plan.
By using existing web-based tools more effectively until it implements its new IT system, EPA can better monitor agencywide compliance with grants management directives.

We recommend that the EPA Administrator direct OGD to take the following four actions:

Incorporate all leading practices in federal strategic planning relevant to grants management as it finalizes its draft 2016–2020 grants management plan, such as defining strategies that address management challenges that may threaten the agency’s ability to meet long-term goals and identifying the resources, actions, and time frames needed to meet these goals.

Develop a timetable with milestones and identify and allocate resources for adopting electronic records management for all 10 regional offices.

Implement plans for adopting an up-to-date and comprehensive IT system by 2017 that will provide accurate and timely data on agencywide compliance with grants management directives.

Until the new IT system is implemented, develop ways to more effectively use existing web-based tools to better monitor agencywide compliance with grants management directives.

We provided a draft of this product to EPA for comment. In its written comments, reproduced in appendix IV, EPA generally agreed with our findings and recommendations, with the following exceptions.

With respect to our recommendation that EPA implement plans for adopting an up-to-date and comprehensive IT system by 2017 that will provide accurate and timely data on agencywide compliance with grants management directives, EPA agreed with the recommendation except with respect to the 2017 completion date. EPA said that the agency will need time to prioritize which grants management directive requirements to include in the system, determine which IT approaches to take, and identify resources through the budget process. EPA said that implementation is therefore likely to extend beyond 2017. The 2017 completion date is based on an EPA internal planning document, which stated that the agency currently plans to adopt an updated grants management system by 2017. We also note that EPA had difficulty meeting its deadlines from its 2009–2013 Grants Management Plan. We continue to believe that EPA should implement an up-to-date and comprehensive IT system as expeditiously as possible to improve agencywide oversight of the several billion dollars the agency distributes each year.

EPA also disagreed with our conclusion that long-standing grants management weaknesses, such as tracking environmental results, continue to exist, stating that EPA, with the concurrence of the OIG, eliminated long-standing grants management as a material or agency weakness in 2007. The grants management weaknesses referenced in our conclusions are issues that the OIG found since EPA issued its 2009–2013 grants management plan and not those material or agency weaknesses that were eliminated in 2007. We continue to believe this conclusion is valid.

As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees, the Administrator of the Environmental Protection Agency, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or gomezj@gao.gov.
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V.

This report examines (1) the extent to which the Environmental Protection Agency (EPA) met the performance goals in its 2009–2013 grants management plan; (2) the extent to which EPA’s 2016–2020 draft grants management plan follows selected leading practices for federal strategic planning; and (3) the progress, if any, EPA has made in monitoring compliance with grants management directives agencywide.

To examine the extent to which EPA met the performance goals in its 2009–2013 grants management plan, we reviewed the plan. We collected data and requested responses from the Office of Grants and Debarment (OGD) for a standard set of questions on EPA’s progress in achieving its 17 performance goals, including officials’ explanations of effects, if any, from not meeting these goals and steps EPA took to mitigate the reported effects. We compared OGD responses with supporting documentation provided by agency officials, such as policies, internal briefings, EPA analyses of its information technology systems, and other documents. As part of OGD responses, OGD officials provided data on the performance goals EPA measured, which we reviewed. To assess the reliability of the data, we compared EPA data against supporting documents provided by agency officials and determined that the data were sufficiently reliable for our reporting purposes. For one performance goal, EPA could not provide accurate data as of the end of fiscal year 2013, so we used EPA-reported data through September 3, 2013, which we note in the report. We also interviewed OGD management and staff.

To examine the extent to which EPA’s 2016–2020 draft plan follows leading practices for federal strategic planning, we reviewed the draft plan that agency officials provided from November 2014. We then identified leading practices from the Government Performance and Results Act of 1993, as enhanced by the GPRA Modernization Act of 2010 (GPRAMA), Office of Management and Budget (OMB) guidance, and prior GAO work. We have previously reported that strategic planning requirements at the federal department/agency level and practices identified by GAO can also serve as leading practices for planning at lower levels within federal agencies, such as individual programs or initiatives. We identified 17 leading practices related to strategic planning and selected 4 based on their applicability to (1) program-level strategic planning, (2) the content of the plan rather than the planning process, and (3) grants management. Based on these selection criteria, we excluded 9 practices because they overlapped with other practices, excluded 2 practices because they focused on process rather than the plan’s content, and excluded 2 others because they were not relevant to grants management or program-level strategic planning. We then compared the draft plan EPA officials gave us from November 2014 with these 4 selected leading practices. We assessed the extent to which the draft plan followed each of the elements of these four practices, and we interviewed EPA officials involved with the draft plan.

To examine EPA’s progress in monitoring compliance with grants management directives agencywide, we identified 24 management directives that help EPA implement relevant statutes, regulations, and EPA policies and procedures.
Nine of these directives were relevant to the areas where we had previously identified weaknesses, such as ongoing monitoring of grant activities, tracking environmental results, and timely grant closeouts. We selected 7 of those 9 grants management directives. We excluded the other 2 directives because they applied to nonprofit and tribal grant recipients and therefore did not apply to most grant recipients. From these 7 directives, we selected 212 requirements that involved (1) the completion of tasks, (2) the content of tasks, (3) the documentation of tasks, and (4) EPA’s review of tasks. We excluded the remaining directive requirements because they are not the responsibility of OGD and exist outside of EPA’s grants management databases or official grant files. We then compared the 212 requirements to the requirements tracked in EPA’s agencywide grants management systems. Specifically, we examined the requirements in EPA’s Integrated Grants Management System (IGMS) and Grantee Compliance Database and in its web-based State Grant IT Application. We also evaluated the information in the two web-based systems that EPA uses to pull data from IGMS for analysis, Datamart and Quikreports.

We conducted this performance audit from November 2014 to August 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Tables 5 through 9 show the Environmental Protection Agency’s (EPA) progress against the performance goals in its 2009–2013 grants management plan. These tables incorporate the Office of Grants and Debarment’s (OGD) responses to our standard set of questions on its progress against the 17 performance goals in its 2009–2013 grants management plan. We provided these questions to OGD officials to complete, including officials’ explanations of effects, if any, from not meeting these goals and steps EPA took to mitigate the reported effects. We compared OGD responses with supporting documentation provided by agency officials, such as policies, guidance, EPA analyses of its information technology (IT) systems, and other documents.

Tables 10 through 14 show the Environmental Protection Agency’s (EPA) objectives and performance measures in its draft 2016–2020 grants management plan, as of November 2014.

In addition to the individual named above, Michael Hix (Assistant Director), Elizabeth Curda, Ellen Fried, Cindy Gilbert, Thomas James, Jerry Leverich, Benjamin Licht, Gary Mountjoy, Jonathan Munetz, Alison O’Neill, Kiki Theodoropoulos, and Lisa Van Arsdale made key contributions to this report.
In 2014, EPA disbursed about $4.6 billion in grants through its headquarters and 10 regional offices to states and others, in part to implement laws. In 2006, GAO identified weaknesses in EPA’s grants management program, including the absence of goals, and made recommendations to address them. As part of its response to GAO’s 2006 recommendations, EPA issued a 2009–2013 grants management plan.

GAO was asked to follow up on its 2006 review. This report examines (1) the extent to which EPA met the goals in its 2009–2013 plan, (2) the extent to which its draft 2016–2020 plan follows relevant leading practices for strategic grants management planning, and (3) the progress EPA has made since 2006 in monitoring agencywide compliance with grants directives. GAO analyzed EPA’s 2009–2013 plan and obtained EPA officials’ responses to a standard set of questions regarding progress in achieving the goals; compared the draft 2016–2020 plan to four leading strategic planning practices relevant to grants management; compared 212 requirements from relevant grants directives to requirements tracked in EPA’s grants management systems; and interviewed agency officials.

Of the 17 performance goals in its 2009–2013 grants management plan, the Environmental Protection Agency (EPA) fully met 2, partially met 6, and did not meet 1. EPA did not measure its progress for the other 8 goals. EPA officials provided several reasons for meeting relatively few of the performance goals and not measuring the others. For example, according to officials, EPA did not measure progress for some goals because it redirected resources from achieving grants management goals to managing American Recovery and Reinvestment Act of 2009 grants, under which EPA more than doubled its grants in 2009. For 5 goals where EPA either did not meet the goal or did not measure performance, officials reported that there was no impact on the grants management program because EPA took mitigating actions or the negative effect of missing the goal was minimal. However, for 10 goals, GAO found a negative effect of EPA not measuring or partially meeting the goals, including an absence of data on compliance with policies, inefficient processes that increased workload, delayed processes for awarding grants, and delayed training and policy implementation.

As of May 2015, EPA’s draft 2016–2020 grants management plan partially follows four relevant leading practices for federal strategic planning that GAO identified from prior work and Office of Management and Budget (OMB) guidance. Specifically, the draft plan sets 5 strategic goals but has yet to link them to an overarching mission statement, includes strategic objectives but has yet to define strategies to address management challenges or identify resources needed to achieve the goals, ensures leadership accountability for just 1 of the 5 strategic goals, and includes 11 performance measures but so far has only one measurable target. By fully incorporating these leading practices, EPA could have better assurance that it has established an effective framework to guide and assess its efforts to meet its grants management goals and help address long-standing grants management weaknesses.

EPA has made progress monitoring grants management directives agencywide since GAO’s 2006 report. For instance, EPA electronically tracks unspent grant funds and the timely submission of grantee reports. However, two key challenges hamper EPA’s efforts to monitor such directives.
First, 8 out of 10 regional offices use paper files to document compliance with grants management directives, so monitoring these offices’ compliance requires resource-intensive manual file reviews. Second, the limited reporting and analysis capabilities of its IT systems leave EPA without agencywide information for most of the 212 directive requirements GAO reviewed. Although EPA deployed two web-based reporting tools to pull data from its IT system, it uses them to track only 8 percent, or 17, of the 212 grants directive requirements GAO reviewed, making it difficult for managers to compare actual performance to expected results agencywide. EPA plans to fully implement an updated IT system by 2017, but it has had similar plans since 2009 and has not yet done so. By developing ways to more effectively use existing web-based tools until it implements its new IT system, EPA could better monitor compliance with grants management directives agencywide.

GAO recommends, among other things, that EPA fully follow leading strategic planning practices in its draft 2016–2020 plan and develop ways to more effectively use its web-based tools for monitoring compliance with directives. EPA generally agreed with GAO’s findings and recommendations.
The word hunger has several meanings—it can describe, for example, one’s desire for food; the painful sensation or state of weakness caused by the need for food; or famine. While severe hunger—manifesting as clinical malnutrition—is uncommon in this country, millions of children and adults who lack resources go without food, and many are undernourished. The mental and physical changes that accompany inadequate food intake and even minor nutrient deficiencies can have negative effects on learning, development, productivity, physical and psychological health, and family life.

In 1995 USDA’s Economic Research Service—through the nationally representative CPS Food Security Supplement—began tracking the number of households that are uncertain of having or unable to acquire enough food because they lack resources, and it uses the terms low food security and very low food security, not hunger, to describe these households. USDA adopted these terms in response to recommendations by a National Academies panel, which found the term hunger to be inappropriate when describing low-income households that lack enough food, both because of the difficulties in measuring hunger and because hunger has physiological definitions that do not necessarily correspond to nutritional insufficiency. USDA monitors the food security status of U.S. households as part of its responsibility for administering most of the federal government’s food and nutrition assistance programs, many of which are intended to alleviate food insecurity and prevent the physical and psychological outcomes—such as low birth weights, chronic illnesses, and anxiety—associated with being undernourished. To be consistent with USDA, this report uses the terms low food security, very low food security, and food insecure. (See table 1 for definitions of these terms.)

The annual CPS Food Security Supplement collects data on the prevalence and severity of food insecurity by asking one adult in each household a series of questions about experiences and behaviors of household members that indicate food insecurity. The food security status of the household is assessed based on the number of food-insecure conditions reported, such as being unable to afford balanced meals and being hungry because there was too little money for food. Food-insecure households are classified as having either low food security or very low food security (see table 1). In addition, the survey assesses the food security status of households with children.

The federal government has been helping needy individuals and families access food for more than 60 years. The National School Lunch Program, for example, was authorized in 1946 and became one of the first large-scale food and nutrition assistance programs. Other federal programs followed, including the School Breakfast Program (established by the Child Nutrition Act of 1966) and WIC, authorized in 1972. Over time, some programs have changed. For example, according to USDA, an early version of SNAP (formerly the Food Stamp Program) required eligible individuals to pay for a portion of their orange-colored stamps, which they could use for any kind of food. In addition, this early version provided eligible individuals with free blue stamps, equal to half the amount of the orange stamps, to buy designated surplus foods. Today, SNAP recipients receive their benefits on electronic benefit transfer cards and no longer use actual stamps to purchase food.
The federal government currently funds close to 70 programs that are permitted to provide at least some support for domestic food assistance. In our study, we identified the 18 programs that focus primarily on providing food and nutrition assistance to low-income individuals and households. (See table 2.) The 18 programs we studied vary by target population, size, types of benefits, and where these benefits are provided:

Target population. While the 18 programs serve four broad populations—individuals and households, children, the elderly, and special groups—the specific target populations vary across programs. For example, SNAP helps low-income individuals and families; the National School Lunch Program assists school-aged children; the Elderly Nutrition Program serves individuals 60 years of age and older; and WIC provides assistance to low-income, nutritionally at-risk children up to age 5 and pregnant and postpartum women.

Program size. The 18 programs also vary in size, ranging from the Food Distribution Program on Indian Reservations, which serves approximately 90,000 individuals per month, to SNAP, which serves more than 28 million people per month.

Benefit type. In addition, the programs differ by the types of benefits they provide. Some programs—such as SNAP—were designed to help low-income individuals and families obtain a nutritious diet by supplementing their income with cash-like benefits to purchase food, such as meat, dairy products, fruits, and vegetables, but not items such as certain hot foods, tobacco, or alcohol. Other programs provide food directly to program participants. The Emergency Food Assistance Program supplies large quantities of food to governmental or nonprofit organizations to prepare meals for or distribute food to individuals and families. The National School Lunch Program reimburses school districts for the meals served and provides some commodities from USDA to offset the cost of food service. Other programs do not directly provide benefits to individuals. For example, the Community Food Projects Competitive Grants Program provides grants to organizations to plan or implement projects to improve access to food for low-income individuals and families.

Program administration. USDA, DHS, and HHS fund all of the 18 programs through a decentralized service delivery structure of state and local agencies and nonprofit organizations. For example, WIC benefits are typically delivered through state agencies to state and county health departments; the Child and Adult Care Food Program works through state agencies to subsidize child care providers, day care homes, and adult day care facilities; and the Commodity Supplemental Food Program provides food to state agencies, which then distribute the food to local nonprofit organizations that provide it to recipients.

Each federal food and nutrition assistance program has its own set of program goals that were generally established through legislation or regulation. These goals have a mix of underlying purposes, including (1) raising the level of nutrition among low-income households, (2) safeguarding the health and wellbeing of the nation’s children, (3) improving the health of Americans, and (4) strengthening the agricultural economy. (See appendix III for a summary of program goals.) While few have specific goals to reduce or alleviate hunger, most of these programs share an overarching goal of providing individuals access to a nutritionally adequate diet to ensure the health of vulnerable Americans.
In addition, the current administration set a national goal to end childhood hunger in the United States by 2015, and the American Recovery and Reinvestment Act of 2009 (Recovery Act) expanded eligibility guidelines and increased benefits for SNAP, which may help the administration reach that goal.

The prevalence of food insecurity (the percentage of households with low or very low food security) hovered between 10 and 12 percent from 1998 to 2007, before rising to 14.6 percent in 2008, according to USDA's analysis of CPS data. Following a similar pattern, very low food security stayed between 3 and just more than 4 percent from 1998 to 2007, and reached 5.7 percent in 2008. (See figure 1.) USDA recently reported that about 17 million households in the United States (or 14.6 percent of all U.S. households) were food insecure at some point in 2008. Of these food-insecure households, USDA reported that 6.7 million (or 5.7 percent of all U.S. households) had very low food security. (See figure 2.) This increase in food insecurity coincided with the recent economic recession, which began in late 2007 and continued throughout 2008.

Among households with incomes below the poverty line, those headed by single parents, and those headed by minorities, prevalence rates for food insecurity were higher than the national average rate of 14.6 percent. (See figure 3.) According to USDA's analysis of the food security data, about 42 percent of households with incomes below the poverty line were food insecure in 2008. High levels of food insecurity were also found among single-parent households with children; for example, about 37 percent of households with children headed by single women were food insecure, and about 28 percent of households with children headed by single men were food insecure. In contrast, among married couples with children, 14.3 percent of households were food insecure. High levels of food insecurity were also found among households headed by minorities; for example, among households headed by Hispanics, nearly 27 percent were food insecure.

Regardless of adults' marital status, the prevalence of food insecurity was almost twice as high among households with children (21 percent) as among households without children (11.3 percent). In many families—just under half of the roughly 8.3 million food-insecure households with children—parents were able to maintain normal or near-normal diets and meal schedules for their children, limiting the effects of food insecurity to the adults alone. However, in more than 4.3 million of these households, children—as well as adults—experienced food insecurity sometime during the year. Among households where children experienced food insecurity, most indicated low (but not very low) food security among children, reporting mainly reductions in the quality and variety of children's meals. Of the households with children, just more than 1 percent (about 506,000 households) had very low food security among children—food insecurity so severe that children's eating patterns were disrupted and food intake was reduced below levels that caregivers considered sufficient.
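As a quick arithmetic check on how the household counts and percentages above fit together, dividing each count by its reported rate should recover roughly the same total number of U.S. households. This is our own consistency check, not a USDA computation:

# Back-of-the-envelope consistency check on the 2008 prevalence figures.
food_insecure_households = 17.0e6   # households with low or very low food security
very_low_households = 6.7e6         # subset with very low food security

implied_total_1 = food_insecure_households / 0.146   # from the 14.6 percent rate
implied_total_2 = very_low_households / 0.057        # from the 5.7 percent rate

print(f"Implied U.S. households: {implied_total_1/1e6:.1f}M and {implied_total_2/1e6:.1f}M")
# Both work out to roughly 116-118 million households, so the counts and
# percentages are mutually consistent after rounding.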
The federal government spent approximately $62.7 billion on 18 domestic food and nutrition assistance programs in fiscal year 2008, with the 5 largest programs accounting for 95 percent of total spending. Programs' spending amounts ranged from approximately $4 million on the Community Food Projects Competitive Grants Program to more than $37 billion on SNAP. (See table 3.) Spending on food assistance programs is often determined by both the value of the benefits and the number of program participants. In 2008, for example, approximately 28.4 million people (12.7 million households) participated in SNAP per month, with each individual receiving an average of about $101.50 per month. In contrast, approximately 2.2 million individuals participated in the WIC Farmers' Market Nutrition Program, with each participant receiving a benefit between $10 and $30 for the year.

In fiscal year 2008, the five largest food assistance programs—SNAP, the National School Lunch Program, WIC, the Child and Adult Care Food Program, and the School Breakfast Program—accounted for 95 percent of total spending on the 18 programs. SNAP, the largest program, accounted for more than 60 percent of the overall spending total. (See figure 4.) Compared with the other 13 programs, the largest five food assistance programs have relatively high numbers of participants, and all but WIC are entitlement programs—meaning that, by law, they must provide benefits to all individuals or households that meet eligibility requirements and apply for the program. This means that participation and benefits for these programs are not capped, unlike programs that are appropriated specific spending amounts, such as the Commodity Supplemental Food Program or the Elderly Nutrition Program.

Since 1995 SNAP spending has fluctuated, while spending on the other large programs—the National School Lunch Program, WIC, the School Breakfast Program, and the Child and Adult Care Food Program—remained relatively stable. Between 1995 and 2000, the amount the federal government spent on SNAP declined by 37.4 percent, from $34.9 billion to $21.8 billion. However, between fiscal years 2001 and 2007, SNAP spending rose to $34.5 billion, nearly matching its 1995 level. In fiscal year 2008, spending on SNAP totaled $37.6 billion—a sharp increase of 9 percent in one year. In contrast, spending on the other large programs was relatively stable from 1995 through 2000, and most increased slightly between 2001 and 2008; however, WIC had an increase of 11 percent between 2007 and 2008. Overall, when adjusted for inflation, the federal government spent 14 percent more on the largest five programs in fiscal year 2008 than it did on those five programs in fiscal year 1995. (See figure 5.)
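To illustrate how participation and benefit levels combine to drive spending, here is a rough back-of-the-envelope computation using the fiscal year 2008 SNAP figures cited above; it is our illustration, not an official reconciliation of program accounts:

# Rough check of how participation and benefit levels drive SNAP spending,
# using the fiscal year 2008 figures cited above.
participants_per_month = 28.4e6   # average monthly SNAP participants
avg_monthly_benefit = 101.50      # average benefit per person per month

annual_benefits = participants_per_month * avg_monthly_benefit * 12
print(f"Implied annual benefits: ${annual_benefits/1e9:.1f} billion")
# Roughly $34.6 billion in benefits alone; the reported $37.6 billion total
# also covers administrative and other program costs, so the figures line up.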
Economic conditions—such as unemployment or poverty—affect spending on food assistance programs. Because the five largest programs serve all or nearly all eligible individuals who apply, increases in poverty that occur during economic downturns can lead to increases in program participation and, consequently, increases in program spending. Of the five large programs, SNAP, which serves the largest population, is particularly responsive to economic changes. For example, changes in SNAP spending between 1995 and 2008 generally tracked the percentage of people who were unemployed, and spending changes were significantly correlated with the percentage of people living in poverty during those times (see figure 6). Consequently, the recent economic recession contributed to the demand for and spending on SNAP. USDA reported that SNAP participation nationwide increased in almost every month between December 2007, when the recession began, and September 2009, the last month for which information is available. Between June 2008 and June 2009, SNAP participation increased by just over 22 percent nationwide. Spending on SNAP during the same period increased by nearly 49 percent, due in part to increases in both participation and benefit rates. Congress anticipates a continued expansion in SNAP spending: USDA's 2010 appropriation includes approximately $58.3 billion for SNAP, a 55 percent increase compared with fiscal year 2008 spending.

State officials and local providers we spoke with also reported significant increases in the demand for federal food assistance during challenging economic conditions, and some found the recent influx of federal funds crucial in meeting that demand. Oregon state officials told us in June 2009 that SNAP applications statewide had increased by more than 40 percent during the previous year. Also, food bank officials in Texas told us that, as of June 2008, demand for services at their member food banks—supported by The Emergency Food Assistance Program—had increased by 30 percent over the previous year. According to these officials, the additional funding that the program received from the Food, Conservation, and Energy Act of 2008 (2008 Farm Bill) and the Recovery Act was critical in keeping up with this demand.

The Recovery Act alone provided more than $21 billion for food assistance programs. These funds included an estimated $20.1 billion for SNAP, according to USDA, in the form of increased benefits and state administrative expenses; $500 million for WIC; $100 million for equipment assistance for child nutrition programs; $150 million for The Emergency Food Assistance Program; $100 million for the Emergency Food and Shelter National Board Program; and $100 million for the Elderly Nutrition Program and Grants to American Indian, Alaska Native, and Native Hawaiian Organizations for Nutrition and Supportive Services.

In addition to economic conditions, other factors—such as natural disasters, food costs, and outreach—can affect changes in program spending over time. According to USDA, SNAP showed an increase in spending in the fall of 2005 because of the additional assistance the program provided to hurricane victims, mostly in the Gulf Coast states. Similarly, USDA attributed some of the increase in SNAP participation during 2008 to the effects of Hurricane Gustav. Rising food costs can be another driver of increased spending on some federal food assistance programs, particularly WIC, which provides specific foods to women and their infants and young children. USDA's Economic Research Service reported an increase of 12 percent in per person food costs for WIC between fiscal years 2007 and 2008, noting rising food costs as a major factor in increased WIC spending during that time. SNAP and the National School Lunch Program also make periodic adjustments in their benefit or reimbursement amounts based on the cost of food. Also, federal efforts beginning in 2001 to expand the proportion of eligible households participating in SNAP likely contributed to increases in participation and spending. These efforts included simplifying state SNAP eligibility and application processes and improving access to SNAP for eligible applicants.
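The Recovery Act component amounts cited above can be totaled to confirm the "more than $21 billion" figure. A small sketch (our arithmetic, with labels paraphrased from the list above):

# The Recovery Act food assistance amounts cited above, in billions of dollars.
recovery_act_funds = {
    "SNAP (benefits and state administration)": 20.1,
    "WIC": 0.5,
    "Child nutrition equipment assistance": 0.1,
    "The Emergency Food Assistance Program": 0.15,
    "Emergency Food and Shelter National Board Program": 0.1,
    "Elderly Nutrition Program and Native American nutrition grants": 0.1,
}
total = sum(recovery_act_funds.values())
print(f"Total: ${total:.2f} billion")   # $21.05 billion, i.e., "more than $21 billion"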
Research suggests that participation in seven of the programs we reviewed, including four of the five largest—WIC, the National School Lunch Program, the School Breakfast Program, and SNAP—is associated with positive health and nutrition outcomes consistent with most of these programs' goals, including raising the level of nutrition among low-income households, safeguarding the health and well-being of the nation's children, improving the health of Americans, and strengthening the agricultural economy (see appendix III for a summary of program goals).

WIC. Research generally suggests that participation in the WIC program is associated with positive outcomes related to all three of its program goals. For example, studies indicate that WIC has had several positive effects related to its goal of improving the mental and physical health of low-income pregnant, postpartum, and breastfeeding women, infants, and young children. Specifically, research suggests that WIC has some positive effects on individual dietary and nutrient intake, mean birth weight, general health status of infants and children, and the likelihood that children will receive complete and timely immunization, among other outcomes. One study also found that WIC participation was associated with reduced rates of child abuse and neglect. With regard to WIC's goal of preventing the occurrence of health problems and improving the health status of the target population, some research suggests that WIC reduces anemia and other nutritional deficiencies, improves the diet quality and food use of households, and may even slightly increase the rates at which pregnant women quit smoking. Research on some of the other outcomes related to WIC's goals is less conclusive. For example, findings are mixed on whether participation in the program increases the initiation or duration of breastfeeding or improves cognitive development and behavior of participants—outcomes that are related to WIC's goals of improving the mental and physical health of recipients and preventing the occurrence of health problems and improving the health status of recipients.

The National School Lunch and School Breakfast programs. Research suggests that both the National School Lunch and the School Breakfast programs have had some positive effects on health and nutrition outcomes related to their goals of (1) safeguarding the health and well-being of children and (2) encouraging the domestic consumption of agricultural and other foods. Related to the goal of safeguarding the health and well-being of children, research shows that both programs increase the dietary and nutrient intakes of participating students. For example, research finds that the School Breakfast Program improves students' scores on a Healthy Eating Index and reduces the probability that students will have low fiber, iron, and potassium intake and low serum levels of vitamins C and E and folate. Also, research suggests that the National School Lunch Program increases the frequency of eating lunch among participants. However, research produced conflicting results on the School Breakfast Program's effects on other outcomes related to this goal, such as whether the program increases the frequency with which students eat breakfast. An evaluation of the School Breakfast Pilot Program, which, unlike the traditional School Breakfast Program, provided universal free meals, found no effect on general measures of health or cognitive development.
The same study examining the School Breakfast Pilot Program found that the program had a small negative effect on student behavior (as rated by teachers). Similarly, there is conflicting and inconclusive evidence on the National School Lunch Program's effects on other outcomes related to the goal of safeguarding the health and well-being of children, such as childhood obesity. In addition, research finds that the National School Lunch Program has no effect on children's cognitive development, behavior, or iron status. Related to their other similar goal, some evidence suggests that the School Breakfast and the National School Lunch programs encourage the domestic consumption of agricultural and other foods. A 2003 report by USDA's Economic Research Service found that through additional food consumption, school nutrition programs—of which the National School Lunch and School Breakfast programs are the largest—increased food expenditures by an additional $1.9 billion, increased farm production by just more than $1 billion, increased labor earnings and returns to farm ownership by $318 million, and supported approximately 9,200 additional farm jobs.

SNAP. The literature also suggests that participation in SNAP, the largest of the federal food and nutrition programs, is associated with positive effects on outcomes related to many of its goals. According to the research, participation in SNAP has several positive outcomes related to the program's goals of raising the level of nutrition and increasing the food purchasing power of low-income households. For example, participation in SNAP has been found to increase household food expenditures, increase the availability of nutrients to the household, and, as some research has found, reduce anemia and other nutritional deficiencies. Increasing household food expenditures is also related to SNAP's goal of strengthening the U.S. agricultural economy. However, the literature is inconclusive regarding whether SNAP alleviates hunger and malnutrition in low-income households, another program goal. While studies show the program increases household food expenditures and the nutrients available to the household, research finds little or no effect on the dietary or nutrient intake of individuals. The Economic Research Service cites several reasons why, despite increasing household nutrient availability, SNAP may not affect individual dietary and nutrient intakes. For example, all household members might not share equally in the consumption of additional nutrients made available by SNAP benefits, some food may be wasted or consumed by guests, and some household members might consume food from other "nonhome" sources. In addition, the availability of more food in the house does not guarantee that individuals eat a healthier diet.

Additional programs. The literature also suggests that participation in three of the smaller programs—the Elderly Nutrition Program: Home-Delivered and Congregate Nutrition Services; Nutrition Assistance for Puerto Rico; and the Special Milk Program—is associated with positive outcomes related to their program goals. The research on the Elderly Nutrition Program: Home-Delivered and Congregate Nutrition Services directly addresses two of the program's goals. Studies found that the program increases socialization and may have a positive effect on food security.
In addition, research suggests the program improves participants' dietary and nutrient intake—an outcome related to the program's goal of promoting the health and well-being of older individuals by assisting such individuals to gain access to nutrition and other disease prevention and health promotion services to delay the onset of adverse health conditions resulting from poor nutritional health or sedentary behavior. However, the research does not provide enough evidence to assess the program's effects on other goal-related outcomes, such as nutritional status. Research on Nutrition Assistance for Puerto Rico and the Special Milk Program is somewhat limited and dated. However, studies on Nutrition Assistance for Puerto Rico suggest that participation in the program increases household access to a variety of nutrients—an outcome related to its goal of funding nutrition assistance programs for needy people. Research also shows that participation in the Special Milk Program has positive effects, including increasing children's intake of vitamins and minerals found in milk.

In addition to the programs' individual goals, USDA has a broad outcome measure to reduce and prevent hunger by improving access to federal nutrition programs, but studies show that programs' effectiveness in achieving this outcome is mixed. Some research found that the National School Lunch Program has a positive effect on the food security status of families with children who participate in the program. For example, one study found that among households with children that experienced hunger during the previous year, those that participated in the National School Lunch Program were more likely to be food secure during the month before they were surveyed than those that did not participate. Some studies also found that SNAP positively affects food security. A recent paper released by USDA's Economic Research Service found that households' food security deteriorated during the seven to eight months before they entered SNAP and improved after the households began receiving SNAP benefits, suggesting that SNAP reduced the prevalence of very low food security. A second study found that while simply participating in SNAP did not reduce the odds of being food insecure, the level of benefits received did—every additional $10 in SNAP benefits was associated with a 12 percent reduction in the odds of a household being food insecure. However, other research findings differ on whether SNAP and other programs increase food security. For example, one study found that food security more often worsened than improved for households that began receiving SNAP benefits in 2001 and 2002; conversely, as households left the program, their food security status more often improved than worsened. Similarly, research is not conclusive regarding WIC's success in increasing food security for participants, and research did not produce clear results on whether the School Breakfast Program improved participants' food security.
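To clarify the odds-based finding above (a 12 percent reduction in the odds of food insecurity per additional $10 in monthly benefits), the sketch below converts odds to probabilities. The 30 percent baseline probability is our assumption, chosen for illustration; it is not a figure from the study.

# Illustration of the odds interpretation in the study cited above: each
# additional $10 in monthly SNAP benefits multiplies the odds of food
# insecurity by (1 - 0.12) = 0.88.
def prob_after_benefit_increase(baseline_prob: float, extra_tens: int) -> float:
    odds = baseline_prob / (1 - baseline_prob)   # convert probability to odds
    odds *= 0.88 ** extra_tens                   # apply the per-$10 odds ratio
    return odds / (1 + odds)                     # convert back to probability

for tens in (0, 1, 5, 10):
    p = prob_after_benefit_increase(0.30, tens)
    print(f"+${10*tens:>3} per month -> P(food insecure) = {p:.1%}")
# With a 30 percent baseline, an extra $50 per month implies roughly an
# 18 percent probability; an extra $100 implies roughly 11 percent.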
According to USDA and academic researchers, there are several reasons why participation in food assistance programs may not be clearly associated with improvements in food security. Some programs focus more on improving or safeguarding the health of participants, and the approaches these programs use may not be as effective in reducing food insecurity. For example, the WIC program provides a relatively small, but highly targeted, food package consisting of high-nutrient foods to address common nutritional deficiencies, an approach that may have only a small impact on the food security of recipients. Other programs may improve food security, but their impact may be difficult to measure because economic trends—such as changes in poverty and unemployment rates and changes in other assistance received by households—also affect food security. In addition, those who choose to participate in food assistance programs generally have greater difficulty meeting their food needs and tend to be more food insecure than others who are eligible for programs but do not participate.

Little is known about the effectiveness of the remaining 11 programs because they have not been well studied. We found only one study that measured the impact of the Summer Food Service Program on outcomes related to its goals. Similarly, only one study of the Child and Adult Care Food Program compared facilities that participate in the program with those that do not. While these studies had generally positive results, more research would be needed to draw conclusions about the outcomes of the programs they studied. For other programs, we identified no academic literature that addressed outcomes related to their goals. For example, the only study we reviewed that evaluated the effects of the Commodity Supplemental Food Program had findings that were not directly related to the program's goal of providing food to help meet the nutritional needs of the target population. Table 4 summarizes the level of research we found on each program.

One government evaluation—the Program Assessment Rating Tool (PART) developed by the Office of Management and Budget—provides some additional information on the effectiveness of 7 of the 11 less-studied programs. Four of these seven programs—the WIC Farmers' Market Nutrition Program, the Seniors Farmers' Market Nutrition Program, The Emergency Food Assistance Program, and the Commodity Supplemental Food Program—received ratings of "results not demonstrated." The Summer Food Service Program was rated as "moderately effective." Both the Child and Adult Care Food Program and the Food Distribution Program on Indian Reservations received ratings of "adequate." The other four programs for which limited academic research was identified have not been evaluated. (See table 5.) It is important to note that PART rates programs on their purpose and design, strategic planning, program management, and program results and accountability rather than looking at specific outcomes as the academic literature generally does. Therefore, PART's ratings do not provide the same type of assessment of program effectiveness as, and are not directly comparable to, the findings from academic research.

Additionally, agency data show that the 11 less-studied programs provide food and nutrition assistance to millions of individuals and households each year—an outcome related to their goals; however, this alone does not demonstrate the overall effectiveness of these programs. One of the goals of the Summer Food Service Program is to provide food to children from needy areas during periods when schools are closed. USDA data show that this program served an average of more than 2.1 million children a day during July of 2008 and provided almost 130 million meals to children during the course of that fiscal year.
In addition, a goal of the Child and Adult Care Food Program is to enable nonresidential institutions to provide nutritious food service to program participants. According to USDA, approximately 3.1 million children received free meals or snacks each day in fiscal year 2008 in child care centers or day care homes through this program. Smaller programs also provide benefits to millions of individuals and households. For example, in fiscal year 2008, the WIC Farmers' Market Nutrition Program provided coupons to help about 2.2 million participants purchase fresh produce—an outcome related to the program's goal of providing fresh, nutritious, unprepared foods from farmers' markets to women, infants, and children at nutritional risk. In that same year, The Emergency Food Assistance Program distributed approximately 337 million pounds of food to hunger relief organizations, such as food banks and soup kitchens, and the Federal Emergency Management Agency's Emergency Food and Shelter National Board Program served more than 73 million meals to needy individuals and families. Both of these programs have goals related to providing food assistance to needy individuals through eligible organizations. Although these programs provide food to their target populations, this alone is too little information to assess their overall effectiveness.

Federal food assistance is provided through a decentralized system that involves multiple federal, state, and local providers and covers 18 different programs. Three federal agencies, numerous state government agencies, and many different types of local providers—including county government agencies and private nonprofit organizations—play a role in providing federal food assistance, but the decentralized network of federal, state, and local entities can be complex. Figure 8 illustrates how the federal food assistance programs are administered through a decentralized network of state offices and local providers in Texas—an organizational structure we found less complicated than those of some of the other states we visited.

The federal response to food insecurity and the decentralized network of programs developed to address it emerged piecemeal over many decades to meet a variety of needs. For example, according to USDA, an early food stamp program created during the Great Depression was designed to help relieve agricultural surpluses by providing food to needy individuals and households. This early food stamp program, like SNAP, was generally available to most needy households with limited income and assets and not targeted to a specific subgroup; also like SNAP, it was not intended to meet a household's full nutritional needs. Over time, when it became evident that despite the availability of food stamps certain vulnerable populations continued to experience nutritional risk, additional programs were developed to meet those needs. The origin of WIC, for example, dates back to the 1960s, when a White House Conference on Food, Nutrition, and Health recommended that special attention be given to the nutritional needs of low-income pregnant women and preschool children based on the premise that early nutrition intervention can improve the health of children and prevent health problems later in life.
The Emergency Food Assistance Program—authorized in 1983—was created to use excess federal food inventories and help states with storage costs while assisting the needy, and the Emergency Food and Shelter National Board Program—administered by the Federal Emergency Management Agency—was established in the 1980s to provide assistance to the homeless.

By targeting various needs, the 18 food assistance programs help increase access to food for vulnerable populations, according to several agency officials and local providers we spoke with. Some officials and providers told us that individuals in need of food assistance have different comfort levels with different types of assistance and delivery mechanisms, and the diversity of food assistance programs can help ensure that low-income individuals and households who need assistance have access to at least one program. For example, some individuals in need of assistance prefer to pick up a bag of groceries from a food bank rather than having to complete the application and eligibility procedures necessary to receive SNAP benefits. Others, such as those in rural areas, may find it easier to receive food assistance through commodities from the Commodity Supplemental Food Program or other programs, as a lack of local grocery stores can make it difficult to use SNAP benefits. Several officials said that the availability of multiple programs provided at different locations within a community can also increase the likelihood that eligible individuals seeking benefits from one program will be referred to other appropriate programs. In addition, several officials and providers told us that since no one program alone is intended to meet a household's full nutritional needs, the variety of food assistance programs offers eligible individuals and households different types of assistance and can help households fill the gaps and address the specific needs of individual members. For example, a single parent with a low-paying job may rely on SNAP for her basic groceries, the National School Lunch Program to feed her child at school, and WIC to provide high-nutrient supplemental foods for herself and her infant.

While the federal government's food assistance structure allows households to receive assistance from more than one program at a time, USDA data indicate that a small portion of food-insecure households received assistance from more than one of the primary food assistance programs. According to USDA, only about 3 percent of food-insecure, low-income households participated in all three of the largest programs—SNAP, the National School Lunch Program, and WIC. Additionally, 12 percent participated in both SNAP and the National School Lunch Program, about 15 percent participated in only SNAP, and another 15 percent participated in only the National School Lunch Program (see figure 7). USDA reported that some food-insecure households also received other types of food assistance, such as through food pantries and soup kitchens.

The federal food assistance structure—with its 18 programs—shows signs of program overlap, which can create unnecessary work, waste administrative resources, and breed inefficiency. Program overlap occurs when multiple programs provide comparable benefits to similar target populations—a situation that is not uncommon among programs administered by multiple agencies and local providers.
GAO’s previous work has shown that overlap among programs can create an environment in which participants are not served as efficiently and effectively as possible. Additionally, program overlap can create the potential for unnecessary duplication of efforts for administering agencies, local providers, and individuals seeking assistance. Such duplication can waste administrative resources and confuse those seeking services. During our site visits, we found ways in which overlap among the 18 food assistance programs may be creating unnecessary work for providers and applicants and may be using more administrative resources than needed. The following examples came from selected states and the degree of overlap across programs may vary from state to state. However, the scope of this report did not allow us to gather enough information to discuss the level of overlap or extent of administrative efficiencies among food assistance programs on a national level. Some programs provide comparable benefits to similar population and are managed separately—a potentially inefficient use of federal funds. While the programs in this study do not exactly duplicate each others’ services, some provide comparable benefits to similar target populations—this may be in part because they were created separately to meet various needs. For example, six programs—the National School Lunch Program, the School Breakfast Program, the Fresh Fruit and Vegetable Program, the Summer Food Service Program, the Special Milk Program, and the Child and Adult Care Food Program—all provide food to eligible children in settings outside the home, such as at school, day care, or summer day camps. Also, the Commodity Supplemental Food Program provides food to the elderly and to women, infants, and children up to age six. These populations are targeted by other programs as well. The Elderly Nutrition Program primarily serves individuals 60 years and older and WIC serves pregnant and postpartum women and children up to age five. In addition, individuals eligible for groceries through the Commodity Supplemental Food Program are generally eligible for groceries through The Emergency Food Assistance Program and for SNAP. The Federal Emergency Management Agency’s Emergency Food and Shelter National Board Program and USDA’s Emergency Food Assistance Program both provide groceries and prepared meals to needy individuals through local government and nonprofit entities. As another example, the Summer Food Service Program has similarities to the Summer Seamless Option of the National School Lunch Program. However, the two programs have different reporting requirements and reimbursement rates and, as an official explained, this difference made his school choose between the Summer Food Service Program’s higher reimbursement rate and the Seamless Summer Option’s fewer reporting requirements. GAO has found that program overlap—having multiple programs provide comparable benefits to similar target populations—is an inefficient use of federal funds. Like other social service programs, most food assistance programs have specific and often complex administrative procedures that federal, state, and local organizations follow to help manage each program’s resources and provide assistance. Government agencies and local organizations dedicate staff time and resources to separately manage the programs even when a number of the programs are providing comparable benefits to similar groups and could potentially be consolidated. 
Previous GAO work indicates that combining programs could reduce administrative expenses by eliminating duplicative efforts, such as eligibility determination and data reporting. However, some officials and providers expressed concern that such consolidation would make it more difficult to serve people in need and easier to reduce funds specifically dedicated to providing food assistance. Consolidating to improve program efficiency presents other tradeoffs as well. Most of the 18 programs, including the small programs, were designed to target assistance to specific populations or meet the specific needs of certain populations. Efforts to reduce overlap could detract from the goals of some of the programs. For example, programs focused on improving the nutritional status of participants may use a different approach than programs focused on reducing food insecurity, even if both programs are available to the same or similar target groups, and efforts to reduce overlap could make it difficult to achieve both goals.

Overlapping eligibility requirements create duplicative work for providers and applicants. According to previous GAO work and the officials we spoke with, overlapping program rules related to determining eligibility often require local providers to collect similar information—such as an applicant's income and household size—multiple times because this information is difficult to share, partly because of concerns about safeguarding individuals' confidentiality and partly because of incompatible data systems across programs. In addition, some of these rules often require applicants who seek assistance from multiple programs to submit separate applications for each program and provide similar information verifying, for example, household income. Some local providers and state officials told us that families with the greatest needs often access multiple programs in an attempt to ensure they have enough food to eat. The application process is made even more challenging for families when the programs are physically housed in a wide range of government agencies or nonprofit organizations within the community.

USDA has taken steps to address some of these inefficiencies. To align eligibility procedures and encourage participation, especially among its largest programs, USDA has policies in place that often make it simpler for recipients of one program to receive benefits in another. For example, evidence of SNAP participation is one way for a mother to show that her income is low enough to qualify for WIC. USDA also has instituted direct certification for its child nutrition programs, including the National School Lunch Program, the School Breakfast Program, and the Child and Adult Care Food Program. Direct certification allows state SNAP offices to share their local enrollment lists with school districts so that children in households receiving SNAP can automatically be determined eligible to receive free school meals without having to complete a separate application. Education officials we talked with who have established direct certification with SNAP believe that it reduces work for both the school districts and the families. However, the process to directly certify eligible school-aged children is not always effective.
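In essence, direct certification is a record-matching exercise between two data systems. The sketch below shows the idea in Python; the field names and the simple name-plus-birth-date match key are our assumptions, since real matching systems use more robust identifiers and fuzzy matching:

# A minimal sketch of the record matching behind direct certification: a
# school district compares its student roster against the state's SNAP
# enrollment list so that matched children are certified for free meals
# without a separate application.
snap_enrollment = {("ana lopez", "2001-04-12"), ("sam hill", "2000-09-30")}

student_roster = [
    {"name": "Ana Lopez", "dob": "2001-04-12"},
    {"name": "Sam Hill",  "dob": "2000-09-30"},
    {"name": "Joy Chen",  "dob": "2002-01-15"},   # not matched; family must apply
]

def directly_certified(student: dict) -> bool:
    key = (student["name"].lower(), student["dob"])
    return key in snap_enrollment

certified = [s["name"] for s in student_roster if directly_certified(s)]
print(certified)   # ['Ana Lopez', 'Sam Hill']

Matched children are certified automatically; families of unmatched children must still apply separately, which is where the gaps discussed next arise.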
USDA has estimated that 10 million children were eligible for direct certification at the start of the 2008-2009 school year, but only 6.5 million were directly certified. Consequently, the families of approximately 2 million children completed and submitted two similar applications: one for SNAP and one for free or reduced-price school meals. Further, as many as 1.5 million children may not be receiving free school meals because they were not automatically enrolled through direct certification and their parents or guardians did not apply.

USDA has also taken steps to coordinate programs—including those related to nutrition education—within the Food and Nutrition Service as well as across state agencies and local providers. In 2003 USDA initiated State Nutrition Action Plans in part to advance cross-program integration among the nutrition education components of the federal food assistance programs at the state level. Through this process, state teams identify a common goal and formulate a plan for working together across programs to achieve that goal. In 2004, soon after USDA initiated efforts to integrate its nutrition education programs, GAO reviewed USDA's nutrition education programs and identified challenges related to program overlap. For example, GAO found that while nutrition education programs share similar target populations and nutrition education goals, they lacked strong coordination, which can result in, among other things, inefficient use of resources. In addition, GAO found that the programs' different administrative structures hindered coordination among nutrition education efforts. In response to this 2004 report, USDA made a number of efforts to improve coordination among its nutrition education programs and strengthen linkages among them. For example, USDA established Nutrition.gov, a Web site that provides a variety of information on nutrition education and describes USDA's food assistance programs. USDA has also taken a number of steps to systematically collect reliable data and identify and disseminate lessons learned for its nutrition education efforts.

USDA has also worked to increase coordination across program services by permitting its regional offices to retain a small percentage of WIC funds—known as WIC operational adjustment funds—to support regional priorities, including, for example, coordinating food assistance programs at the state and local levels. One such local coordination effort, made possible through WIC operational adjustment funds, was in Alameda County, California, where a group of local providers meets regularly to discuss ways to coordinate their food assistance programs. Among this group's accomplishments is a pamphlet that provides information in both English and Spanish on how to access services through the several federal food assistance programs in their community, such as SNAP benefits, WIC services, school meals for children, and emergency food services offered through the local food bank (see figure 9). This group is also actively pursuing funding to pilot a universal application (sketched below) so that individuals interested in applying for multiple food assistance programs can complete one application instead of several. During our visits to rural areas of California and Maryland, we learned that local coordination efforts were less structured and based more on personal connections among program officials and between service providers.
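To illustrate what the universal application mentioned above could enable, the sketch below applies a shared income pre-screen across several programs. The 130 and 185 percent-of-poverty thresholds reflect published federal income guidelines (SNAP gross income and free school meals at or below 130 percent; WIC and reduced-price school meals at or below 185 percent), but actual eligibility involves many more factors, such as net income, assets, and categorical eligibility, so treat the logic as illustrative only; the function and example figures are ours:

def prescreen(income: float, poverty_line: float) -> list[str]:
    """Flag programs a household likely qualifies for, from one income entry."""
    pct = income / poverty_line * 100
    likely = []
    if pct <= 130:
        likely += ["SNAP", "free school meals"]
    if pct <= 185:
        likely += ["WIC", "reduced-price school meals"]
    return likely

# One application's income data feeds every program's screen at once.
print(prescreen(income=22_000, poverty_line=21_200))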
Throughout our site visits, some state officials and local providers told us they would like to see the federal government do more to coordinate its food and nutrition assistance programs. For example, a director of a nongovernmental organization (NGO) that provides food assistance through the Elderly Nutrition Program and the Emergency Food and Shelter National Board Program explained that he is not always clear about what federal food assistance programs are available to NGOs or which ones are best suited for his organization's mission and resources. The NGO director suggested that federal agencies work together to build a Web site that identifies the various food assistance programs and provides information—such as programs' eligibility, administrative, and funding requirements—to help local providers determine if their NGOs have the right type of mission and sufficient personnel and funding to provide assistance funded by certain federal programs. According to this local provider, having consolidated information on all the food assistance programs would help organizations determine which federal food assistance program best matches their mission and resource capacity.

The federal government spends billions of dollars every year to support a food assistance structure that, while critical to addressing some of the most basic needs facing the nation's most vulnerable individuals, shows signs of potential overlap and inefficiency among its programs. With the growing rate of food insecurity among U.S. households and significant pressures on the federal budget, it is important to understand not only the extent to which food assistance programs complement one another to better meet program goals but also the extent to which program services and administrative requirements may overlap and create duplication that adversely affects program effectiveness and efficiency. While research indicates that the largest programs have positive outcomes consistent with their program goals, limited research on most of the smaller programs makes it difficult to determine whether these programs are filling an important gap or unnecessarily duplicating the functions and services of other programs. It is only by looking more closely at the goals, benefits, and target populations of the many smaller programs that the federal government can begin to develop methods to help reduce inefficiencies and save administrative resources while at the same time ensuring that those who are eligible receive the assistance they need. Furthermore, for the programs that have complementary goals, functions, and services, there may be ways to more efficiently fulfill administrative requirements and processes. Small changes to increase administrative efficiencies, such as additional efforts to align application procedures, could be made in the near term; however, larger changes involving program duplication will require careful attention to the potential effects on those currently receiving assistance. Without such efforts, resources may be wasted or those in need may not be able to access enough food for a healthy, productive life.

We recommend that the Secretary of Agriculture, as the principal administrator of the federal government's food assistance programs, identify and develop methods for addressing potential inefficiencies among food assistance programs and reducing unnecessary overlap among the smaller programs while ensuring that those who are eligible receive the assistance they need.
Approaches may include conducting a study; convening a group of experts (consistent with the Federal Advisory Committee Act), including, for example, representatives of the 18 food assistance programs, state representatives, and local providers; considering which of the lesser-studied programs need further research; or piloting proposed changes. Recommendations from further study could be used by administering agencies or, if appropriate, by Congress to improve the federal government's food assistance system.

We shared a draft of this report with USDA, HHS, and DHS for review and comment. The following summarizes the response from each agency.

On March 10, 2010, USDA provided informal comments via e-mail. USDA stated that our analysis was thoughtful and objective. However, the agency expressed concern that our discussion of the overlap and duplication of nutrition assistance programs in the body of the report may be overlooked by readers who focus on the summary and conclusion. USDA emphasized that no single nutrition assistance program is designed to meet all of a family's nutrition needs and that participation in one or more of the largest nutrition assistance programs does not guarantee food security. Additionally, USDA noted that while programs may appear similar in terms of the general demographic characteristics of their target populations, they vary with respect to how well they fit the needs of different subgroups, and no single program attracts or serves everyone in its respective target audience. For example, some individuals—like the homeless or elderly—may find it difficult to prepare their own meals and instead need already prepared meals, such as those provided by The Emergency Food Assistance Program and the Child and Adult Care Food Program. The agency also emphasized that fundamental change to improve program efficiency requires legislation that facilitates program integration. USDA concluded by stating that it will consider the value of a study to examine potential inefficiencies and overlap among smaller programs. However, the agency explained that it has generally focused research efforts on large programs as the most cost-effective use of the limited dollars available. We should note that our recommendation includes the need to address unnecessary overlap and duplication among smaller programs but also refers to the need to identify and develop methods for addressing potential inefficiencies among food assistance programs overall, which would include the larger programs that have complementary goals but often have separate administrative systems and eligibility requirements. USDA also expressed concern that in the absence of a specific appropriation for such a study, any allocation of resources to this effort would shift resources away from other projects and priorities. We believe that conducting a study is one possible method for addressing potential inefficiencies and reducing overlap among smaller programs. Other approaches—such as convening a group of experts—may be as effective and require fewer resources.

HHS agreed with the report's finding that the Elderly Nutrition programs directly address program goals. In addition, HHS agreed that federal programs should aim to achieve the greatest efficiency, effectiveness, and reduction of duplication and overlap. The agency stated its view that the Older Americans Act Nutrition Services programs complement, not duplicate, USDA's food and nutrition assistance programs. HHS's written comments appear in appendix IV.
DHS's Federal Emergency Management Agency provided technical comments, most of which provided clarification and were incorporated in the report where appropriate.

We are sending copies of this report to relevant congressional committees; the Secretaries of Agriculture, Health and Human Services, and Homeland Security; and other interested parties. The report also will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7215 or brownke@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix V.

In selecting programs for this review, we defined the scope to include only federal programs that focus primarily on providing or supporting food and nutrition assistance in the United States. We identified these programs by reviewing the Catalog of Federal Domestic Assistance (CFDA), relevant federal laws and regulations, and relevant documents. We also met with federal officials and relevant experts. Using key words related to food and nutrition assistance and other social services, we conducted a systematic search in the CFDA to identify programs that have some role in providing food and nutrition assistance and the respective agencies responsible for administering each of these programs. We also interviewed federal officials and reviewed agencies' Web sites. In addition, we reviewed related federal legislation—such as the Food, Conservation, and Energy Act of 2008 (2008 Farm Bill) and the Child Nutrition and WIC Reauthorization Act of 2004—to search for new grant programs or pilot projects that provide or support food and nutrition assistance. From this search, we identified 70 potential food and nutrition-related programs.

Using our initial collection of 70 programs, we limited the list to programs that (1) mentioned food or nutrition assistance in their CFDA profile or on the agency's Web site or (2) allowed funds to be used to build the infrastructure within, or the coordination across, food and nutrition assistance programs. We then excluded any programs that met one or more of the following criteria:

Food and nutrition assistance is not the primary objective of the program but is one of multiple social support services.

The program did not exist or was not funded in fiscal year 2008.

The program provides fungible funds to states or individuals that may be used for, but are not required to be spent on, the purchase of food.

The program supports infrastructure costs for a range of programs or a facility, which can include, but are not limited to, food and nutrition assistance-related functions.

The program is a dedicated funding stream that supports a program or a component of a food assistance program already included in our review. For example, the Nutrition Services Incentive Program (NSIP) provides funds and commodities to support two Department of Health and Human Services (HHS) programs: the Elderly Nutrition Program: Home-Delivered and Congregate Nutrition Services, and Grants to American Indian, Alaska Native, and Native Hawaiian Organizations for Nutrition and Supportive Services; therefore, we did not consider NSIP a separate program in this review.

The program is a federal effort that processes or delivers food to organizations that administer food and nutrition assistance programs, such as the food distribution and price support functions of the U.S. Department of Agriculture's (USDA) Farm Service Agency.

The program's funds are directed toward research or nutrition education or outreach only.
Department of Agriculture’s (USDA) Farm Service Agency. Program funds that are directed toward research or nutritional education or outreach only. We excluded programs that focus solely on nutrition education because of previous GAO work in this area. Examples of nutrition education programs include Team Nutrition Initiative and Expanded Food and Nutrition Education Program. Other programs that have nutrition education components but primarily provide food assistance—such as the Supplemental Nutrition Assistance Program (SNAP), Special Supplemental Nutrition Program for Women, Infants, and Children (WIC), the National School Lunch Program, and the Child and Adult Care Food Program—are included in this review. Once initial program determinations were made, we sent e-mails to the agencies that had only programs excluded from our program list. These agencies included the Corporation for National and Community Service (CNCS), Department of Defense (DOD), Department of Housing and Urban Development (HUD), Department of Veterans Affairs (VA), and the following offices within HHS: Administration for Children and Families; Health, Resources, and Services Administration; Indian Health Services; Centers for Medicare and Medicaid Services (CMS); and Centers for Disease Control and Prevention (CDC). All liaisons confirmed our exclusion decisions, with the exception of officials from CMS. For the agencies with programs that met our inclusion criteria, we held follow-up meetings or corresponded with agency liaisons from three agencies—USDA, HHS, and the Department of Homeland Security (DHS)—to confirm or offer feedback on our decisions. This process resulted in the 18 programs included in our engagement. See table 6 for a full list of included and excluded programs. To show the prevalence of food insecurity among U.S. households from 1995 to 2008, we presented data from the Current Population Survey (CPS), a nationally representative survey with comparable measures across years. Food insecurity is measured each year by the USDA Economic Research Service using the Food Security Supplement of the CPS. The survey asks individuals 10 questions (18 questions are asked if the household contains children 18 years of age or younger) about behaviors or conditions known to characterize households having difficulty meeting basic food needs. The answers to the survey questions determine the food security status of each household and, collectively, allow USDA to monitor and track changes in food insecurity among U.S. households. The food insecurity prevalence rates are sample-based estimates. All food security rates presented in this report are statistically significant (different than zero) at the 90 percent confidence level, and rates for different subpopulations are presented only where there are statistically significant differences between these populations. More information on the confidence intervals around the food insecurity estimates is presented in appendix II. While the food security data have some limitations, we consider these data reliable and appropriate for this engagement. See appendix II for more information on the food security data. To determine how much money federal agencies spent on food and nutrition programs, we analyzed data from the Consolidated Federal Funds Report (CFFR)—a database that compiles expenditures or obligations from federal agencies. These data are not entirely consistent across programs. 
To determine how much money federal agencies spent on food and nutrition programs, we analyzed data from the Consolidated Federal Funds Report (CFFR)—a database that compiles expenditures or obligations from federal agencies. These data are not entirely consistent across programs. For example, USDA officials reported obligations, while the Administration on Aging reported that the amounts in the CFFR are comparable to the amount of federal funds that states and tribes spent in fiscal year 2008 to support the agency's nutrition assistance programs. Programs also differ in whether and how they report funds dedicated to administrative efforts to the CFFR. In addition, agency officials told us that some spending amounts were not included in the CFFR, and for those programs, we contacted agencies directly to obtain spending amounts. Once we compiled the spending amounts for each program, we contacted budget officials at each agency to confirm the amounts. In several cases, we combined the CFFR totals with additional spending information provided by agency officials to ensure an accurate reporting of spending (see notes in table 2). After speaking with agency officials and interviewing a Census Bureau official with detailed knowledge of the CFFR database, we determined the data are reliable and appropriate for our engagement.

To determine the number of individuals and households participating in USDA, HHS, and DHS food and nutrition assistance programs and the quantity of benefits distributed, we relied on publicly available data from these agencies. Because these data are used for background purposes only, we did not assess their reliability.

To determine what is known about the effects food and nutrition assistance programs have on outcomes related to their program goals, we began by compiling a list of program goals based on our review of federal statutes and regulations and discussions with agency officials. We then used a large-scale literature review conducted by USDA's Economic Research Service and conducted our own, smaller-scale literature review of studies that addressed the impacts of food and nutrition assistance programs. The Economic Research Service literature review—Effects of Food Assistance and Nutrition Programs on Nutrition and Health—evaluated available research on the effectiveness of USDA food and nutrition assistance programs produced or published between 1973 and 2002. Our literature review was designed to capture research on USDA programs published between January 2002 and March 2009, as well as on programs administered by HHS and DHS between January 1995 and March 2009.

Our initial literature searches returned hundreds of studies. We then narrowed the results to research that (1) examined the effects of program participation on nutrition- or health-related outcomes or the effects of the programs on the agricultural economy and (2) either compared a participant group with a nonparticipant group or was longitudinal in nature. These criteria allowed us to reduce the number of potential studies to fewer than 125. From this list we selected a sample of 35 studies to review. To ensure our sample did not inadvertently omit any seminal research, we consulted experts at USDA and HHS. Because available research on the smaller programs is limited, any studies of these programs that we identified were automatically included. Each of the 35 studies chosen was systematically reviewed, and information on each study's design, methodology, limitations, and findings was compiled and analyzed. Of the 35 studies, we deemed five too methodologically flawed or limited for our purposes.
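The narrowing step just described is essentially a two-part filter: relevant outcomes and a credible design. A minimal sketch in Python, with invented field names, since the actual review was done by reading studies rather than querying structured data:

from dataclasses import dataclass

@dataclass
class Study:
    title: str
    examines_nutrition_or_health_outcomes: bool
    examines_agricultural_economy_effects: bool
    has_comparison_group: bool   # participant vs. nonparticipant design
    is_longitudinal: bool

def meets_criteria(s: Study) -> bool:
    relevant_outcomes = (s.examines_nutrition_or_health_outcomes
                         or s.examines_agricultural_economy_effects)
    sound_design = s.has_comparison_group or s.is_longitudinal
    return relevant_outcomes and sound_design

studies = [
    Study("WIC and birth outcomes", True, False, True, False),
    Study("Descriptive program history", False, False, False, False),
]
print([s.title for s in studies if meets_criteria(s)])   # ['WIC and birth outcomes']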
Although the Economic Research Service literature review and the research selected for our literature review were considered to be methodologically sound, it is important to understand that certain limitations may prevent firm conclusions regarding the effects of the programs. For example, the data used in some of the studies are dated, and programs may have changed substantially since the data were collected or the research was completed. In addition, some of the research examined pilot or demonstration projects and thus provides only suggestive evidence of actual program impacts. The samples used in some studies may prevent generalizing their findings to wider populations. Furthermore, selection bias is a concern in much of the literature, as few randomized controlled experiments exist. Selection bias can occur for many reasons. For example, in voluntary programs, those who choose to participate (or to stop participating) may be systematically different from those who choose not to participate, and the consequence can be to make a program appear more (or less) effective than it actually is. With few exceptions, the academic literature related to programs’ effectiveness did not directly examine whether programs were meeting their legislative and program goals. Therefore, we were required to assess which program outcomes addressed in the literature were related to these goals. To do this, we first identified the goals of each program by reviewing relevant federal statutes and regulations, as well as consulting agency officials. Second, we reviewed the impacts addressed in the literature and assessed which program goals, if any, they were related to. We then assessed the relevance of each impact to each program goal. A GAO economist independently performed a similar assessment. Last, the assessments were reconciled with the help of the methodologist assisting on the engagement. The methodology for our determinations regarding which program outcomes were related to which program goals was shared with agency officials, who expressed no concerns about its validity. We visited California, Illinois, and Maryland. We also conducted phone interviews with officials and providers in Oregon and Texas. The states that we selected represent a combination of urban and rural demographics and provide geographic distribution. We also selected states and local areas based on recommendations from federal and state officials and relevant experts. The information we collected from our site visits helped inform our understanding of the complex issues related to food assistance. These site visits also helped us better understand the implications of providing food assistance through multiple programs and agencies. We conducted this performance audit from February 2009 to March 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence we obtained provides a reasonable basis for our findings and conclusions.
Appendix III: Selected Program Goals
Assist states to initiate, maintain, and expand nonprofit food service programs for children or adults in nonresidential institutions that provide care. Enable nonresidential institutions to provide nutritious food service to participants.
Improve the quality of meals or level of services provided or increase participation in the program at adult day care centers. Provide food to help meet the nutritional needs of the target population. Meet the food needs of low-income individuals. Increase the self-reliance of communities in providing for the food needs of the communities. Promote comprehensive responses to local food, farm, and nutrition issues. Meet specific state, local, or neighborhood food and agricultural needs, including needs relating to infrastructure improvement and development, planning for long-term solutions, or the creation of innovative marketing activities that mutually benefit agricultural producers and low-income consumers. Reduce hunger and food insecurity. Promote socialization of older individuals. Promote the health and well-being of older individuals by assisting such individuals to gain access to nutrition and other disease prevention and health promotion services to delay the onset of adverse health conditions resulting from poor nutritional health or sedentary behavior. Provide shelter, food, and supportive services to homeless individuals and to help them access other services. Provide funding to help create more effective and innovative local programs. Perform minor rehabilitation of mass shelter and mass feeding facilities to make them safe and sanitary and to bring them into compliance with local building codes. Provide emergency food and shelter to needy individuals through private organizations and local governments. Raise the level of nutrition among low-income households. Alleviate hunger and malnutrition in low-income households. Increase food purchasing power for eligible households. Strengthen the U.S. agricultural sector. Promote more orderly marketing and distribution of food. Permit low-income households to obtain a more nutritious diet through normal channels of trade. Make fresh fruits and vegetables available in elementary schools. Promote the delivery of supportive services, including nutrition services, to American Indians, Alaskan Natives, and Native Hawaiians. Safeguard the health and well-being of the nation’s children. Encourage the domestic consumption of nutritious agricultural commodities and other foods. Fund nutrition assistance programs for needy people. Safeguard the health and well-being of the nation’s children. Encourage the domestic consumption of agricultural and other foods by assisting states to more effectively meet the nutritional needs of children. Assist the states and the Department of Defense to initiate, maintain, or expand nonprofit breakfast programs in all schools that apply for assistance and agree to carry out a nonprofit breakfast program. Provide fresh, nutritious, unprepared produce to low-income seniors from farmers’ markets, roadside stands, and community supported agriculture programs. Increase the consumption of agricultural commodities. Expand or aid the expansion of farmers’ markets, roadside stands, and community supported agriculture programs. Develop or aid in the development of new farmers’ markets, roadside stands, and community supported agriculture programs. Encourage consumption of fluid milk by U.S. children in nonprofit schools, high school grade and under, that do not participate in federal meal service programs. Encourage consumption of fluid milk by U.S.
children in nonprofit institutions devoted to the care and training of children, such as nursery schools and child care centers, that do not participate in federal meal service programs. Safeguard the health and well-being of the nation’s children. Encourage the domestic consumption of agricultural and other foods by assisting states to more effectively meet the nutritional needs of children. Provide food service to children from needy areas during periods when area schools are closed for vacation. Assist states to initiate and maintain nonprofit food service programs for children in service institutions. Raise the level of nutrition among low-income households. Alleviate hunger and malnutrition in low-income households. Increase food purchasing power for eligible households. Strengthen the U.S. agricultural sector. Promote more orderly marketing and distribution of food. Permit low-income households to obtain a more nutritious diet through normal channels of trade. Make maximum use of the nation’s agricultural abundance. Expand and improve the domestic distribution of price-supported commodities. Make excess agricultural commodities available without charge for use by eligible recipient agencies for food assistance. Improve the mental and physical health of low-income pregnant, postpartum, and breastfeeding women, infants, and young children. Prevent the occurrence of health problems, including drug abuse, and improve the health status of the target population. Provide supplemental foods and nutrition education to the target population. Provide fresh, nutritious, unprepared foods from farmers’ markets to women, infants, and children at nutritional risk. Increase awareness and use of farmers’ markets and sales at such markets. In addition to the contact named above, Kathryn Larin, Assistant Director; Cheri Harrington, Analyst-in-Charge; Jacques Arsenault; David Barish; Nancy Cosentino; Sara Edmondson; Alex Galuten; Charlene Johnson; Kirsten Lauber; Jean McSween; Mimi Nguyen; Susan Offutt; Jessica Orr; Rhiannon Patterson; Catherine Roark; Nyree Ryder Tee; Gregory Whitney; and Charles Willson made significant contributions to this report.
Related GAO Products
School Meal Programs: Experiences of the States and Districts That Eliminated Reduced-price Fees. GAO-09-584. Washington, D.C.: July 17, 2009. Food Stamp Program: Options for Delivering Financial Incentives to Participants for Purchasing Targeted Foods. GAO-08-415. Washington, D.C.: July 30, 2008. Department of Agriculture, Food and Nutrition Service: Special Supplemental Nutrition Program for Women, Infants and Children (WIC): Revisions in the WIC Food Packages. GAO-08-358R. Washington, D.C.: December 17, 2007. Nutrition Education: USDA Provides Services through Multiple Programs, but Stronger Linkages among Efforts Are Needed. GAO-04-528. Washington, D.C.: April 27, 2004. Federal Food Safety and Security System: Fundamental Restructuring Is Needed to Address Fragmentation and Overlap. GAO-04-588T. Washington, D.C.: March 30, 2004. Food Stamp Program: Steps Have Been Taken to Increase Participation of Working Families, but Better Tracking of Efforts Is Needed. GAO-04-346. Washington, D.C.: March 5, 2004. School Lunch Program: Efforts Needed to Improve Nutrition and Encourage Healthy Eating. GAO-03-506. Washington, D.C.: May 9, 2003. Fruits and Vegetables: Enhanced Federal Efforts to Increase Consumption Could Yield Health Benefits for Americans. GAO-02-657. Washington, D.C.: July 25, 2002.
Food Stamp Program: States’ Use of Options and Waivers to Improve Program Administration and Promote Access. GAO-02-409. Washington, D.C.: February 22, 2002. Means-Tested Programs: Determining Financial Eligibility Is Cumbersome and Can Be Simplified. GAO-02-58. Washington, D.C.: November 2, 2001. Food Assistance: Research Provides Limited Information on the Effectiveness of Specific WIC Nutrition Services. GAO-01-442. Washington, D.C.: March 30, 2001. Food Assistance: Performance Measures for Assessing Three WIC Services. GAO-01-339. Washington, D.C.: February 28, 2001. Title III, Older Americans Act: Carryover Funds Are Not Creating a Serious Meal Service Problem Nationwide. GAO-01-211. Washington, D.C.: January 9, 2001. Food Assistance: Options for Improving Nutrition for Older Americans. GAO/RCED-00-238. Washington, D.C.: August 17, 2000. Early Education and Care: Overlap Indicates Need to Assess Crosscutting Programs. GAO/HEHS-00-78. Washington, D.C.: April 28, 2000. Managing for Results: Barriers to Interagency Coordination. GAO/GGD-00-106. Washington, D.C.: March 29, 2000. Welfare Programs: Opportunities to Consolidate and Increase Program Efficiencies. GAO/HEHS-95-139. Washington, D.C.: May 31, 1995. Food Assistance Programs. GAO/RCED-95-115R. Washington, D.C.: February 28, 1995. Food Assistance: USDA’s Multiprogram Approach. GAO/RCED-94-33. Washington, D.C.: November 24, 1993. Early Intervention: Federal Investments Like WIC Can Produce Savings. GAO/HRD-92-18. Washington, D.C.: April 7, 1992. Food Assistance Programs: Recipient and Expert Views on Food Assistance at Four Indian Reservations. GAO/RCED-90-152. Washington, D.C.: June 18, 1990. Food Assistance Programs: Nutritional Adequacy of Primary Food Programs on Four Indian Reservations. GAO/RCED-89-177. Washington, D.C.: September 29, 1989.
The federal government spends billions of dollars every year on domestic food assistance programs. The U.S. Department of Agriculture administers most of these programs and monitors the prevalence of food insecurity--that is, the percentage of U.S. households that were unable to afford enough food sometime during the year. Other federal agencies also fund food assistance programs; however, comprehensive and consolidated information on the multiple programs is not readily available. Congress asked GAO to examine: (1) the prevalence of food insecurity in the United States, (2) spending on food assistance programs, (3) what is known about the effectiveness of these programs in meeting program goals, and (4) the implications of providing food assistance through multiple programs and agencies. GAO's steps included analyzing food security and program spending data, analyzing studies on program effectiveness, analyzing relevant federal laws and regulations, conducting site visits, and interviewing relevant experts and officials. The prevalence of food insecurity hovered between 10 and 12 percent over the past decade until it rose to nearly 15 percent (or about 17 million households) in 2008. Households with incomes below the poverty line, households headed by single parents, minority households, and those with children had higher-than-average rates of food insecurity. These households were more likely to report, for example, that they had been hungry but did not eat because there was not enough money for food. While some households were able to protect children from the effects of food insecurity, many could not. In more than 4.3 million households, children--as well as adults--were affected by food insecurity sometime during the year. The federal government spent more than $62.5 billion on 18 domestic food and nutrition assistance programs in fiscal year 2008. The five largest food assistance programs--the Supplemental Nutrition Assistance Program (SNAP); the National School Lunch Program; the Special Supplemental Nutrition Program for Women, Infants, and Children (WIC); the Child and Adult Care Food Program; and the School Breakfast Program--accounted for 95 percent of total spending on the 18 programs. Since 1995, SNAP spending has fluctuated while spending on the other large programs has remained relatively stable. Economic conditions--such as unemployment or poverty--and other factors can affect spending on some programs, particularly SNAP. Research suggests that participation in 7 of the programs we reviewed--including WIC, the National School Lunch Program, the School Breakfast Program, and SNAP--is associated with positive health and nutrition outcomes consistent with programs' goals, such as raising the level of nutrition among low-income households, safeguarding the health and well-being of the nation's children, and strengthening the agricultural economy. However, little is known about the effectiveness of the remaining 11 programs because they have not been well studied. Federal food assistance is provided through a decentralized system that involves multiple federal, state, and local organizations. The complex network of 18 food assistance programs emerged piecemeal over the past several decades to meet various needs. Agency officials and local providers told us that the multiple food assistance programs help to increase access to food for vulnerable or target populations.
However, the 18 food assistance programs show signs of program overlap, which can create unnecessary work and lead to inefficient use of resources. For example, some of the programs provide comparable benefits to similar target populations. Further, overlapping eligibility requirements create duplicative work for both service providers and applicants. Consolidating programs, however, entails difficult trade-offs. Such actions could improve efficiency and save administrative dollars but could also make it more difficult to achieve the goal of targeting services to specific populations, such as pregnant women, children, and the elderly.
VA is responsible for administering health care and other benefits, such as compensation and pensions, life insurance protection, and home mortgage loan guarantees, that affect the lives of more than 25 million veterans and approximately 44 million members of their families. In providing these benefits and services, VA collects and maintains sensitive medical record and benefit payment information for veterans and their family members. AAC is one of VA’s three centralized data centers. It maintains the department’s financial management and other departmentwide systems, including centralized accounting, payroll, vendor payment, debt collection, benefits delivery, and medical systems. AAC also provides, for a fee, information technology services to other government agencies. As of November 1998, the center either provided or had entered into contracts to provide information technology services, including batch and online processing and workers’ compensation and financial management computer applications, for nine other federal agencies. In fiscal year 1998, VA’s payroll was more than $11 billion, and the centralized accounting system processed more than $7 billion in administrative payments. AAC also maintains medical information for both inpatient and outpatient care. For example, AAC systems document admission, diagnosis, surgical procedure, and discharge information for each stay in a VA hospital, nursing home, or domiciliary. In addition, AAC systems contain information concerning each of the guaranteed or insured loans closed by VA since 1944, including about 3.5 million active loans. As one of VA’s three centralized data centers, AAC is part of a vast array of computer systems and telecommunication networks that VA relies on to support its operations and store the sensitive information the department collects in carrying out its mission. The remaining two data centers support VA’s compensation, pension, education, and life insurance benefit programs. In addition to the three centralized data centers, the Veterans Health Administration operates 172 hospitals at locations across the country that run local financial management and medical support systems on their own computer systems. These data centers and hospitals are interconnected, along with 58 Veterans Benefits Administration regional offices, the VA headquarters office, and customer organizations such as non-VA hospitals and medical universities, through a wide area network. Altogether, VA’s network serves over 700 locations nationwide, including Puerto Rico and the Philippines. Our objective was to evaluate and test the effectiveness of information system general controls over the financial systems maintained and operated by VA at AAC. General controls, however, also affect the security and reliability of nonfinancial information, such as veteran medical and loan data, maintained at this processing center. Specifically, we evaluated information system general controls intended to protect data, files, programs, and equipment from unauthorized access, modification, and destruction; prevent the introduction of unauthorized changes to application and system software; provide adequate segregation of duties involving application programming, system programming, computer operations, security, and quality assurance; ensure recovery of computer processing operations in case of a disaster or other unexpected interruption; and ensure that an effective computer security planning and management program is in place.
We restricted our evaluation to AAC because VA’s Office of Inspector General was planning to review information system general controls for fiscal year 1998 at the Hines and Philadelphia benefits delivery centers. To evaluate information system general controls, we identified and reviewed AAC’s general control policies and procedures. We also tested and observed the operation of information system general controls over AAC’s information systems to determine whether they were in place, adequately designed, and operating effectively. In addition, we determined the status of previously identified computer security weaknesses, but did not perform any follow-up penetration testing. We performed our review from October 1998 through March 1999, in accordance with generally accepted government auditing standards. Our evaluation was based on the guidance provided in our Federal Information System Controls Audit Manual (FISCAM) and the results of our May 1998 study of security management best practices at leading organizations. After we completed our fieldwork, the director of AAC provided us with updated information regarding corrective actions. We did not verify these corrective actions but plan to do so as part of future reviews. VA provided us with written comments on a draft of this report, which are discussed in the “Agency Comments” section and reprinted in appendix I. AAC has made substantial progress in addressing the computer security issues we previously identified. At the time of our review in 1998, AAC had corrected 40 of the 46 weaknesses that we discussed with the director of AAC and summarized in our September 1998 report on VA computer security. AAC had addressed most of the access control, system software, segregation of duties, and service continuity weaknesses we identified in 1997 and had improved computer security planning and management. For example, AAC had reduced the number of users with access to the computer room; restricted access to certain sensitive libraries and audit information; established password and dial-in access controls; developed a formal system software change control process; expanded tests of its disaster recovery plan; and established a centralized computer security group. AAC was also proactive in addressing additional computer security issues we identified during our current review. We identified a continuing risk of unauthorized access to financial and sensitive veteran medical and benefit information because the center had not fully implemented a comprehensive computer security planning and management program. If properly designed, such a program should identify and correct the types of additional access control and system software weaknesses that we found. In addition, AAC risks not detecting certain types of unauthorized access because it had not completely corrected the user access monitoring weaknesses we previously identified. Our May 1998 study of security management best practices found that a comprehensive computer security planning and management program is essential to ensure that information system controls work effectively on a continuing basis. Under an effective computer security planning and management program, staff (1) periodically assess risks, (2) implement comprehensive policies and procedures, (3) promote security awareness, and (4) monitor and evaluate the effectiveness of the computer security environment.
In addition, a central security staff is important for providing guidance and oversight for the computer security planning and management program to ensure an effective information system control environment. AAC had established a solid foundation for its computer security planning and management program by creating a centralized computer security group, developing a comprehensive security policy, and promoting security awareness. However, AAC had not yet instituted a framework for continually assessing risks or routinely monitoring and evaluating the effectiveness of information system controls. In March 1999, the director of AAC told us that the center plans to expand its computer security planning and management program to include these aspects. In addition, the director later told us that AAC had augmented its security management organization by hiring two additional security experts in May 1999. A comprehensive computer security planning and management program should provide AAC with a solid foundation for ensuring that appropriate controls are designed, implemented, and operating effectively. Periodically assessing risk is an important element of computer security planning because it provides the foundation for the other aspects of computer security management. Risk assessments not only help management determine which controls will most effectively mitigate risks, but also increase awareness and, thus, generate support for adopted policies and controls. An effective risk assessment framework generally includes procedures that link security to business needs and provide for continually managing risk. VA policy requires that risk assessments be performed when significant changes are made to a facility or its computer systems, but at least every 3 years. AAC had not formally reassessed risk since 1996 even though significant changes to the facility and its systems had occurred. For example, AAC management told us that the center had replaced its mainframe computer, implemented a new mainframe operating system, and expanded the facility to accommodate a VA finance center in 1998. Although the director of AAC told us in March 1999 that changes in computer security risks were considered by implementation teams responsible for these events, documentation of such considerations was not available. Formal risk assessments should be performed for such significant changes. The director of AAC also told us that management would perform a risk assessment later in 1999 to comply with VA policy. One reason that AAC had not formally assessed risks when these significant changes occurred was that the center had not developed a framework for assessing and managing risk on a continuing basis. In March 1999, the director of AAC told us that a risk assessment framework would be developed and added to the AAC security handbook. According to the director, this planned risk assessment framework will define the types of changes that require a risk assessment; specify risk assessment procedures that can be adapted to different situations; indicate who should conduct the assessment, preferably a mix of individuals with knowledge of business operations, security controls, and technical aspects of the computer systems involved; and describe requirements for documenting the results of the assessment. In addition to assessing risk to identify appropriate controls, it is also important to determine if the controls in place are operating as intended to reduce risk.
Our May 1998 study of security management best practices found that an effective control evaluation program includes processes for (1) monitoring compliance with established information system control policies and guidelines, (2) testing the effectiveness of information system controls, and (3) improving information system controls based on the results of these activities. AAC had not established a program to routinely monitor and evaluate the effectiveness of information system controls. Such a program would allow AAC to ensure that policies remain appropriate and that controls accomplish their intended purpose. Although AAC had substantially corrected previously identified computer security weaknesses, we tested additional access and system software controls and found weaknesses that posed risks of unauthorized modification, disclosure, or destruction of financial and sensitive veteran medical and benefit information. These weaknesses included inadequate limits on authorized users’ access to sensitive data and programs, inadequate maintenance of the system software environment, and inadequate review of network security. Several of these weaknesses could have been identified and corrected if AAC had been monitoring compliance with established procedures. For example, periodically reviewing AAC user access authority to ensure that it was limited to the minimum access level required based on job requirements would have allowed AAC to discover and fix the types of additional access control weaknesses we identified. Likewise, routinely evaluating the technical implementation of its system software would have permitted AAC to eliminate or mitigate the additional system software exposures we identified. A program to regularly test information system controls would also have allowed AAC to detect additional network security weaknesses. For example, using network analysis software designed to detect network vulnerabilities, we determined that intrusion attempts on 2 of the 10 network access control paths would not be detected. Although AAC fixed this problem before our fieldwork was completed, AAC staff could have identified and corrected this exposure using similar network analysis software available to them. AAC staff told us that they also plan to begin evaluating the intrusion detection system periodically. In addition, AAC had not established a process to test network security when major changes to the network occur. Although AAC had used network analysis software to detect network vulnerabilities earlier, in October 1998, we determined that both a production and a development network system had a system program with vulnerabilities commonly known to the hacker community. These vulnerabilities could have provided the opportunity to bypass security controls and gain unlimited access to AAC network systems. Although AAC staff determined that the vulnerable programs were no longer needed and deleted them before our fieldwork was completed, these vulnerabilities could have been prevented had network security been reassessed when the network environment changed. AAC was also not adequately monitoring certain user access activity. A comprehensive user access monitoring program would include routinely reviewing user access activity to identify and investigate both failed attempts to access sensitive data and resources and unusual or suspicious patterns of successful access to sensitive data and resources. Such a program is critical to ensuring that improper access to sensitive information would be detected.
Because the security information available is likely to be too voluminous to review routinely, the most effective monitoring efforts are those that selectively target unauthorized, unusual, and suspicious patterns of access to sensitive data and resources, such as security software, system software, application programs, and production data. AAC had begun reviewing failed attempts to access sensitive data and resources, but had not established a program to monitor successful access to these resources for unusual or suspicious activity. In March 1999, the director of AAC told us that the center is expanding its user access activity monitoring to identify and investigate unusual or suspicious patterns of access to sensitive resources, such as updates to security files that were not made by security staff; changes to sensitive system files that were not performed by system programmers; modifications to production application programs that were not initiated by production control staff; revisions to production data that were completed by system or application programmers; and deviations from normal patterns of access to sensitive veteran medical and benefit data. In addition to the access activity monitoring and computer security program planning and management weaknesses that remained open from 1997, we identified 16 additional issues during our 1998 review. For example, AAC had not restricted access to certain sensitive data and programs based on job requirements; routinely reviewed access authorities granted to employees to ensure that they were still appropriate; adequately reviewed certain components of its operating system to ensure continued system integrity; adequately documented changes to network servers; documented testing of certain emergency changes to its financial systems; or issued technical security standards for maintaining the integrity of system and security software for certain operating system environments. AAC had corrected 6 of the 16 additional issues identified in 1998 before we completed our site visit in Austin. Addressing the remaining additional issues should help AAC ensure that an effective computer security environment is achieved and maintained. We discussed these issues with AAC management and staff and were told that they would be addressed by September 1999. AAC had made substantial progress in improving information system general controls. In addition to correcting most of the access control, system software, segregation of duties, and service continuity weaknesses we had previously identified, AAC had strengthened its computer security planning and management program by creating a centralized computer security group, developing a comprehensive security policy, and promoting security awareness. Until AAC completes implementing its computer security planning and management program by establishing a framework for continually assessing risks and routinely monitoring and evaluating the effectiveness of information system controls, it will not have adequate assurance that appropriate controls are established and operating effectively. We identified additional access, system software, and application change control weaknesses that continued to place financial and sensitive veteran medical and benefit information on AAC systems at risk of improper modification, disclosure, or destruction and assets at risk of loss. Unauthorized access may not be detected because AAC had not begun identifying and investigating unusual or suspicious patterns of successful access to sensitive data and resources.
AAC could have identified and corrected these types of weaknesses, which could also adversely affect other agencies that depend on AAC for computer processing support, had it fully implemented an effective computer security planning and management program. We recommend that the Acting VA Chief Information Officer (CIO) work with the director of AAC to implement policies and procedures for assessing and managing risk on a continuing basis; establish processes for (1) monitoring compliance with established information system control policies and procedures, (2) testing the effectiveness of information system controls, and (3) improving information system controls based on the results of these activities; and expand the center’s user access activity monitoring program to identify and investigate unusual or suspicious patterns of successful access to sensitive data and resources. We also recommend that the Acting VA CIO coordinate with the director of AAC to ensure that the remaining computer security weaknesses are corrected. These weaknesses are summarized in this report and detailed in a separate report, which is designated for “Limited Official Use,” also issued today. In commenting on a draft of this report, VA agreed to implement our recommendations by September 30, 1999. Specifically, VA stated that AAC would update its security handbook to include a risk assessment framework, establish a program to routinely monitor and evaluate the effectiveness of controls, and complete procedures for monitoring successful access to sensitive computer resources by the end of September 1999. VA also informed us that AAC had taken action to correct all but three of the other weaknesses we identified and plans to address the remaining weaknesses by September 30, 1999. Within 60 days of the date of this letter, we would appreciate receiving a statement on actions taken to address our recommendations. We would like to thank AAC for the courtesy and cooperation extended to our audit team. We are sending copies of this report to Senator Arlen Specter, Senator Ted Stevens, Senator Robert C. Byrd, Senator Fred Thompson, Senator Joseph Lieberman, Senator John D. Rockefeller IV, Representative C. W. Bill Young, Representative Lane Evans, Representative Bob Stump, Representative David Obey, Representative Dan Burton, and Representative Henry A. Waxman in their capacities as Chairmen or Ranking Minority Members of Senate and House Committees. We are also sending copies to the Honorable Togo D. West, Jr., Secretary of Veterans Affairs, and the Honorable Jacob J. Lew, Director of the Office of Management and Budget. In addition, copies will be made available to others upon request. If you have any questions or wish to discuss this report, please contact me at (202) 512-3317. Major contributors to this report are listed in appendix II. David W. Irvin, Assistant Director; Debra M. Conner, Senior EDP Auditor; Shannon Q. Cross, Senior Evaluator; and Charles M. Vrabel, Senior EDP Auditor.
Pursuant to a legislative requirement, GAO assessed the effectiveness of information system general controls at the Department of Veterans Affairs' (VA) Austin Automation Center (AAC). GAO noted that: (1) AAC had made substantial progress in correcting specific computer security weaknesses that GAO identified in its previous evaluation of information system controls; (2) AAC had established a solid foundation for its computer security planning and management program by creating a centralized computer security group, developing a comprehensive security policy, and promoting security awareness; (3) however, AAC had not yet established a framework for continually assessing risks and routinely monitoring and evaluating the effectiveness of information system controls; (4) GAO also identified additional computer security weaknesses that increased the risk of inadvertent or deliberate misuse, fraudulent use, improper disclosure, and destruction of financial and sensitive veteran medical and benefit information on AAC systems; (5) an effective computer security planning and management program would have allowed AAC to identify and correct the types of additional weaknesses that GAO identified; (6) in addition, AAC continues to run the risk that unauthorized access may not be detected because it has not established a program to identify and investigate unusual or suspicious patterns of successful access to sensitive data and resources; (7) these weaknesses could also affect other agencies that depend on AAC information technology services; (8) AAC was very responsive in addressing newly identified security exposures and corrected several weaknesses before GAO's fieldwork was completed; (9) the Acting Assistant Secretary for Information Technology said VA would implement all of GAO's recommendations by September 30, 1999; and (10) addressing the remaining issues will help ensure that an effective computer security environment is achieved and maintained.
Ensuring access to postsecondary education while reducing vulnerability of aid programs to fraud, waste, abuse, and mismanagement is one of the key management challenges Education faces. Education helps millions of students enroll in higher education programs by providing for more than $50 billion in grants and loans annually. The department is responsible for ensuring that these programs are efficiently managed, establishing procedures to ensure that loans are repaid, and preventing fraud and abuse. Since 1990, we have identified Education’s grant and loan programs as high risk for fraud, waste, abuse, and mismanagement. Both Education and Congress have made changes to address management challenges in the student financial aid programs. Congress established Education’s Office of Federal Student Aid (FSA) as a performance-based organization in 1998. Its purpose is to increase the accountability of officials, provide greater flexibility in management, integrate information systems, reduce costs, and develop and maintain a system that contains complete, accurate, and timely data that can ensure program integrity. In 2001, Education established a Management Improvement Team (MIT) of senior managers to formulate strategies to address key management problems throughout the department. According to Education, MIT has developed a system to identify, track, and resolve audit and management issues both agencywide and in the student financial aid programs. Education has faced challenges in four areas related to its grant and loan programs. These are (1) financial aid system integration issues, (2) fraud and error in student aid application and disbursement processes, (3) defaulted student loans, and (4) human capital management. I would now like to briefly discuss each of these challenges. Education has spent millions of dollars to integrate and modernize its many financial aid systems in an effort to provide more information and better service to students, parents, institutions, and lenders. Effectively and efficiently investing in information technology requires, among other things, an institutional blueprint that defines in both business and technical terms the organization’s current and target operating environments and provides a transition road map. Because Education did not have this blueprint, commonly called an enterprise architecture, we recommended in 1997 that the department develop an architecture and establish standard reporting formats and data definitions. In September 2002, Education’s Office of the Inspector General (OIG) reported that the department had made progress in taking specific actions to lay the groundwork for an enterprise architecture. Still, critical elements need to be completed, including integrating separate architectures into a departmentwide architecture and fully implementing common identifiers for students and institutions to use in departmentwide system applications. Education is planning to brief us shortly about the department’s enterprise architecture and progress it has made. Also, in April 2002, we recommended that FSA and the department develop and include clear goals, strategies, and measures to better demonstrate its progress in implementing plans for integrating its financial aid systems in FSA’s performance plans and subsequent performance reports. With respect to modernization plans, we reported in November 2001 that FSA selected a viable, industry-accepted means of integrating its existing data on student loans and grants.
FSA has made progress in implementing this approach for its Common Origination and Disbursement process, which includes the implementation of a common record that institutions can use to submit student financial aid data for the Pell Grant and Direct Loan programs. The ultimate success of this process, however, hinges on addressing serious postimplementation operational problems and helping thousands of schools implement the common record. Further, as we reported in December 2002, FSA has not completed a number of elements that are important to managing any information technology investment. These include determining whether expected benefits are being achieved and tracking lessons learned related to schools’ implementation of the common record. We have recommended that FSA develop metrics, baseline data, and a tracking process for certain benefits expected from the system, and that it develop and implement a process for capturing and disseminating lessons learned to schools that have not yet implemented the common record. FSA has begun to act on both of these issues. Education has also faced challenges in ensuring that information reported on student aid applications is correct and that adequate internal controls are in place to prevent improper payments of grants and loans. The department has taken steps, in two pilot programs with the Internal Revenue Service (IRS), to match income reported on student aid applications with federal tax returns. To continue this income match and implement it on a broader scale, legislation to allow the IRS to release the information is necessary. Education has worked with the Department of the Treasury and the Office of Management and Budget to ask that the Congress enact such legislation. The department also verifies income information by asking 30 percent of applicants to provide copies of their tax returns to their student financial aid offices. We also found that, in addition to strengthening its controls over student aid applications, Education needed to address institutions that were disbursing grants to ineligible students. The department has taken steps to analyze student data to identify high concentrations of students over 65 and eligible noncitizens at individual institutions to determine whether problems exist that warrant further review. These actions are encouraging and, if properly implemented, should improve controls over these payments. A continuing challenge for Education and FSA is preventing defaults on student loans and collecting defaulted loans. While the national student loan default rate has decreased from 11.6 percent in fiscal year 1993 to 5.9 percent in fiscal year 2000, the cumulative amount of defaulted student loans has increased by almost $10 billion over the same period. Education and FSA have implemented several default management strategies, such as establishing electronic debiting as a repayment option and working with some guaranty agencies to set up alternatives to service and process claims for defaulted loans. Our analysis of FSA’s internal documents indicated that for fiscal years 2000 through 2002, FSA met or exceeded many of the goals related to these strategies. However, neither Congress nor the public can determine whether FSA’s default management goals have been met because Education did not prepare performance reports that conform to the requirements in the Higher Education Act.
FSA’s report to Congress on its performance in fiscal years 2000 and 2001 was not timely, nor did it indicate whether or not FSA met established performance goals. We have recommended that Education and FSA prepare and issue reports to Congress on FSA’s performance that are timely and clearly identify whether performance goals were met. Like other federal agencies, Education must address serious human capital issues, such as succession planning, because about one-third of Education’s workforce is eligible to retire. In June 2001, we recommended that the department develop human capital goals and measures for its performance plans. In April 2002, we recommended that the department and FSA coordinate closely to develop and implement a comprehensive human capital strategy. Education added a specific objective to its strategic plan and, in 2002, issued a comprehensive 5-year human capital plan that incorporates FSA. This plan outlines steps and time frames for improving human capital management and specifies four critical areas where improvements should be made: (1) top leadership commitment, (2) performance management, (3) workforce skills enhancement, and (4) leadership and succession planning. It will be important that Education focus continually on implementation of the plan to achieve results. Now, Mr. Chairman, I would like to discuss Education’s financial management challenges and the progress the department has made in addressing them. Weaknesses in Education’s financial management and information systems have limited its ability to achieve one of its key goals—improving financial management to help build a high-performing agency. Significant progress toward this goal was made recently when Education received an unqualified—or “clean”—opinion on its financial statements. Prior to this, with the exception of 1997, Education had not received a clean opinion since its first agencywide audit in 1995. While this is an important milestone for the department, significant management weaknesses remain that must be addressed for Education to meet its goal in this area. Beginning with the department’s first agencywide audit in 1995, Education’s auditors have repeatedly identified significant financial management weaknesses. These weaknesses included Education’s inability to provide the auditors with sufficient evidence to satisfy themselves about the accuracy or completeness of certain amounts included in the financial statements, including billions of dollars of adjustments to amounts reported in previous years’ financial statements. According to Education’s auditor, these adjustments were to correct “unnatural account balances” or otherwise adjust balances to the amount management’s analysis supported. The auditor reported that in many cases, the cause of the incorrect balances could not be definitively determined, and the adjusting entry prepared by management was a reasoned judgment of how to correct its accounts. Education’s auditors have also consistently reported major internal control weaknesses related to financial management systems and financial reporting.
These weaknesses included (1) the absence of a fully integrated financial management system, (2) deficiencies in financial management practices that require extensive analysis of accounts to resolve errors through manual adjustments, (3) the lack of a rigorous review of interim financial data for timely identification and correction of errors, (4) the inability to accumulate, analyze, and present reliable financial information in the form of financial statements, (5) the dependence on a variety of stopgap measures to prepare financial statements, (6) the insufficiency of compensating controls, such as top-level reviews, to address and to seek to compensate for systemic control weaknesses, and (7) the lack of a review to identify and quantify improper payments. Education’s auditors also reported that internal controls needed strengthening in numerous areas relating to Education’s investment of millions of dollars in property and equipment. Education has taken actions over the last several years to improve its financial management and to address the weaknesses identified. For example, during 2001, Education’s MIT developed specific actions to address issues raised in previous financial statement audits. According to a MIT report on its accomplishments, Education began performing certain critical reconciliations on a monthly basis and began preparing interim financial statements, which helped identify areas needing further study. Education also improved its internal controls over property and equipment, and its auditor did not report this area as a weakness in fiscal year 2002. In addition, according to Education’s auditor, during fiscal year 2002, the department implemented a new general ledger software package, and FSA implemented a new financial management system to support their management information reporting needs. The auditor also reported that the department implemented several processes during fiscal year 2002 to improve its financial management, including convening the Accounting Integrity Board, the Audit Steering Committee, and the Accounting Assurance Group to plan, implement, and manage quality accounting change control; establishing the Financial Statement Committee and continuing the Financial Statement Preparation Team and other special task force teams, all of which are designed to improve the financial statement processes; and developing and implementing reconciliation work plans, policies and procedures, specialized teams, and regular management reviews of the final work products, as well as management review for process improvement. While Education has made progress in addressing many of its weaknesses, in fiscal year 2002, the auditors again reported that significant financial management issues continued to impair the department’s ability to accumulate, analyze, and present reliable financial information. These problems, in part, resulted from inadequate internal controls over Education’s financial management systems and financial reporting process. The auditor also reported that weaknesses in the department’s ability to report accurate financial information on a timely basis were due to deficiencies in certain of the department’s financial management practices, including inadequate reconciliations and account analysis early in fiscal year 2002. The auditor added that issues associated with the transition to a new financial management system in fiscal year 2002 also contributed to the department’s difficulties in these areas.
While the auditor noted improvements in the latter part of the fiscal year, it continues to believe that the department needs to place additional focus on reconciliation procedures, account analysis, and financial reporting. Until these issues are fully resolved, Education’s ability to produce timely, accurate, and useful financial information for its managers and stakeholders will be greatly impeded. In addition, beginning with fiscal year 2004, Education and other major government agencies will be required to produce audited financial statements within 45 days after the end of the fiscal year, compared with 120 days for fiscal years 2002 and 2003. Education will need to continue to focus strongly on resolution of its financial management deficiencies in order to be in a position to meet these new reporting deadlines. As we testified before this Subcommittee in April 2002, we identified other internal control weaknesses that make Education vulnerable to improper payments and lost assets. In our testimony and related report, we stated that for May 1998 through September 2000, (1) weak internal controls over the grant and loan disbursement process failed to detect certain improper payments, (2) weak controls over third party draft processes increased Education’s vulnerability to improper payments, and (3) weak controls over government purchase cards resulted in some fraudulent, improper, and questionable purchases. We also reported that Education lacked adequate internal controls over computers acquired with purchase cards and third party drafts. Among other things, we found that computer purchases valued at almost $400,000 were not recorded in Education’s property records, and $200,000 of that computer equipment could not be located. In response to our work, Education made several changes to its policies and procedures to improve internal controls and program integrity. These changes were a step in the right direction, but in many cases, our follow-up work indicated that they had not been effectively implemented. In March 2002, we reported that vulnerabilities remained in all areas we reviewed, except for third party drafts, which were discontinued altogether. For example, we reported that Education developed a new approval process for its purchase card program; however, our testing of 3 months of purchase card statements under the new process found that over 20 percent lacked proper support for the items purchased. In October 2002, Education told us that new policies and procedures had been implemented to reduce the department’s vulnerability to future improper use of purchase cards. These new policies and procedures relate to reviewing and approving purchase card transactions and providing related training. Further, the department told us that misuse of purchase and travel cards is now specifically included in the department’s Table of Penalties, with the intended effect of reducing misuse and abuse of government-issued credit cards. Education also told us that it recognizes that reviewing and improving internal controls is an ongoing task and that it intends to remain vigilant in this area. These are positive steps that should help reduce the instances of improper purchases. Finally, Education will need to continue its actions in addressing weaknesses in its financial management information systems.
The Federal Financial Management Improvement Act (FFMIA) of 1996 requires agencies to institute financial management systems that substantially comply with federal financial management systems requirements, applicable accounting standards, and the federal government’s Standard General Ledger. Every year since FFMIA was enacted, Education’s auditors have reported that Education’s systems did not substantially comply with the act’s requirements. In previous years, the auditors reported that the absence of a fully integrated financial management system, deficiencies in the general ledger system, deficiencies in the manual adjustment process, and the need to strengthen other financial management controls, such as reconciliation processes, collectively impaired Education’s ability to accumulate, analyze, and present reliable financial information. In addition, according to Education’s auditor, although the department implemented a new financial management system during fiscal year 2002, issues associated with the transition to the new system contributed to difficulties in providing reliable, timely information for managing current operations and timely reporting of financial information to central agencies; therefore, Education still did not substantially comply with FFMIA’s requirements. Education also needs to address identified computer security weaknesses in its financial management and other information systems. In September 2001, we reported that Education had made progress in correcting certain information system control weaknesses. At the same time, we identified weaknesses in Education’s systems that place critical financial and sensitive grant information at risk of unauthorized access and disclosure, and key operations at risk of disruption. We recommended that Education correct certain information system control weaknesses and fully implement a comprehensive departmentwide computer security management program. In response, Education stated that it had developed a corrective action plan and is taking steps to further strengthen and develop a more comprehensive information security program. In addition, Education’s auditor reported that for fiscal year 2002, the department made progress in strengthening controls over its information technology processes, but needs to continue efforts to develop, implement, and maintain an agencywide risk-based information security plan, programs, and practices to provide security throughout the life cycle of all systems. In closing, Mr. Chairman, I want to reiterate that Education is taking actions and making substantial progress in addressing major challenges related to its student aid programs and financial management. At the same time, some very difficult issues remain that must be resolved before Education is able to produce relevant, reliable, and timely information to efficiently and effectively manage the department and provide full accountability to its stakeholders. Mr. Chairman, this concludes my statement. I would be happy to answer any questions you or other members of the Subcommittee may have.
In its 2003 performance and accountability report on the Department of Education, GAO identified challenges in, among other areas, student financial aid programs and financial management. The information GAO presents in this testimony is intended to assist Congress in assessing Education's progress in addressing and overcoming these challenges. Education has taken steps to address its continuing challenges of reducing vulnerabilities in its student aid programs and improving its financial management, such as establishing a senior management team to address management problems, including financial management, throughout the agency. While Education has made significant progress, weaknesses remain that will require the continued commitment and vigilance of Education's management to resolve. Reduce vulnerability of student aid programs to fraud, waste, abuse, and mismanagement: Education has made considerable changes to address the ongoing challenges in administering its student aid programs. However, Education needs to continue to address systems integration issues, reduce fraud and error in student aid application and disbursement processes, collect on student loan defaults, and improve its human capital management. Improve financial management: Education has implemented many actions to address its financial management weaknesses. Significant progress was made recently when Education received an unqualified--or "clean"--opinion on its financial statements for fiscal year 2002. While this is an important milestone for the department, internal control and systems weaknesses remain that impede Education's ability to produce timely, accurate, and useful financial information for its managers and stakeholders.
The national airspace system is a complex, interconnected, and interdependent network of systems, procedures, facilities, aircraft, and people that must work together to ensure safe and efficient operations. DOT, FAA, airlines, and airports all affect the efficiency of national airspace system operations. DOT works with FAA to set policy and operating standards for all aircraft and airports. As the agency responsible for managing the air traffic control system, FAA has the lead role in developing technological and other solutions that increase the efficiency and capacity of the national airspace system. FAA also provides funding to airports. The funding that airports receive from FAA for airport improvements is conditioned on open and nondiscriminatory access for the airlines and other users, and the airlines are free to schedule flights at any time throughout the day, except at airports that are subject to limits on scheduled operations. The airlines can also affect the efficiency of the airspace system through the number and types of aircraft that they choose to operate. As we previously reported, achieving the most efficient use of the capacity of the aviation system is difficult because it depends on a number of interrelated factors. The capacity of the aviation system is affected not only by airports’ infrastructure, including runways and terminal gates, but also, at any given time, by such factors as weather conditions, resulting in variation in available airport capacity. For example, some airports have parallel runways that can operate simultaneously in good weather but are too close together for simultaneous operations in bad weather, which reduces the number of aircraft that can take off and land. Another factor affecting capacity, apart from the capacity of individual airports, is the number of aircraft that can be safely accommodated in a given portion of airspace. If too many aircraft are trying to use the same airspace, some may be delayed on the ground and/or en route. Achieving the most efficient use of the national aviation system is contingent on a number of factors, among them the procedures and equipment used by FAA, the proficiency of the controllers in using these procedures and equipment to manage traffic efficiently, and whether and in what ways users are charged for the use of the airspace and airports. DOT and FAA can address flight delays primarily through enhancing and expanding capacity and implementing demand management measures. Capacity improvements: Capacity improvements can take the form of expanding capacity or enhancing existing capacity in the system. Expanding capacity includes the addition of new runways, taxiways, and other infrastructure improvements, which can reduce delays by increasing the number of aircraft that can land and depart and providing an airport with more flexibility during high-demand periods and inclement weather. Enhancing capacity includes improvements in air traffic control procedures or technologies that increase the efficiency of existing capacity, thereby reducing delays and maximizing the number of takeoffs and landings at an airport. Demand management measures: Examples include using administrative measures or economic incentives to change airline behavior.
Administrative measures include DOT-issued limits on hourly operations at specific airports, while economic incentives include FAA’s amended policy on rates and charges, which clarified the ability of airport operators to charge airlines landing fees that differ based on time of day. FAA’s actions to address flight delays are outlined in the agency’s strategic and annual business plans and the NextGen Implementation Plan. FAA’s 2009-2013 strategic plan, titled the Flight Plan, provides a 5-year view of the agency’s goals, related performance measures, and actions to achieve those goals. FAA’s Flight Plan and related annual business plans include four primary goals: Increased Safety, Greater Capacity, International Leadership, and Organizational Excellence. FAA’s goal of greater capacity is to “work with local governments and airspace users to provide increased capacity and better operational performance in the U.S. airspace system that reduces congestion and meets projected demand in an environmentally sound manner.” As part of this goal, FAA has outlined three objectives, one of which is to increase the reliability and on-time performance of the airlines. FAA’s progress toward meeting this goal is measured by its ability to achieve a national airspace system on-time arrival rate of 88 percent at the 35 OEP airports and maintain that level through 2013. Additionally, FAA’s Flight Plan and annual business plans assign actions across the agency—within FAA’s Air Traffic Organization and Office of Airports—to achieve this and other Flight Plan goals. In addition to outlining actions in FAA’s Flight Plan, the agency also issues an annual NextGen Implementation Plan that provides an overview of FAA’s ongoing transition to NextGen and lays out the agency’s vision for NextGen, now and into the midterm (defined as 2012 to 2018). The plan also identifies FAA’s goals for NextGen technology and program deployment and commitments made in support of NextGen. Recognizing the importance of near-term and midterm solutions, FAA requested that RTCA, Inc.—a private, not-for-profit corporation that develops consensus-based recommendations on communications, navigation, surveillance, and air traffic management system issues—create a NextGen Mid-Term Implementation Task Force to reach consensus within the aviation community on how to move forward with NextGen. The latest version of the NextGen Implementation Plan, issued in March 2010, incorporated the task force’s recommendations, which identified operational improvements that can be accelerated between now and 2018. FAA’s actions described in these plans are designed not only to reduce delays but also to improve safety, increase capacity, and reduce aviation’s environmental impact. Although these actions might reduce delays, flight delays can also be affected by factors generally outside FAA’s control, such as airline scheduling and business practices. For example, some airline business models rely on tight turnaround times between flights, which could make it more likely that flights scheduled later in the day are delayed. Additionally, except at slot-controlled airports, airlines can schedule flights at any time throughout the day without consideration of the extent to which the number of scheduled flights during a particular time period might exceed the airport’s available capacity.
DOT and FAA collect information on aviation delays through three primary databases—Airline Service Quality Performance (ASQP), Aviation System Performance Metrics (ASPM), and OPSNET. As table 1 shows, these databases vary in their purposes, scope, and measurement of delays. Figure 1 illustrates FAA facilities that control and manage air traffic over the United States and how each database captures points where flights could be delayed. For example, ASQP and ASPM measure delays against airlines’ schedules or flight plans, while OPSNET measures delays that occur while an aircraft is under FAA’s control. The difference in how delays are measured in these data sets will result in some flights being considered delayed in one database but not in another, and vice versa. For example, a delay relative to an airline’s schedule can occur if a flight crew is late, causing the flight to leave the gate 15 minutes or more behind schedule, which would be reported as a delay in the ASQP and ASPM databases. If that flight, once under FAA control, faces no delay in the expected time it should take taxiing to the runway and lifting off as well as traveling to the destination airport, it would not be reported as a delayed flight in OPSNET, even if it reaches the gate at the destination airport late, relative to its scheduled arrival time. Conversely, a flight could be ready to take off on time, suffering no departure delay in pushing back from the gate. However, if once under FAA control, the flight is held on the ground at the departure airport for more than 15 minutes because of an FAA facility instituting a traffic management initiative in response to weather conditions, increased traffic volume, or other conditions, it will be recorded as experiencing an OPSNET delay—even if, relative to the airline’s schedule, it is actually able to reach the gate at the destination airport within 15 minutes of its scheduled arrival time. (A simplified illustration of these two measurement approaches appears at the end of this section.) The percentage of delayed arrivals has decreased systemwide since 2007, according to ASQP data. As shown in figure 2, in 2009, about 21 percent of flights were delayed systemwide—that is, arrived at least 15 minutes late at their destination or were canceled or diverted—representing a decrease of 6 percentage points from 2007. Arrival delay times have also decreased systemwide since 2007 (fig. 2). Average delay times for delayed arrivals decreased by about 2 minutes—from 56 minutes in 2007 to 54 minutes in 2009. However, there was a 1-minute increase in average arrival delay time from 2007 to 2008, likely because of the slight increase in the percentage of arrivals delayed 3 hours or more from 2007 to 2008. As figure 3 shows, in 2009, about 41 percent of delayed arrivals had delays of less than 30 minutes. Also, the percentage of arrivals delayed more than 30 minutes decreased from 2007 through 2009. In addition to the decrease in arrivals delayed more than 30 minutes, the number of flights experiencing tarmac delays of over 3 hours also decreased—from 1,654 flights in 2007 (0.02 percent of total flights) to 903 flights in 2009 (0.01 percent of total flights). As of April 29, 2010, DOT requires airlines to, among other things, adopt contingency plans for tarmac delays of more than 3 hours that must include, at a minimum, making reasonable accommodations (e.g., offering food, water, or medical services) during such delays. Failure to comply will be considered an unfair or deceptive practice and may subject the airline to enforcement action and a fine of up to $27,500 per violation.
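To make the measurement differences described above concrete, the following minimal sketch (in Python, with illustrative field names and values; this is not how DOT or FAA actually compute these measures) classifies a single flight under the schedule-based and FAA-control-based definitions:

    # Hypothetical sketch of the two delay definitions; field names are illustrative.
    DELAY_THRESHOLD_MIN = 15

    def asqp_aspm_delayed(scheduled_arrival, actual_arrival):
        # ASQP/ASPM: measured against the airline's schedule (late by 15+ minutes).
        return actual_arrival - scheduled_arrival >= DELAY_THRESHOLD_MIN

    def opsnet_delayed(minutes_held_under_faa_control):
        # OPSNET: counted only while the flight is under FAA control, e.g., held
        # on the ground by a traffic management initiative for more than 15 minutes.
        return minutes_held_under_faa_control > DELAY_THRESHOLD_MIN

    # A flight with a late crew leaves the gate 20 minutes behind schedule but is
    # never held by FAA: delayed in ASQP/ASPM, not in OPSNET.
    print(asqp_aspm_delayed(600, 620))  # True
    print(opsnet_delayed(0))            # False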
See appendix II for trends in long tarmac delays from 2000 through 2009. The percentage of delayed arrivals also decreased across almost all of the 34 OEP airports since 2007, according to ASPM data, although the declines varied by airport. As shown in figure 4, such decreases ranged from about 3 percentage points to 12 percentage points. For example, New York’s LaGuardia (LaGuardia) and John F. Kennedy International (JFK) airports registered decreases of about 10 percentage points—to 28 percent and 26 percent in 2009, respectively. Arrival delays at Newark Liberty International (Newark) decreased about 5 percentage points, to about 32 percent in 2009. An increase in delayed arrivals at Atlanta Hartsfield International (Atlanta) occurred between 2008 and 2009, primarily driven by an increase in the number of scheduled flights and the extent of the peaks in scheduled flights throughout the day. Although Atlanta experienced a 0.6 percentage point decrease in the percentage of delayed arrivals from 2007 to 2008, the percentage of delayed arrivals increased 2.5 percentage points from 2008 through 2009—to about 27 percent. According to FAA analysis, the average number of scheduled flights exceeded the airport’s average called rate—that is, the number of aircraft that an airport can accommodate in a quarter hour given airport conditions—for more periods in March 2009 than in March 2008, demonstrating how changes in the airlines’ schedules likely contributed to Atlanta’s increased delays. The decline in flights since 2007, resulting from a downturn in passenger demand and airline cuts in capacity, has likely been the largest contributor to the decrease in delayed arrivals. FAA, airport, and airline officials that we spoke with attributed the majority of improvements in delays to the systemwide reduction in the number of flights. As shown in figure 5, trends in the percentage of delayed arrivals appear to generally track with trends in the number of arrivals. For example, when the number of total arrivals in the system decreased 7 percent from 2000 through 2002, the percentage of delayed arrivals decreased systemwide by 7 percentage points, according to DOT data. To corroborate FAA and stakeholder views on the relationship between the recent reductions in flights and declines in delays, we performed a correlation analysis between the number of total arrivals and delayed arrivals. This analysis found a significant correlation between these two factors, supporting stakeholders’ views that the decrease in flights from 2007 through 2009 is likely a significant driver of the decrease in delays. Recent runway improvements also helped reduce delays at some airports. As shown in table 2, from 2007 through 2009, new runways opened at Chicago O’Hare International (Chicago O’Hare), Seattle-Tacoma International (Seattle), and Washington Dulles International (Washington Dulles), and a runway extension opened at Philadelphia International (Philadelphia). According to project estimates, the new runway projects are expected to provide these airports with the potential to accommodate over 320,000 additional flights annually and decrease the average delay time per operation by about 1 to 3.5 minutes at these airports. For example, since 2007, Chicago O’Hare has seen the largest decrease in the percentage of arrivals delayed among the 34 OEP airports, according to FAA data, and some of this improvement is likely because of the new runway.
In examining Chicago O’Hare’s called rates, we found that after Chicago O’Hare’s new runway opened in the summer of 2009, the airport had the potential to accommodate, on average, about 9 percent more flights than it had been able to handle in the summer of 2008. According to FAA officials, the new runway allowed Chicago O’Hare to accommodate an additional 10 to 16 arrivals per hour because of additional options with respect to its runway configuration. More importantly, this increased capacity helps reduce delays the most when an airport is constrained because of, for example, weather or runway construction. For example, Chicago O’Hare’s new runway allows it to accommodate 84 arrivals per hour during poor weather, whereas prior to the new runway, it could accommodate only 68 to 72 arrivals in such weather. This increased capacity results in fewer delayed flights during bad weather. (A rough worked example of these figures appears at the end of this section.) However, not all of the reduction in delayed arrivals can be attributed to the new runways because another key factor—the decline in the number of flights—also helped to reduce delays. According to FAA officials, FAA does not analyze the extent to which the estimated delay benefits are realized once a runway is opened because delay reduction is expected. They also noted that measuring the benefits of these projects is difficult because a myriad of factors, such as the installation of new technologies or procedures or changes in airline schedules, may also affect the number of flights and delays at an airport, making it difficult to isolate the benefits of the new runway. More notably, the recent drop in the number of flights was outside the bounds of FAA’s analysis of these projects’ delay estimates, making it difficult to determine the actual realized benefits. Nevertheless, without measuring the actual benefits against estimated benefits, FAA cannot verify the accuracy of its analysis or modeling for future runway projects. The extent to which FAA’s operational and policy actions contributed to reduced delays since 2007 is unclear, although they likely resulted in some limited delay reduction benefits. In 2007, the DOT-convened New York Aviation Rulemaking Committee (New York ARC) developed a list of operational improvements targeted at the three New York area airports—Newark, JFK, and LaGuardia. To avoid a repeat of 2007 delays, FAA also instituted hourly limits on the number of scheduled flights at these airports. As we reported in July 2008, the collective benefit of DOT’s and FAA’s actions was expected to be limited. FAA’s hourly schedule limits at Newark, JFK, and LaGuardia likely contributed to some delay reduction benefits beginning in 2008 by reducing the level of peak operations and spreading flights throughout the day. During the summer of 2008, each of these airports experienced an increase in the number of arrivals and a decrease in the percentage of arrivals delayed. For example, the number of arrivals at JFK increased by 2 percent from the summer of 2007 through the summer of 2008, while arrival delays decreased by about 5 percentage points. The effect of these limits in 2009 was likely less pronounced because these three airports experienced fewer flights as a result of the economic downturn. However, without these limits, the number of flights and delays might have increased in 2008, given that airlines proposed to increase their schedules by 19 percent over 2007 levels.
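As a rough cross-check of the O’Hare called-rate figures cited at the beginning of this section (a simplification; actual called rates vary by runway configuration and quarter hour):

    # Back-of-the-envelope arithmetic on the poor-weather arrival rates cited above.
    before = (68 + 72) / 2      # approximate pre-runway poor-weather arrivals/hour
    after = 84                  # post-runway poor-weather arrivals/hour
    gain = (after - before) / before
    print(f"Poor-weather arrival capacity gain: {gain:.0%}")  # roughly 20%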
See appendix V for more information on how the limits were set and FAA’s analysis of the effect of the limits at the three New York area airports for 2007, 2008, and 2009. According to FAA, as of March 2010, 36 of the 77 operational and procedural initiatives identified by the New York ARC have been “completed,” meaning that these procedures are in place and available for use. However, as we reported in our July 2008 testimony, operational and procedural initiatives are designed to be used only in certain situations. Furthermore, although some of the procedures are available for use, they are not currently being used by the airlines, because some of the procedures were designed to reduce delays when the airports were handling more flights and experiencing higher levels of delay. For example, airlines have opted not to use one procedure that involves routing aircraft around the New York airports, which lengthens the route and could increase the airlines’ fuel and crew costs. According to FAA officials, airlines have opted not to use this procedure not only because of these additional costs, but also because delays are down with the current reduction in flights, making it unnecessary. FAA has also implemented various systemwide actions that may have had some effect in reducing delays. For example, in 2007, FAA implemented the adaptive compression tool, which identifies arrival slots that would otherwise go unused because of FAA’s traffic management initiatives, such as initiatives that delay aircraft on the ground, and shifts other flights into these unused slots (a simplified sketch of this logic appears at the end of this section). FAA estimated that this tool reduced delays and saved airlines $27 million in 2007. See appendix VI for additional information on DOT’s and FAA’s actions to reduce delays at locations across the national airspace system. Despite fewer delayed flights since 2007, some airports still experienced substantial delays in 2009, according to FAA’s ASPM data. For example, five airports—Newark, LaGuardia, Atlanta, JFK, and San Francisco—had at least a quarter of their arrivals delayed in 2009 (fig. 6). In addition, these delayed arrivals had average delay times of almost an hour or more. Excluding the 10 airports with the highest percentage of delayed flights, the remaining OEP airports had fewer than one in five arrivals delayed, with average delay times of about 53 minutes. The 10 airports with the highest percentage of delayed flights generally had more delays associated with the national aviation system than other OEP airports, according to ASQP data. For example, over 70 percent of Newark’s delays were reported as national aviation system delays, which refer to a broad set of circumstances, including airport operations, heavy traffic volume, air traffic control, and nonextreme weather conditions such as wind or fog (fig. 7). In addition, these 10 airports accounted for about half of all the reported national airspace system delays for the 34 OEP airports in 2009, according to DOT data. See appendix IV for airline-reported sources of delay for delayed and canceled flights for the 34 OEP airports. The high percentage of national aviation system delays at these airports likely reflects that these airports are more sensitive to changes in airport capacity because they frequently operate near or exceed their available capacity.
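The slot-shifting logic of the adaptive compression tool described above can be sketched as follows (a minimal Python illustration of the concept only; the slot times and flight identifiers are made up, and this is not FAA’s actual algorithm or data model):

    # Minimal sketch of slot compression; slot times and flight IDs are made up.
    def compress(slots, waiting_flights):
        """Move waiting flights into arrival slots left unused because their
        original holders are delayed by a traffic management initiative."""
        queue = list(waiting_flights)
        assignments = {}
        for slot_time in sorted(slots):
            if slots[slot_time] is None and queue:  # slot would otherwise go unused
                assignments[slot_time] = queue.pop(0)
        return assignments

    # The 10:00 and 10:04 slots were vacated by ground-held flights.
    slots = {"10:00": None, "10:02": "UA123", "10:04": None}
    print(compress(slots, ["DL456", "AA789"]))  # {'10:00': 'DL456', '10:04': 'AA789'}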
As an example of airports operating near capacity, the DOT Inspector General reported that at Newark, LaGuardia, JFK, and Philadelphia, airlines scheduled flights above the average capacity in optimal conditions at these airports in the summer of 2007. In further examining the relationship between the level of delay and how scheduled flights compare with an airport’s available capacity, we selected the 4 airports with the highest percentage of delayed flights—Newark, LaGuardia, JFK, and Atlanta—along with 2 airports that are among the 34 OEP airports with the lowest percentage of delayed flights—Chicago Midway and Lambert-St. Louis International (St. Louis)—and analyzed data on the number of scheduled flights and available capacity at these 6 airports. We found that all 4 of the delay-prone airports had flights scheduled above the airports’ capacity levels for at least 4 hours of the day, while the 2 airports with lower levels of delay never had the number of scheduled flights exceeding capacity. Operating close to capacity becomes especially problematic when weather conditions temporarily diminish the capacity at an airport. In particular, while flights to and from an airport operating close to or exceeding capacity might become very delayed in inclement weather conditions, flights to and from another airport that has unused capacity may not be delayed by a similar weather event. While the flight delay data from the DOT and FAA data sources previously discussed serve as the primary source of air travel information for consumers, OPSNET helps the agency understand which FAA facilities are experiencing delays, why the delays are occurring (e.g., weather or heavy traffic volume), and, uniquely, which facilities are the source of that delay. Unlike the other databases, which measure delays against airline schedules, the OPSNET database collects data on delays that occur solely while flights are under FAA control. For example, a flight would be recorded as delayed in OPSNET if it is held on the ground at the departure airport for more than 15 minutes because of an FAA facility instituting a traffic management initiative in response to weather conditions, increased traffic volume, or other circumstances. FAA measures delays within the air traffic control system to assess its performance because an inefficient air traffic control system contributes to higher levels of delayed flights. As figure 8 shows, many of the delay-prone airports that we identified earlier in the report based on our analysis of arrival delays also experience the most departure delays, according to OPSNET. In OPSNET terminology, these delays are called occurred-at delays because they represent delays that happened at the given airport. In addition to capturing where the delay occurred (as shown above), OPSNET provides information on what facility the delay was attributed to—that is, which facility instituted a traffic management initiative that resulted in flights being delayed. If, for example, a flight departing Atlanta was delayed because of weather problems in Atlanta, Atlanta would be recorded as both the occurred-at facility and the attributed-to facility in OPSNET. However, if fog in San Francisco delays a flight leaving Minneapolis bound for San Francisco, Minneapolis is the occurred-at facility, but San Francisco is the attributed-to facility.
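The occurred-at versus attributed-to distinction can be illustrated with a small tally (hypothetical records mirroring the Atlanta and Minneapolis/San Francisco examples above; this is not OPSNET’s actual schema):

    # Illustrative tally of occurred-at vs. attributed-to delay counts.
    from collections import Counter

    delays = [
        {"occurred_at": "ATL", "attributed_to": "ATL"},  # weather at Atlanta
        {"occurred_at": "MSP", "attributed_to": "SFO"},  # fog at San Francisco
        {"occurred_at": "ORD", "attributed_to": "SFO"},  # same initiative, felt elsewhere
    ]

    occurred = Counter(d["occurred_at"] for d in delays)      # where delays were experienced
    attributed = Counter(d["attributed_to"] for d in delays)  # which facility caused them
    print(occurred)    # Counter({'ATL': 1, 'MSP': 1, 'ORD': 1})
    print(attributed)  # Counter({'SFO': 2, 'ATL': 1})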
This concept of assigning attribution for delays is different from the notion of “propagated delay,” in which a delayed flight early in the day may cause delays to flights later in the day because of a late-arriving aircraft or crew. Instead, delay that is attributed to a facility in OPSNET relates only to a given flight segment and is associated with the airport or other air traffic control facility that had a traffic management initiative in place that held flights at a particular location. As figure 9 shows, almost half—49 percent—of all departure delays occurring at the 34 OEP airports were attributed to just 3 airports—Atlanta, Newark, and LaGuardia—according to our analysis of OPSNET. However, these 3 airports accounted for only 13 percent of departures among these 34 airports in 2009. In addition, 7 airports and their associated TRACONs were the source of approximately 80 percent of all departure delays captured in OPSNET in 2009 (see fig. 10). Figure 10 also shows that in the case of the combined New York airports as well as for 3 of the 4 remaining airports (the exception is Atlanta), a majority of the departure delays that were attributed to these airports actually occurred at—or were experienced at—other airports. For example, Philadelphia was the source of over 26,000 delayed departures throughout the national airspace system in 2009, but fewer than 7,500 of these delays (or 28.2 percent) occurred at Philadelphia. Further analysis (see pie chart in fig. 10) shows that of all the departure delays among the 34 OEP airports that occurred at an airport other than the airport that generated the delay, 83 percent were attributed to these 7 airports. FAA has identified these same 7 airports as among the most delayed airports in the system in need of further monitoring for possible changes in airline schedules and potential delays—a process that we discuss later in this report. FAA’s actions have the potential to reduce delays in the next 2 to 3 years and are generally being implemented at airports that experience and contribute substantial delays to the system, including the 7 airports that are the source of a majority of the delays in the system (Newark, LaGuardia, Atlanta, JFK, Philadelphia, Chicago O’Hare, and San Francisco). While FAA’s long-term solution to expanding capacity and reducing delays is NextGen improvements that will not be fully implemented until 2025, we used FAA’s Flight Plan and NextGen Implementation Plan to identify several actions that are slated to be implemented in the next 2 to 3 years, have the potential to help meet short-term capacity needs, and improve the operational performance of the U.S. aviation system. These actions include implementing near-term elements of NextGen, constructing runways, implementing a new airspace structure for the airports serving the New York/New Jersey/Philadelphia metropolitan area, and revising air traffic control procedures. More detailed information on the actions and their locations can be found in appendix VI. According to FAA, many of these actions are intended not only to reduce delays but, just as importantly, to improve safety, increase capacity, and reduce fuel burn. Many of the actions for reducing delays over the next 2 to 3 years are being implemented at some of the most congested airports in the system.
For example, actions that FAA has in place or planned for the New York area airports—such as the New York ARC initiatives, the New York/New Jersey/Philadelphia airspace redesign, and hourly schedule limits—are being implemented to help address widespread delays at the congested New York airports. The remaining ARC initiatives and other actions to reduce delays at the New York airports were recently incorporated into the New York Area Delay Reduction Plan, which FAA expects to update monthly. The agency continues to maintain the schedule limits, which were designed to limit airline overscheduling and keep delays in the New York area below the levels experienced in summer 2007. Additionally, FAA issued an order in January 2009 outlining its plans to reduce the number of hourly scheduled flights at LaGuardia from 75 to 71 through voluntary reductions and retirements of slots by the airlines. FAA has also continued to implement various air traffic management improvements and begun implementation of NextGen procedures and technologies, many of which are expected to be implemented at the most congested airports. The RTCA NextGen Mid-Term Implementation Task Force recommended that FAA target key airports when implementing NextGen capabilities between now and 2018. FAA used these recommendations to help develop its 2010 NextGen Implementation Plan, which includes actions to be implemented in the next 2 to 3 years, including additional Area Navigation (RNAV) and Required Navigation Performance (RNP) procedures, often called performance-based navigation procedures. In response to the RTCA recommendations, FAA plans to focus on increasing the use of performance-based navigation at some of the key airports identified by the task force. According to FAA air traffic officials, an automated metering tool used to help manage arriving aircraft—Traffic Management Advisor (TMA)—has contributed to more efficient departure and arrival performance at several OEP airports, including Atlanta and Newark. To help reduce delays at San Francisco and other busy airports, FAA has also tested tailored arrival procedures, which allow the pilot to fly the most efficient descent into the arrival airport. Over the next 2 to 3 years, Chicago O’Hare, JFK, Charlotte/Douglas International (Charlotte), and Portland International (Portland) will continue to pursue infrastructure projects to increase the capacity of their airports and surrounding airspace. Chicago O’Hare—one of the airports that contributes substantial delays to the national airspace system—is scheduled to open another new runway in 2012 that is expected to provide the airport with the potential to accommodate as many as 30,900 additional flights annually. At Charlotte, a new runway opened in February 2010 that has the potential to accommodate as many as 80,000 additional flights annually. Later this year, Portland is expected to complete a runway extension, although benefits for this project have not been estimated. Airport infrastructure projects such as these will help reduce delays at these airports and should also help decrease delays elsewhere in the system. Many delay reduction actions face implementation challenges that may limit their ability to reduce delays in the next 2 to 3 years. For example, according to officials, one challenge FAA faces in implementing the remaining New York ARC initiatives is that airlines do not have a current need for or interest in using some of the procedures because of recent declines in air traffic.
Implementation may be difficult for other air traffic management tools—such as TMA—because, according to the DOT Inspector General, they represent a significant change in how air traffic controllers manage traffic. Effective training will be required to ensure air traffic managers and controllers become familiar with and gain confidence in newly automated functions. However, TMA has been deployed and is currently being used at many airports, including Newark, LaGuardia, and JFK. Some airline officials noted that TMA implementation has been beneficial, but there have been some implementation challenges because of the transition to an automated system. While introducing new RNAV and RNP procedures could help reduce delays in the next 2 to 3 years, as we have previously reported, developing these procedures in a timely manner is a challenge. In the New York area, for example, some of these procedures cannot be implemented until the New York/New Jersey/Philadelphia airspace redesign is completed, which is currently behind schedule. FAA did not fully account for future use of new technology such as RNAV in its analysis, so the New York/New Jersey/Philadelphia airspace redesign has to be completed in order to implement new performance-based navigation procedures in the study area. In addition, most procedures that FAA has implemented are overlays of existing routes rather than new procedures that allow more direct flights. Overlays can be deployed more quickly and do not involve an extensive environmental review, but they do not maximize the delay reduction benefits of RNAV and RNP. FAA’s goals for RNAV and RNP focus on the number of procedures produced, not whether they are new routes or the extent to which they provide benefits or are used. For example, FAA believes that it can annually develop about 50 RNAV and RNP procedures, 50 RNAV routes, and 50 RNP approaches. Given that FAA plans to implement a total of 2,000 to 4,000 RNAV and RNP arrival and departure procedures alone, it is clear that only a limited number of new procedures—which could provide delay reduction benefits—will be implemented in the next 2 to 3 years. Implementation of NextGen also faces several challenges, including operating in a mixed equipage environment, addressing environmental issues, and changing FAA’s culture. For example, it is difficult for air traffic controllers to manage aircraft equipped with varying NextGen capabilities, particularly in busy areas, because controllers would have to use different procedures depending on the level of equipage. It is also difficult for FAA to complete all the required environmental reviews quickly because any time an airspace redesign or new procedure changes the noise footprint around an airport, an environmental review is initiated under the National Environmental Policy Act (NEPA). FAA also faces cultural and organizational challenges in integrating and coordinating activities across multiple lines of business. Sustaining a high level of involvement and collaboration with stakeholders—including operators, air traffic controllers, and others—will also be necessary to ensure progress. More recently, software and other technical issues experienced at test sites have delayed systemwide implementation of core NextGen functionality. 
FAA has various tools for measuring and analyzing how its actions might reduce delays, including establishing an on-time performance target, estimating delay reduction benefits for NextGen and some individual initiatives, and regularly monitoring system performance across the national airspace system and at individual airports. FAA measures improvements in delays through its NAS on-time performance target: FAA established an 88 percent national airspace system (NAS) on-time arrival performance target to measure how its actions help meet its Flight Plan goal of increasing the reliability and on-time performance of the airlines. According to FAA, this performance target provides information on FAA’s ability to provide air traffic control services to the airlines and is set based on 3 years of historical trending data. Because DOT’s ASQP data are used for this target and contain flight delays caused by incidents outside FAA’s control—such as extreme weather or carrier-caused delay—FAA removes delays not attributable to the agency to provide a more accurate method of measuring FAA’s performance. Even with these modifications to the data, FAA notes that the actual measure can still be influenced by factors such as airline schedules or nonextreme weather (a simplified illustration of this adjustment appears at the end of this section). FAA analyzes the delay reduction benefits of some actions: FAA has modeled and estimated total delay reduction benefits from NextGen. In addition to benefits from safety, fuel savings, and increased capacity, FAA estimates that, in aggregate, planned NextGen technologies—including the New York/New Jersey/Philadelphia airspace redesign and RNAV and RNP routes—and planned runway improvements will reduce delays by about 21 percent by 2019 as measured against doing nothing at all (fig. 11). In particular, given the estimated growth in traffic, FAA estimates that NextGen and other planned efforts will keep delays from growing as fast as they would without them, but delays are still expected to grow from today’s levels. According to FAA’s model simulations, total delay minutes are predicted to double from current levels, even when assuming all planned NextGen and other runway improvements occur. At the airport level, FAA provided us with additional results from its simulations that suggest that, even after taking into consideration the benefits of new runways and NextGen technologies, flights at several airports may experience higher average delays per flight in 2020 than experienced today. FAA has also analyzed delay reduction benefits for elements of some major projects and individual actions, though we did not verify or evaluate these analyses or estimates. For example, postimplementation analysis for the first phase of the New York/New Jersey/Philadelphia airspace redesign showed that both Newark and Philadelphia airports experienced increases in the number of departures during times when the new departure headings were used, resulting in an estimated decrease of almost 1 minute of taxi time and a 2.5 percent decrease in the time between when the aircraft pushes back from the gate to when it takes off from the airport—which is referred to as “out to off time”—during the morning departure push at Newark. FAA also assessed capacity and delay reduction benefits for some air traffic management improvements. For example, FAA estimates that the implementation of TMA improved FAA’s ability to manage aircraft, resulting in capacity increases of 3 to 5 percent.
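Returning to the NAS on-time measure described earlier in this section, the following simplified sketch illustrates the adjustment; the specific exclusion rule and cause categories are our illustrative assumptions, not FAA’s actual methodology:

    # Sketch: treat flights delayed solely by causes outside FAA's control as
    # on time for the purpose of the NAS measure. Categories are illustrative.
    NON_FAA_CAUSES = {"extreme_weather", "carrier"}

    def nas_on_time_rate(flights):
        on_time = sum(1 for f in flights
                      if not f["delayed"] or f["cause"] in NON_FAA_CAUSES)
        return on_time / len(flights)

    flights = [
        {"delayed": False, "cause": None},
        {"delayed": True,  "cause": "nas"},             # counts against the target
        {"delayed": True,  "cause": "extreme_weather"}, # excluded from the measure
    ]
    print(f"{nas_on_time_rate(flights):.0%}")  # 67%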
As part of the review process for the New York ARC initiatives, FAA officials selected some of the ongoing and completed initiatives for further analysis based on their potential to reduce delays. For example, FAA conducted a study of the simultaneous instrument approaches at JFK that showed an increase in arrival capacity of 12 flights per hour. According to FAA officials, it is difficult to isolate the overall benefit of an individual initiative given the complexity of assessing all the actions in place and all of the factors affecting the system at any given time. FAA monitors system performance: FAA also monitors airport and system delays using tools, such as targeted analysis and performance dashboards, that track operational performance on a daily basis. This routine monitoring allows officials to try to assess how a given event may have affected performance. FAA officials recently added data to the agency’s dashboards to enable users to compare current performance with that for previous days, months, or years and to provide additional insight on performance trends. Also, FAA recently began to implement a process for monitoring airport performance. In response to peak summer delays in 2007, FAA officials began using airline schedules to estimate delay trends at the OEP airports and identify airports that may experience significant delays in the next 6 to 12 months. If an airport is expected to experience significant delays—that is, aircraft waiting to depart for more than 5 minutes—FAA would then evaluate whether a congestion action team should be formed to develop actions in response to these potential delays. However, because of the recent decline in the number of flights systemwide, FAA has yet to take any new actions based on this monitoring. Although FAA’s target of 88 percent on-time arrival performance provides a measure of the agency’s overall goal to provide efficient air traffic control services, it masks the wide variation in airport performance, making it difficult to understand how individual airport performance relates to the overall target. For example, in fiscal year 2009, Newark had an on-time arrival rate of only 72 percent, while St. Louis easily exceeded the target with 95 percent on-time performance. Despite this variability in performance, FAA has not established airport-specific targets for on-time performance. FAA officials noted that they are trying to develop airport-specific on-time performance targets, but efforts in developing these targets are in the very early stages, and they do not currently have plans to make these targets publicly available or hold FAA officials at the local airport or national level accountable for achieving these targets. The absence of performance targets for individual airports hinders FAA, aviation stakeholders, and the public from understanding a desired level of on-time performance for individual airports and results in FAA lacking a performance standard by which it can prioritize and demonstrate how its actions reduce delays at the most congested airports and throughout the system. For example, as previously noted, FAA’s implementation of new departure headings resulted in performance improvements at Philadelphia and Newark, according to the MITRE analysis.
Yet those improvements lack a performance standard against which FAA might prioritize its actions and determine whether the improvements helped meet or exceed, or still fall short of, the targeted level of performance for these airports or how they affected the overall on-time performance goal. For example, reducing delays at the airports that currently impose approximately 80 percent of all departure delays within the air traffic control system could not only have a measurable benefit at these airports, but could also improve the performance of the overall national airspace system. Furthermore, although FAA’s analyses of delay reduction benefits demonstrate improvements at various airports, it remains unclear whether further actions are required to achieve a targeted level of performance at these airports, since targeted levels of airport performance have not been established. As part of its NextGen Mid-Term Implementation Task Force recommendations, RTCA is encouraging FAA to move away from traditional national deployments of new technologies to an airport-centric approach that deploys solutions at key airports and for large metropolitan areas where problems with congestion and delay are most acute. Airport-specific performance targets could help in measuring the extent to which FAA’s airport-focused actions are helping to improve performance or whether additional actions are needed to address delays at the most congested airports. Moreover, although NextGen will keep delays at many airports from getting worse than would be expected without NextGen, FAA’s NextGen modeling indicates that even if all ongoing and planned NextGen technologies are implemented, a few airports, such as Atlanta, Washington Dulles, and possibly Philadelphia, may not be able to meet the projected increases in demand, and if market forces do not dampen that demand, additional actions may be required at these airports. However, without airport-specific targets, FAA cannot determine what additional actions might be required to achieve a targeted level of performance at these airports. Over the next 2 to 3 years, FAA has numerous actions planned or under way that are expected to increase capacity and improve the performance of the overall aviation system. Although these actions may reduce delays and help FAA achieve its overall on-time performance goal, FAA’s ability to prioritize these actions and communicate their benefits is inhibited by the absence of individual airport on-time performance targets. Identifying performance targets for individual airports and how these targets relate to the overall agency goal will provide a standard by which FAA can measure and prioritize its actions to reduce delays at these airports and overall. This is particularly important in understanding the extent to which FAA’s actions are addressing delays at the 7 airports—Newark, LaGuardia, Atlanta, JFK, San Francisco, Chicago O’Hare, and Philadelphia—that are currently responsible for about 80 percent of the delays across the air traffic control system. Although airport-specific on-time performance targets should not be the only measure of FAA’s performance in reducing delays in the system, by setting these targets, FAA may be motivated to better focus its actions at these airports, resulting in reduced delays not only at these airports but also at other airports in the national airspace system.
Airport-specific goals would also help FAA better communicate how actions at the airport and national levels contribute to the agency’s overall goals, improve airport performance, and demonstrate how its actions are affecting delays. Additionally, even with NextGen, delays at some of the most congested airports are expected to continue and could get worse, requiring FAA to consider additional policy actions to maintain airport performance. Airport-specific goals could help FAA identify and communicate what additional actions might be required to achieve a targeted level of performance at these airports. We recommend that the Secretary of Transportation direct the Administrator of FAA to develop and make public airport-specific on-time performance targets, particularly for the most congested airports that impose delays throughout the air traffic control system, to better prioritize FAA’s actions to reduce delays and demonstrate benefits of those actions. We provided a draft of this report to DOT for its review and comment. DOT and FAA officials provided technical comments that we incorporated as appropriate. In addition, in e-mailed comments, an FAA official reiterated that the agency has been working to develop and implement airport-specific performance targets, but that this process remains ongoing given the complex nature of compiling historical data and airport-specific performance information to create baseline targets. The official also noted that airport-specific on-time performance targets are one of the many tools that FAA can use to manage and measure delays at the airport level and systemwide and that the agency continues to identify ways to improve how it measures performance. For example, FAA plans to use new radar and airport surface detection data to help refine its causal delay data. While we agree that these measures could help FAA further understand delays, we continue to believe that airport-specific on-time performance targets could not only help FAA demonstrate how its actions are affecting delays at individual airports and throughout the national airspace system, but also help FAA, aviation stakeholders, and the public understand the desired level of airport performance. Furthermore, establishing airport-specific targets in addition to the agency’s overall on-time performance target would help FAA focus its actions on those airports where improvements could result in the greatest impact and communicate to stakeholders how its actions relate to its goals. We are sending copies of this report to the Secretary of Transportation and the Administrator of the Federal Aviation Administration. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-2834 or flemings@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VII. In this report, we examined the extent to which (1) delays in the U.S. national aviation system have changed since 2007 and the factors contributing to these changes, and (2) actions by the Department of Transportation (DOT) and the Federal Aviation Administration (FAA) are expected to reduce delays in the next 2 to 3 years. To determine how delays have changed, we analyzed DOT and FAA data on U.S. passenger airline flight delays by airport and for the entire aviation system through 2009.
Using DOT’s Airline Service Quality Performance (ASQP) data, we analyzed systemwide trends in flight delays, including cancellations, diversions, long tarmac delays, and average delay minutes, for calendar years 2000 through 2009. Using FAA’s Aviation System Performance Metrics (ASPM) data, we analyzed airport-specific trends in the number of total flights, delayed flights, and delay time for 34 of the 35 airports in FAA’s Operational Evolution Partnership (OEP) program for calendar years 2007 through 2009. We focused on these 34 OEP airports because they serve major metropolitan areas located in the continental United States and handled more than 70 percent of passengers in the system in 2008; additionally, much of the current delays in air traffic can be traced to inadequate capacity relative to demand at these airports, according to FAA. We also analyzed DOT’s ASQP data on airline-reported sources of delayed and canceled flights for these 34 airports for calendar year 2009. To assess the extent to which these 34 airports experienced and contributed delays to the aviation system, we analyzed calendar year 2009 data from FAA’s Operations Network (OPSNET), which measures departure delays, airborne delays, and delays resulting from traffic management initiatives taken by FAA in response to weather conditions, increased traffic volume, runway conditions, equipment outages, and other affecting conditions. Our analysis included data from the OEP airports (excluding Honolulu) and their associated terminal radar approach control facilities (TRACON). Since 16 location identifiers are used for a combination of airports and TRACONs, resulting in combined data, we worked with FAA to determine how to identify the number of departures and departure delays to attribute to each individual airport and TRACON in our universe. To separate out these data, we examined the different categories of OPSNET delays: departure delays (flights incurring a delay at the origin airport prior to departure), airborne delays (flights held en route), and two categories of traffic management delays—delays occurring at one facility resulting from a traffic management initiative instituted by another facility (“traffic management from” delays) and delays charged to the facility instituting the traffic management initiative, which may occur at another facility in the system (“traffic management to” delays). Since TRACONs handle airborne flights only and airports handle flights preparing for takeoff or landing, we allocated all airborne delays to the TRACONs and all departure and traffic management from delays to the airport for these combined facilities. In separating out the traffic management to delays, we allocated all of these delays to the OEP airport, unless the delay occurred at another airport associated with that TRACON—in which case, we allocated those delays to the TRACON. Our analysis focused on departures, departure delays, and both categories of traffic management delays because the majority of delays recorded in OPSNET occur before an aircraft takes off from an airport and therefore are captured in these delay categories. Once we separated the delay for each air traffic control tower and TRACON, we calculated the following measures for the facilities in our universe: the number of departures at a facility as a percentage of the total; percentage of delayed departures occurring at each facility; and percentage of delayed departures charged, or attributed to each facility and where that delay occurred. 
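The allocation rules described above for combined airport/TRACON location identifiers can be summarized in a short sketch (hypothetical record layout and facility codes; illustrative only):

    # Sketch of the airport/TRACON allocation rules described above.
    def allocate(record, oep_airport, tracon_airports):
        kind = record["category"]  # 'airborne', 'departure', 'tm_from', or 'tm_to'
        if kind == "airborne":
            return "TRACON"        # TRACONs handle airborne flights only
        if kind in ("departure", "tm_from"):
            return "airport"       # airports handle flights taking off or landing
        if kind == "tm_to":
            # Charged to the OEP airport unless the delay occurred at another
            # airport associated with the same TRACON.
            occurred = record["occurred_at"]
            if occurred != oep_airport and occurred in tracon_airports:
                return "TRACON"
            return "airport"

    # A traffic-management-to delay that occurred at a satellite airport (HPN)
    # associated with LaGuardia's TRACON is allocated to the TRACON.
    print(allocate({"category": "tm_to", "occurred_at": "HPN"},
                   "LGA", {"LGA", "HPN", "TEB"}))  # TRACON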
Our analysis of OPSNET includes only calendar year 2009 because in recent years, FAA has made changes in how data are collected for OPSNET, including automating the collection of its data in fiscal year 2008 and capturing additional delay categories in fiscal year 2009, making it difficult to do year-over-year comparisons of these data. To assess the reliability of ASQP, ASPM, and OPSNET data, we (1) reviewed existing documentation related to the data sources, (2) electronically tested the data to identify obvious problems with completeness or accuracy, and (3) interviewed knowledgeable agency officials about the data. We determined that the data were sufficiently reliable for the purposes of this report. To determine the factors affecting changes in flight delays since 2007, we reviewed relevant FAA reports; interviewed DOT, FAA, airport, and airline officials and industry experts; and examined estimated delay reduction benefits of actions, when available. To understand the relationship between the number of flights and delays, we performed a simple correlation analysis between the number of monthly arrivals and delayed arrivals from calendar years 2000 through 2009 for the OEP airports (excluding Honolulu). See appendix III for additional information on this analysis. To determine the extent to which DOT’s and FAA’s actions reduced delays since 2007, we reviewed FAA analysis of estimated delay reduction benefits of its actions, including runway projects and other capacity improvements, and interviewed agency officials about these analyses. Additionally, using FAA data on Chicago O’Hare’s called rate (a measure of capacity reflecting the number of aircraft that an airport can accommodate within a 15-minute period), we determined the extent to which capacity had increased after the new runway was opened. To assess the effect of the hourly limits on scheduled arrivals and departures at LaGuardia, John F. Kennedy International (JFK), and Newark Liberty International airports, we examined analysis done by the MITRE Corporation on airline schedules before and after the schedule limits were established. See appendix V for more information on this analysis. To identify DOT’s and FAA’s ongoing and planned actions to reduce delays in the next 2 to 3 years, we analyzed key FAA documents, including the agency’s strategic plan (referred to as the Flight Plan), the NextGen Implementation Plan, FAA’s Response to Recommendations of the RTCA NextGen Mid-Term Implementation Task Force, and the New York Aviation Rulemaking Committee Report. In assessing these documents, we identified a set of capacity improvements and demand management policies with the potential to reduce delays by 2013. FAA has many ongoing and planned initiatives—such as longer-term Next Generation Air Transportation System (NextGen) procedures and technologies—that could also reduce delays, but these actions are not included in our discussion because they are not expected to realize delay reduction benefits in the next 2 to 3 years. These actions to reduce delays are available or planned at various OEP airports, but we did not assess the extent to which they are being used at a given location.
To determine the extent to which DOT and FAA actions are being implemented at the most congested airports, we reviewed related reports and studies, including FAA’s 2009 Performance and Accountability Report, the RTCA NextGen Mid-Term Implementation Task Force Report, and FAA’s Capacity Needs in the National Airspace System, 2007-2025 (FACT 2), and interviewed airport officials at some of these airports and FAA officials at both the national and local airport levels. To determine the status of DOT’s and FAA’s actions to reduce delays and their potential to reduce delays, we interviewed officials in FAA’s Air Traffic Organization; Office of Aviation Policy, Planning and Environment; and Office of Airport Planning and Programming, as well as local airport officials. To gain an understanding of aviation stakeholder perspectives on the expected impact of DOT’s and FAA’s actions in the next 2 to 3 years, we spoke with industry and academic experts, airport and airline officials, the DOT Inspector General, the Air Transport Association, the Airports Council International-North America, the National Air Traffic Controllers Association, the National Business Aviation Association, the Air Carrier Association of America, and the Regional Airline Association. To identify the extent to which FAA has modeled or assessed the delay reduction impact of its actions, including NextGen, we interviewed officials from MITRE, FAA’s Performance Analysis and Strategy Office, and FAA’s Air Traffic Organization NextGen offices. FAA officials also provided information based on model simulations that examine future benefits of NextGen technologies. In particular, we received analysis of expected delay minutes for the OEP airports in future years under various assumptions—a baseline scenario that estimates the delays that may occur if no improvements are made to the system; a runway scenario that estimates the delays that may occur if only runway improvements are made over the next 10 years, but no NextGen air traffic management improvements; and a NextGen scenario that estimates the delays that may occur if planned runway improvements and NextGen technologies and procedures are implemented. As part of the assumptions underlying these analyses, FAA also provided us with the extent to which future demand growth is “trimmed” under these scenarios as a means of limiting future traffic projections to reflect anticipated airport infrastructure constraints. While we reviewed some of FAA’s assumptions and analyses, we did not verify the accuracy of the models. To identify how FAA measures whether its actions contribute to changes in delays, we reviewed FAA’s Flight Plan and related documents to determine how FAA measures its performance in achieving its goal of increasing the reliability and on-time performance of the airlines. We also interviewed FAA officials about the agency’s performance targets and any planned improvements to these targets. Finally, we reviewed previous GAO reports, including our prior work on aviation infrastructure, NextGen, aviation congestion, and regional airport planning. We conducted this performance audit from May 2009 to May 2010, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives.
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. A tarmac delay occurs when a flight is away from the gate and delayed (1) during taxi-out: the time between when a flight departs the gate at the origin airport and when it lifts off from that airport (i.e., wheels-off); (2) during taxi-in: the time between a flight touching down at its destination airport (wheels-on) and arriving at the gate; (3) prior to cancellation: the flight left the gate but was canceled at the origin airport; (4) during a diversion: the tarmac time experienced at an airport other than the destination airport; or (5) as a result of a multiple gate departure: the flight left the gate, then returned, and then left again; the tarmac time is the time before the return to the gate. Figure 12 shows trends in tarmac delays greater than 3 hours from calendar years 2000 through 2009. Table 3 shows the breakdown of tarmac delays by month and phase of flight since October 2008, when these more detailed data were first collected. To corroborate FAA and stakeholder views on the relationship between the recent reductions in flights and declines in delays, we performed a correlation analysis between the number of total arrivals and delayed arrivals. Our correlation analysis yielded a correlation coefficient that captures only the relationship between the number of arrivals and arrival delays at the 34 OEP airports (excluding Honolulu). Correlation coefficients take a value between negative 1 and 1. A correlation coefficient of zero would indicate that there was no relationship between the variables. A correlation coefficient close to 1 would indicate a strong positive relationship, while a correlation coefficient close to negative 1 would indicate a strong negative relationship. Our results showed a correlation coefficient of 0.72, indicating a strong positive relationship between arrivals and arrival delays. Although this result likely indicates that arrival delays will rise with increases in arrivals, for several reasons, it should not be viewed as highly predictive of the exact pattern with which delays will track arrivals. Many other factors that we do not account for also affect delays at a given airport or set of airports and thus affect the measured relationship between the number of flights and delays. For example, how close the number of flights is to the airport's capacity—i.e., the number of flights an airport can handle in a given period of time—is a key factor underlying the relationship between the number of flights and delays. In particular, the relationship between the number of flights and delays is likely not stable in the sense that as the number of flights grows and becomes closer to the capacity of an airport, the influence of additional flights on delays becomes greater. For example, in addition to looking at the relationship for all airports, we also performed a correlation for all airports that were among the 10 airports with the highest percentage of delayed flights in any year since 2007. In total, there were 15 airports used for this most delayed airports analysis. Our analysis yielded a correlation coefficient of 0.79, indicating that the most delay-prone airports—which likely handle a number of flights closer to their capacity than others—experience a stronger relationship between the level of flights and delays than airports that have more available capacity.
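The computation behind these coefficients is standard. The sketch below, in Python, shows one way to produce a Pearson correlation coefficient of the kind reported above; the monthly totals are hypothetical placeholders, not the data underlying our analysis.

```python
# Minimal sketch of the correlation analysis described above: a Pearson
# correlation coefficient between monthly arrivals and delayed arrivals.
# The monthly totals below are hypothetical placeholders, not GAO data.
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = math.sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = math.sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

# Hypothetical monthly totals across a set of airports.
monthly_arrivals = [410_000, 395_000, 450_000, 460_000, 455_000, 430_000]
delayed_arrivals = [78_000, 70_000, 98_000, 105_000, 99_000, 85_000]

print(f"correlation coefficient: {pearson_r(monthly_arrivals, delayed_arrivals):.2f}")
# A value near 1 indicates a strong positive relationship; near 0, none;
# near -1, a strong negative relationship.
```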
Additionally, a host of factors—such as airport infrastructure (e.g., available airport gates, taxiways, and runways)—influence an airport's capacity at a given time and, therefore, how many flights an airport can handle. Capacity can be a changing value hour to hour or day to day, depending on such elements as weather, the mix of aircraft used at the airport, and air traffic procedures. Airport projects that provide greater capacity—such as a new runway, taxiway improvements, or additional gates—will enable more flights with fewer impacts on delays and therefore also affect the relationship between the number of flights and delays. Also, the level of delays at one airport or throughout the national airspace system can affect delays elsewhere. For example, FAA officials provided an analysis to us suggesting that as the number of flights, and therefore delays, rapidly grew at the John F. Kennedy (JFK) airport after 2007, other airports—that did not see a significant rise in the number of flights they handled—had measurably worse delays. Finally, how airlines use airport infrastructure can affect the relationship between the number of flights and delays. Notably, FAA officials told us that airlines scheduling large numbers of flights at the same time (e.g., airline peaking) at the busy airports is a key factor that affects the relationship between the number of flights and delays. That is, a given number of flights will likely result in more delays if there are strong peaks in the number of flights scheduled that tax the airport's capacity at certain times of the day rather than a more evenly spaced schedule of flights across the entire day.

Appendix IV: Airline-Reported Sources of Delays for Airports with the Highest Percentage of Flight Delays, 2009

DOT collects delay data in one of five causal categories: national aviation system (i.e., a broad set of circumstances affecting airline flights, such as nonextreme weather that slows down the system, but does not prevent flying), late-arriving aircraft (i.e., a previous flight using the same aircraft arrived late, causing the subsequent flight to depart late), airline (i.e., any delay that was within the control of the airlines, such as aircraft cleaning, baggage loading, crew issues, or maintenance), extreme weather (i.e., serious weather conditions that prevent the operation of a flight, such as tornadoes, snowstorms, or hurricanes), and security (i.e., evacuation of an airport, reboarding because of a security breach, and long lines at the passenger screening areas). Security delays do not appear on this graphic because they make up less than 1 percent of the delays at these airports. DOT collects cancellation causal data in one of four categories: national aviation system (i.e., a broad set of circumstances affecting airline flights, such as nonextreme weather that slows down the system, but does not prevent flying), airline (i.e., any delay that was within the control of the airlines, such as aircraft cleaning, baggage loading, crew issues, or maintenance), extreme weather (i.e., serious weather conditions that prevent the operation of a flight, such as tornadoes, snowstorms, or hurricanes), and security (i.e., evacuation of an airport, reboarding because of a security breach, and long lines at the passenger screening areas). Security delays do not appear on this graphic because they make up less than 1 percent of the delays at these airports.
In 2008, FAA and its federally funded research and development center, the MITRE Corporation’s Center for Advanced Aviation System Development, undertook an analysis to set limits on scheduled operations (often called slots) for Newark and JFK airports in the New York area in order to address congestion and delay at these airports. Because the level of operations and associated delays had increased during 2006 and 2007 at JFK, and airlines were indicating further increases in planned operations for the summer of 2008, FAA determined that schedule limits needed to be applied to that airport. While LaGuardia already had a schedule cap in place, Newark airport did not, and FAA decided to also set a cap for Newark so that a limit on operations at JFK did not lead to increased operations and delays at Newark. From a performance perspective, the goal in setting the level of caps at these airports was to reduce average delays at JFK by about 15 percent compared with their 2007 level, and to keep delays at Newark from worsening over their 2007 level. To determine how schedule limitations would be applied, FAA and MITRE used a model that estimated the level of delay associated with various levels of operations at both JFK and Newark airports. The first key model input is a level of demand on a particular busy day in August 2007. The source of that data is airlines’ scheduled departure and arrival operations at the two airports for that day according to the Official Airline Guide (OAG). In addition to scheduled operations, each day the airports also service nonscheduled operations (i.e., operations not in the OAG). To properly capture the total demand levels at these airports, nonscheduled operations are added as part of the demand input to the model. Thus the “demand” input is a profile of all scheduled and nonscheduled operations across that day. The second key model input is airport capacity—the number of operations an airport can handle in any given time period. The level of airport capacity is not a constant; it varies on an ongoing basis with runway configuration, weather, and other factors. For the analysis, airport capacities for each hour across all weekdays over many months were determined. As an input, the model used what is called adjusted capacities. Adjusted capacities are based upon an airport’s called rates— the projected level of operations the airport could handle based on conditions at the airport at that time, and actual throughput—the number of aircraft that landed and departed. With few exceptions, the adjusted capacities in the model were set at the maximum of actual throughput or called rate for any specific hour. For each of the airports, multiple iterations of the demand profile were run against the adjusted capacities, and the model provided “predicted delays.” These predicted delays were compared with actual delays that had occurred at those airports across varied combinations of operations and capacity. FAA and MITRE found that the model’s predicted delays followed patterns that were in line with the patterns of actual delays. That is, the manner in which the predicted level of delay responded to changes in operations and/or capacity in the model paralleled the patterns of actual delay response to those factors. These parallels helped to validate the model’s structure. The results of the model were used in part to determine the limits on scheduled operations by evaluating the amount of delay that would be associated with varying levels of operations at each airport. 
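To illustrate the general mechanics of such a model (not FAA's or MITRE's actual implementation), the following sketch runs a hypothetical hourly demand profile against adjusted hourly capacities and treats operations in excess of capacity as a queue that carries into later hours. All names and numbers are illustrative assumptions.

```python
# A minimal, hypothetical sketch of a demand-versus-capacity delay model.
# Operations that exceed an hour's adjusted capacity queue into the next hour;
# total queued operation-hours serve as a rough proxy for predicted delay.
# This illustrates the approach described above, not FAA's or MITRE's model.

def predicted_delay(demand, capacity):
    """Total queued operation-hours when hourly demand is served at hourly capacity."""
    queue = 0
    queued_op_hours = 0
    for d, c in zip(demand, capacity):
        queue = max(0, queue + d - c)  # serve up to capacity; carry the excess
        queued_op_hours += queue       # each carried operation waits another hour
    return queued_op_hours

# Hypothetical profile of scheduled plus nonscheduled operations, and the
# airport's adjusted capacity, for part of a day (operations per hour).
demand = [70, 88, 95, 92, 84, 76]
adjusted_capacity = [81, 81, 81, 81, 81, 81]

# Evaluate sequentially lower hourly scheduling caps to see the delay each
# implies. (A real cap would shift flights to other hours rather than simply
# trimming them, as assumed here for simplicity.)
for cap in (95, 90, 85, 81):
    capped_demand = [min(d, cap) for d in demand]
    print(f"cap {cap}/hour -> {predicted_delay(capped_demand, adjusted_capacity)} queued operation-hours")
```

Evaluating progressively lower caps in this way, and comparing the predicted delay against a target level, mirrors the exercise described next.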
In particular, MITRE staff provided model results that indicated, for sequentially lower levels of hourly operations, the level of delay that could be expected across the day at each airport. For both JFK and Newark airports, this exercise resulted in scheduling limitations set at 81 operations per hour, with some hourly exceptions, as this level of operations was predicted to result in the target level of delay for each of the airports. While LaGuardia already had a schedule cap in place, FAA and MITRE used this same approach to model estimated levels of delay at various levels of operations. More recently, this analysis was used in issuing a new order decreasing the limit of scheduled hourly operations at LaGuardia from 75 to 71. Existing flights were not affected, but slots that are returned or withdrawn by FAA will be limited to the 71 per hour limit. Figures 15 through 17 illustrate how the schedule limits affected hourly operations at the three New York area airports, using a busy day in August—typically a very busy month—to be representative of the summer schedules. More specifically, the figures show how airlines scheduled operations throughout the day in 2007, the schedule they planned to submit for 2008 without caps—or the “wish list”—and the actual operations scheduled in 2008 and 2009 with the caps in place. The 2008 wish list data are based on the proposed schedules submitted by the airlines during the negotiations and discussions held to determine the limits on scheduled operations at the airports. The JFK and Newark figures show that peak period operations have smoothed and fallen since the caps were put in place. This change in peak hour operations has enabled the airports to provide more throughput with less impact on delay than a more peaked profile of operations would have provided. Other factors may also have had an impact on hourly operations at the three airports (i.e., the economic downturn has led airlines to reduce their scheduled operations below the scheduling limits during some hours at these airports). For Newark, the decline in peak hour operations is most significant when comparing the actual 2008 schedule with the airlines’ 2008 wish list, especially during the busy afternoon and evening period. Because LaGuardia has capped operations for many years, and the orders have roughly maintained the same caps, the airport has experienced significantly less variation in hourly operations over the last 3 years. In addition, the carriers never submitted a 2008 wish list because the airport was already capped. Our report examined DOT and FAA actions to reduce delays over the next 2 to 3 years. Table 4 describes how each action could help reduce delays and demonstrates that most of the ongoing and planned actions are capacity improvements designed to address flight delays by enhancing and expanding existing capacity. As table 5 demonstrates, these actions generally are being implemented at the most delayed airports in the country. For example, DOT convened a special aviation rulemaking committee (New York ARC) in the fall of 2007 specifically to address delays and other airline service issues in the New York metropolitan area, and one of the committee’s working groups assessed 77 operational improvement initiatives for the New York area. In addition to being implemented at the most delayed airports, many of these actions are also available at other OEP airports across the national airspace system. 
These actions are available or planned at various locations, but we did not assess the extent to which they are being used at a given location. For example, we did not assess the extent to which RNAV and RNP procedures are in use at these airports. In addition to the contact named above, Paul Aussendorf (Assistant Director), Amy Abramowitz, Lauren Calhoun, Colin Fallon, Heather Krause, John Mingus, Sara Ann Moessbauer, Josh Ormond, Melissa Swearingen, and Maria Wallace made key contributions to this report.
Flight delays have beset the U.S. national airspace system. In 2007, more than one-quarter of all flights either arrived late or were canceled across the system, according to the Department of Transportation (DOT). DOT and its operating agency, the Federal Aviation Administration (FAA), are making substantial investments in transforming to a new air traffic control system--the Next Generation Air Transportation System (NextGen)--a system that is expected to reduce delays over the next decade. As requested, this report examines (1) the extent to which flight delays in the U.S. national airspace system have changed since 2007 and the factors contributing to these changes, and (2) the extent to which actions by DOT and FAA are expected to reduce delays in the next 2 to 3 years. We analyzed DOT and FAA data for FAA's Operational Evolution Partnership (OEP) airports because they are in major metropolitan areas, serving over 70 percent of passengers in the system. We reviewed agency documents and interviewed DOT, FAA, airport, and airline officials and aviation industry experts. Flight delays have declined since 2007, largely because fewer flights have been scheduled by airlines as a result of the economic downturn, but some airports still experience substantial delays and contribute substantial delays to the rest of the system. The percentage of flights that were delayed--that is, arrived at least 15 minutes after their scheduled time or were canceled or diverted--decreased 6 percentage points from 2007 to 2009, according to DOT data. Even with this decrease in delays, during 2009, at least one in four U.S. passenger flights arrived late at 5 airports--Newark Liberty International (Newark), LaGuardia, John F. Kennedy (JFK), Atlanta Hartsfield International (Atlanta), and San Francisco International--and these late arrivals had an average delay time of almost an hour or more. In addition to these airports having the highest percentage of flights with delayed arrivals, these 5 airports, along with Chicago O'Hare International and Philadelphia International (Philadelphia), were also the source of most of the departure delays within FAA's air traffic control system. FAA measures delays within the air traffic control system to assess its performance because an inefficient air traffic control system contributes to higher levels of delayed flights. An FAA air traffic control tower or other facility may delay flights departing from or destined to an airport because of inclement weather or heavy traffic volume at that airport. In 2009, about 80 percent of the departure delays occurring at the 34 OEP airports in GAO's analysis were the result of conditions affecting air traffic at just these 7 airports. DOT's and FAA's actions--including near-term elements of NextGen and other air traffic management improvements--could help reduce delays over the next 2 to 3 years and are generally being implemented at the airports that contribute the most delays to the system. However, the extent to which these actions will reduce delays at individual airports or contribute to the agency's overall target is unclear. FAA has an 88 percent on-time arrival performance target for the national airspace system to measure how its actions help to improve systemwide on-time performance. This target, however, masks the wide variation in airport performance. For example, in fiscal year 2009, Newark had an on-time arrival rate of 72 percent, while St. Louis International exceeded the target with 95 percent.
FAA has not established airport-specific performance targets, making it difficult to assess whether FAA's actions will lead to the desired on-time performance at these airports or whether further actions are required to improve performance, especially at airports affecting delays systemwide. Also, FAA's modeling indicates that even if all ongoing and planned NextGen and other improvements are implemented, a few airports, such as Atlanta, Washington Dulles International, and Philadelphia, may not be able to meet the projected increases in demand, and if market forces do not dampen that demand, additional actions may be required at these airports. However, without airport-specific targets, FAA cannot determine what additional actions might be required to achieve a targeted level of performance at these airports.
Tens of thousands of industrial facilities directly discharge wastewater into the waters of the United States and are subject to permit limits on their discharges, which for certain industries are determined by effluent guidelines set by EPA under the Clean Water Act. For certain industries, EPA issues a similar type of regulation—pretreatment standards—applicable to facilities that are indirect dischargers; that is, their effluent goes to wastewater treatment plants, which then discharge the collected and treated wastewater into a water body. To establish pollutant control limits for different pollutants in these guidelines or standards, EPA groups industrial facilities into categories that have similar products or services. To date, EPA has issued effluent guidelines or pretreatment standards for 58 industrial categories. EPA has issued effluent guidelines for 57 of the 58 categories and pretreatment standards for 35 of the 58 categories. Table 1 lists industrial categories that are regulated by effluent guidelines and pretreatment standards. According to EPA, there are approximately 35,000 to 45,000 direct dischargers covered by effluent guidelines and about 10,000 facilities that discharge indirectly to wastewater treatment plants. Before an industrial facility discharges pollutants, it must receive a permit that is to, at a minimum, incorporate any relevant pollutant limits from EPA's effluent guidelines. Where needed to protect water quality as determined by standards set by individual states, NPDES permits may include limits more stringent than the limits in the guidelines. NPDES permits for direct dischargers are issued by 1 of the 46 states authorized by EPA to issue them and by EPA elsewhere. Unlike direct dischargers, indirect dischargers, which do not discharge to surface waters, do not require an NPDES permit. Instead, an indirect discharger must meet EPA's national pretreatment standards and may have to meet additional pretreatment conditions imposed by its local wastewater treatment plant. Under the national pretreatment standards and conditions, an indirect discharger is required to remove pollutants that may harm wastewater treatment plant operations or workers or, after treatment and discharge, cause violations of the wastewater treatment plant's permit. Figure 1 illustrates both types of facilities subject to regulation. To get an NPDES permit, industrial facilities' owners—like any source discharging pollutants as a point source—must first submit an application that, among other things, provides information on their proposed discharges. Water quality officials in authorized states and EPA regional offices responsible for the NPDES program in the four nonauthorized states review these applications and determine the appropriate limits for the permits. Those limits may be technology-based effluent limits, water quality-based effluent limits, or a combination of both. Technology-based limits must stem from either effluent limitation guidelines, when applicable, or from the permit writer's best professional judgment when no applicable effluent limitation guidelines are available. Using best professional judgment, permit writers are to develop technology-based permit conditions on a case-by-case basis, considering all reasonably available and relevant information, as well as factors similar to those EPA uses in developing guidelines for national effluent limitations.
A permit writer should also set water quality-based limits more stringent than technology-based limits if necessary to control pollutants that could cause or contribute to violation of a state's water quality standards. To support each permit, permit writers are supposed to develop a fact sheet, or similar documentation, briefly summarizing the key facts and significant factual, legal, methodological, and policy questions considered. The fact sheet and supporting documentation also serve to explain to the facility, the public, and other interested parties the rationale and assumptions used in deriving the limitations in the permit. Facilities with NPDES permits are required to monitor their discharges for the pollutants listed in their permits and to provide monitoring reports with their results to their permitting authority (the relevant state, tribal, or territorial agency authorized to issue NPDES permits or, in nonauthorized locations, EPA). For facilities designated by EPA regional administrators and the permitting authorities as major facilities, the permitting authorities are in turn required to transfer the monitoring report data to EPA headquarters. These reports, known as discharge monitoring reports, are transmitted electronically and stored in an electronic database or reported in documents and manually entered into the electronic database for use by EPA in reviewing permit compliance. Permitting authorities are not required to report the discharge monitoring results from all remaining facilities, known as minor facilities, to EPA but may do so. According to EPA, there are about 6,700 major and 40,500 minor facilities covered by NPDES permits. EPA and the states are making a transition from one national database, known as the Permit Compliance System, to another known as the Integrated Compliance Information System: NPDES. The states are divided in their use of the two databases. Consequently, two databases contain discharge-monitoring reports. In our report, however, we refer to them collectively as "the database." Facilities may also be required to report data to EPA's Toxics Release Inventory on their estimated wastewater discharges. This inventory contains annual estimates of facilities' discharges of more than 650 toxic chemicals to the environment. One of the inventory's primary purposes is to inform communities about toxic chemical releases to the environment, showing data from a wide range of mining, utility, manufacturing, and other industries subject to the reporting requirements. As such, although the inventory is unrelated to the NPDES program, the Toxics Release Inventory contains estimated discharges of toxic pollutants for many NPDES-permitted facilities. Not all industrial categories covered by effluent guidelines—the oil and gas industrial category, for example—are necessarily required to report to the inventory. Under the Clean Water Act, EPA must establish effluent guidelines for three categories of pollutants—conventional, toxic, and nonconventional pollutants—and several levels of treatment technology. As defined in EPA's regulations, conventional pollutants include biological oxygen demand, total suspended solids, fecal coliform bacteria, oil and grease, and pH. The Clean Water Act designates toxic pollutants as those chemicals listed in a key congressional committee report, which contains 65 entries, including arsenic, carbon tetrachloride, and mercury, as well as groups of pollutants, such as halomethanes.
Nonconventional pollutants are any pollutants not designated as a conventional or toxic pollutant; for example, EPA has developed limitations for such nonconventional pollutants as chemical oxygen demand, carbon, and the nutrients nitrogen and phosphorus. The act authorizes EPA to establish effluent limits for these three pollutant categories according to several standards; the standards generally reflect increasing levels of treatment technologies. A treatment technology is any process or mechanism that helps remove pollutants from wastewater and can include filters or other separators, biological or bacteria-based removal, and chemical neutralization. Legislative history of the Clean Water Act describes the expectation of attaining higher levels of treatment through research and development of new production processes, modifications, replacement of obsolete plants and processes, and other improvements in technology, taking into account the cost of treatment. Under the act, the effluent limits do not specify a particular technology to be used but instead set a performance level based on one or more particular existing treatment technologies. Individual facilities then have to meet the performance level set but can choose which technology they use to meet it. Under the act, EPA was to issue initial guidelines for existing facilities on the basis of the "best practicable control technology currently available" for conventional, toxic, and nonconventional pollutants—guidelines to be achieved by 1977—followed by guidelines set on the basis of "best available technology economically achievable" for toxic and nonconventional pollutants and "best conventional pollutant control technology" for conventional pollutants. The act also called for guidelines known as "new source performance standards," which would apply to new facilities starting operations after such standards were proposed. When permitting authorities develop a permit, they apply the standards most appropriate to a given facility: For example, a new facility would receive a permit with limits reflecting the new source performance standards. Existing facilities would generally receive permits with limits reflecting the best conventional technology and best available technology, but where those standards have not been issued, permit limits would reflect the best practicable control technology. Table 2 shows the different levels of treatment established in the act and the category of pollutant to which they apply. The Clean Water Act requires EPA to annually review all existing effluent guidelines and revise them if appropriate, and also to review existing effluent limitations at least every 5 years and revise them if appropriate. The Water Quality Act of 1987 added two related requirements to EPA's reviews. First, EPA is to identify, every 2 years, potential candidates for new effluent guidelines, namely, industries that are discharging significant, or nontrivial, amounts of toxic or nonconventional pollutants that are not currently subject to effluent guidelines. Second, every 2 years beginning in 1988, EPA is required to publish a plan establishing a schedule for the annual review and revision of the effluent guidelines it has previously promulgated. In response to these two requirements, EPA published its first effluent guidelines program plan in 1990, which contained schedules for developing new and revised effluent guidelines for several industrial categories.
From the start of the effluent guidelines program in the early 1970s, EPA has faced considerable litigation, with industry challenging most of the industry-specific effluent guidelines. As the agency implemented the program, EPA also faced challenges from environmental groups over its failure to issue guidelines and the process EPA used to screen and review industrial categories. For example, the Natural Resources Defense Council, an environmental organization, brought two suits, each seeking to compel EPA to meet its duties to promulgate effluent limitations for listed toxic pollutants, among other actions. As a result, EPA operated under two key consent decrees establishing court-approved schedules for it to develop and issue effluent guidelines regulations. In addition, under one of the consent decrees, EPA established a task force that operated from 1992 through 2000 and advised the agency on various aspects of the effluent guidelines program. In particular, the task force issued several reports advising EPA on changes to its screening and review process for the effluent guidelines program and recommended that EPA hold a workshop to discuss improvements to the process. In 2002, after considering the recommendations made by both the task force and the workshop, EPA developed an approach to guide its post-consent decree screening and review, issued in a document called A Strategy for National Clean Water Industrial Regulations. Under this draft strategy, EPA was to evaluate readily available data and stakeholder input to create an initial list of categories warranting further examination for potential effluent guidelines. The strategy identified the following four key factors for EPA to consider in deciding whether to revise existing effluent guidelines or to develop new ones: the extent to which pollutants remaining in an industrial category's discharge pose a substantial risk to human health or the environment; the availability of a treatment technology, process change, or pollution prevention alternative that can effectively reduce the pollutants and risk; the cost, performance, and affordability of the technology, process change, or pollution prevention measures relative to their benefits; and the extent to which existing effluent guidelines could be revised, for example, to eliminate inefficiencies or impediments to technological innovation or to promote innovative approaches. The draft strategy also indicated that EPA would apply nearly identical factors to help determine whether it should issue effluent guidelines for industrial categories for which it had not yet done so. The document noted that EPA intended to revise and issue the strategy in early 2003, but EPA has chosen not to finalize it. EPA officials stated that the agency made this choice because its implementation of the process was likely to evolve over time. Since EPA issued its draft strategy, the agency has faced litigation challenging the use of technology in its screening process. In 2004, EPA was sued by Our Children's Earth, a nonprofit environmental organization, which alleged that EPA failed to consider technology-based factors during its annual review of industrial categories. On appeal, the Ninth Circuit Court decided in 2008 that the statute did not establish a mandatory duty for EPA to consider such factors.
The court found that the statute's use of the phrase "if appropriate" indicated that decisions on whether to revise guidelines are discretionary but are also constrained by the statute's mandate as to what effluent guidelines regulations are to accomplish. Further, the court stated that the overall structure of the Clean Water Act strongly suggests that any review to determine whether revision of effluent guidelines is appropriate should contemplate technology-based factors. EPA uses a two-phase process to review industrial categories potentially in need of new or revised effluent guidelines; from 2003 through 2010, the agency identified few such categories. Since 2003, EPA has annually screened all industrial categories subject to effluent guidelines, as well as other industrial categories that could be subject to new guidelines; it has identified 12 categories for further review and selected 3 categories to update or to receive new effluent guidelines. EPA's screening phase starts with a review of industrial categories already subject to effluent guidelines—as well as industrial categories that are not—to identify and rank those whose pollutant discharges pose a substantial hazard to human health and the environment. EPA analyzes and ranks industrial categories using pollutant data from facilities in similar industrial classifications. Before it ranks industrial categories in this screening phase, EPA excludes from consideration any industrial categories where guidelines are already undergoing revision or have been revised or developed in the previous 7 years. For example, EPA announced in its 2010 final effluent guideline program plan that it excluded the steam electric power-generating category from the screening phase because the agency had already begun revising effluent guidelines for this industry. Also in 2010, EPA excluded the concentrated aquatic animal production category (e.g., fish farming) from screening because the agency issued effluent guidelines in 2004. In ranking industrial categories during the screening phase, EPA considers the extent to which discharged pollutants threaten human health and the environment—the first factor identified in EPA's 2002 draft strategy. EPA compiles information from two EPA sources on the facilities within these industrial categories that discharge wastewater, the pollutants they discharge, and the amount of their discharge: (1) the discharge monitoring report database and (2) the Toxics Release Inventory. Using these data, EPA estimates the relative toxicity of pollutant discharges from screened industrial categories, converts these estimates into a single "score" of relative toxicity for each industrial category, and uses this score to rank the industrial categories according to the reported hazard they pose. To determine the relative toxicity of a given pollutant, EPA multiplies the amount (in pounds) of that pollutant by a pollutant-specific weighting factor to derive a "toxic weighted pound equivalent." EPA's ranking of one industrial category relative to other categories can vary depending on the amount of the pollutants it discharges or the toxicity of those pollutants. For example, an industrial category, such as pesticide chemicals, may discharge fewer pounds of pollutants than another category, such as canned and preserved seafood processing, but have a higher hazard ranking because of the relative toxicity of the pollutant chemicals it discharges.
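A minimal sketch of this ranking arithmetic appears below. The pollutants, weighting factors, and discharge amounts are hypothetical placeholders rather than EPA's values, and the 95-percent cumulative cutoff the sketch applies is described in the discussion that follows.

```python
# Hypothetical sketch of the hazard-ranking arithmetic described above: pounds
# of each pollutant discharged are multiplied by a pollutant-specific toxic
# weighting factor to yield toxic-weighted pound equivalents (TWPE), summed
# into one score per industrial category, and ranked. All names and numbers
# are placeholders, not EPA's values.

TOXIC_WEIGHTING_FACTORS = {  # TWPE per pound discharged (hypothetical)
    "pollutant_a": 0.005,
    "pollutant_b": 110.0,
    "pollutant_c": 2.3,
}

# Category -> {pollutant: pounds discharged per year} (hypothetical)
discharges = {
    "category_x": {"pollutant_a": 900_000, "pollutant_b": 40},
    "category_y": {"pollutant_a": 2_500_000, "pollutant_c": 1_200},
    "category_z": {"pollutant_b": 300},
}

def category_score(pollutant_pounds):
    """Sum pounds times weighting factor across a category's pollutants."""
    return sum(pounds * TOXIC_WEIGHTING_FACTORS[p]
               for p, pounds in pollutant_pounds.items())

scores = {cat: category_score(p) for cat, p in discharges.items()}
ranked = sorted(scores.items(), key=lambda item: item[1], reverse=True)

total_hazard = sum(scores.values())
cumulative = 0.0
for cat, score in ranked:
    # A category is high priority if it falls before the 95-percent mark.
    high_priority = cumulative < 0.95 * total_hazard
    cumulative += score
    print(f"{cat}: {score:,.0f} TWPE, high priority: {high_priority}")
```

Under this kind of scheme, a category with a small discharge of a heavily weighted pollutant can outrank a category discharging far more pounds, which is the pesticide chemicals point made above.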
As explained above, an industrial direct discharger is required to have an NPDES permit regardless of whether there are effluent guidelines for the industry. NPDES permits require monitoring for specific pollutants to determine compliance with permit limits. Some industries may also be subject to requirements under another EPA program to report toxic releases to the Toxics Release Inventory. These requirements are independent of whether an industry is regulated by effluent guidelines. After ranking industrial categories, EPA identifies those responsible for the top 95 percent of the total reported hazard, which is the total of all industrial categories' hazard scores. EPA assigns these industrial categories a high priority for further review in the second phase of its review process. As the relative amounts of their discharges change, the number of industrial categories making up this 95 percent can vary each year with each screening EPA performs. From 2003 through 2009, for example, 10 to 13 industrial categories composed the top 95 percent of reported hazard, whereas in 2010, 21 categories made up the top 95 percent. Figure 2 shows the number of industrial categories that EPA considered for possible further review on the basis of its hazard screening. After it identifies the industrial categories contributing to 95 percent of reported hazard, EPA takes additional steps to exclude industrial categories before beginning the further review phase. Specifically, the agency may exclude industrial categories on the basis of three criteria:

Data used in the ranking process contained errors. After completing its ranking, EPA verifies the pollutant discharge data from the discharge monitoring reports and Toxics Release Inventory and corrects any errors. For example, according to EPA, the agency has found that facilities have reported the wrong unit of measurement in their discharge monitoring reports, or states have transferred data into the EPA database incorrectly. In such cases, a pollutant discharge may, for example, be reported at a concentration of 10 milligrams per liter but in fact be present at a concentration of 10 micrograms per liter—a thousand-fold lower discharge.

Very few facilities account for the relative toxicity of an industrial category. EPA typically does not consider for further review industries where only a few facilities account for the vast majority of pollutant discharges and the discharges are not representative of the category as a whole. In such cases, EPA states in its effluent guideline program plans that revising individual NPDES permits may be more effective than a nationwide regulation to address the discharge. For example, in 2004, EPA determined that one facility was responsible for the vast majority of discharges of dioxin associated with the inorganic chemicals industrial category. In its effluent guideline program plan for that year, EPA indicated that it would work through the facility's NPDES permit to reduce these discharges as appropriate.

Other factors. EPA considers other factors in addition to those described above to determine if an industrial category warrants further review. According to EPA, one such factor is inadequate data from which to make a clear determination. For example, in its 2010 screening phase, EPA excluded several industrial categories from the further review phase because it did not have conclusive data but said that it would "continue to review" the categories' discharges to determine if they were properly controlled.
These industries included pulp, paper, and paperboard; plastic molding and forming; and waste combustors. Figure 3 illustrates the exclusion process EPA applies in its initial screening phase. During the screening phase, EPA uses existing industry classifications as the basis for identifying industrial categories. EPA groups these industry classifications, which are identified by one of two standardized coding schemes, into industrial categories that it then considers for effluent guidelines. If EPA identifies an industrial category that does not have effluent guidelines but has discharges that present a potential hazard, it decides whether the category produces a product or performs a service similar to one subject to existing effluent guidelines. If so, EPA generally considers the former category to be a subcategory of the latter. Conversely, if the products or services differ from categories subject to existing guidelines, EPA considers the category as a potential new category. In either case, EPA may decide that the industrial category warrants further review and, possibly, new effluent guidelines. Throughout the screening phase, EPA also obtains stakeholder and public input, which may identify industrial categories warranting new or revised effluent guidelines that were not identified by their hazard ranking. Stakeholder and public input comes from EPA’s solicitation of comments on its biennial preliminary and final effluent guidelines program plans. For example, in 2004 stakeholders raised concerns about discharges from dental facilities of mercury used in dental fillings; in response, EPA later identified the dental category for further review. On completing the screening phase, the agency lists in its preliminary or final effluent guidelines program plans the industrial categories it has identified for further review. Alternatively, EPA may decide on the basis of its screening criteria that no industrial categories warrant further review. In its further review phase, EPA conducts detailed studies of any industrial categories identified in its screening phase, using the four factors listed in its November 2002 draft strategy to determine whether the categories need new or revised effluent guidelines. Since issuing its draft strategy, EPA has selected 12 industrial categories to move beyond the screening phase to the further review phase. Seven of the categories—for example, the pulp, paper, and paperboard category and the petroleum refining category—were identified for further review on the basis of the risk or toxicity of the pollutants they discharge, and 5 were identified for review on the basis of stakeholder concerns. If the categories are already subject to effluent guidelines that EPA set, the agency studies the need to revise effluent limits in the existing guidelines; if the categories are not subject to existing guidelines, EPA studies the need to develop effluent limits and apply them for the first time. Of the 12 categories selected for further review, 8 were already subject to existing effluent guidelines, and 4 were not. During its further review phase, according to EPA documents, EPA gathers and analyzes more information on the factors identified in its draft strategy. During this phase, EPA typically analyzes information on the hazards posed by discharged pollutants, which corresponds to the first factor in its draft strategy. 
The data on hazards that EPA obtains and analyzes include (1) characteristics of wastewater and of facilities; (2) the pollutants responsible for the industrial category's relative toxicity ranking; (3) geographic distribution of facilities in the industry; (4) trends in discharges within the industry; and (5) any relevant economic factors related to the industry. During the further review phase, EPA also begins to gather and analyze information on the availability of pollution prevention and treatment technology for the industrial categories reviewed, which corresponds to the second factor identified in its 2002 draft strategy. Through this analysis, EPA identifies current technologies that industry is using to reduce pollutants, potential new technologies that could be used to reduce pollutants, or both. Table 3 summarizes EPA's consideration of treatment technologies for the 12 industrial categories that proceeded to the further review phase. For example, EPA studied one technology used by the ore mining and dressing industrial category and several current technologies for the coalbed methane category. During its further review phase, EPA also obtains and analyzes information related to the cost, affordability, and performance of technologies, the third factor in its strategy. To do so, EPA examines the cost and performance of applicable technologies, changes in production processes, or prevention alternatives that may reduce pollutants in the industrial category's discharge. As part of its cost analysis, the agency considers the affordability or economic achievability of any identified technologies, production processes, or prevention alternatives. To assess the performance of technologies, EPA considers the results of the treatment technologies used in tests or actual operations—information the agency obtains from published research papers and internal and external sources, including site visits and surveys of industrial facilities. In its further review of the steam electric power-generating industry, for example, EPA sampled wastewater directly at power plants, surveyed plant operators about which technologies they were using to minimize pollutant discharges and at what cost, and sought information on other potential treatment technologies. At the conclusion of its further review of an industrial category, EPA decides whether it is feasible and appropriate to revise or develop effluent guidelines for the category, a decision that includes gathering information on whether an effluent guideline is the most efficient and effective approach to manage the discharges, the fourth factor in EPA's draft strategy. As shown in table 3, for example, EPA decided that the drinking water treatment industrial category did not require effluent guidelines but that the agency's study could act as a resource for state permit writers as they issue permits for drinking water facilities. Or, as also shown in table 3 for coalbed methane, EPA decided to develop guidelines that it plans to propose in 2013. Some of the information EPA can consider during this decision making, related to the fourth factor in its strategy, is the extent to which existing effluent guidelines could be revised to eliminate inefficiencies or impediments to technological innovation or to promote innovative approaches. Specifically, EPA considers whether another way exists—either regulatory or voluntary—to decrease pollutant discharges.
For example, after the further review of the dental facility category in 2008, EPA decided not to develop effluent guidelines but to instead work with the American Dental Association and state water agencies on a voluntary reduction program to reduce pollutant discharges from dental facilities. It later changed its decision because the voluntary effort was shown to be ineffective, and the agency plans to issue effluent guidelines in 2012. It takes EPA, on average, 3 to 4 years to complete the further review phase for an industrial category. As of July 2012, EPA had identified three industrial categories for which it had decided to revise effluent guidelines—steam electric power generating—or to develop new effluent guidelines—coalbed methane extraction and dental facilities. According to agency documents and officials, EPA has chosen to take no action on the other 9 of the 12 categories it has further reviewed since 2002. Limitations in the screening phase of EPA's review process may have caused the agency to overlook some industrial categories that warrant new or revised effluent guidelines and thus hinder the effectiveness of the effluent guidelines program in advancing the goals of the Clean Water Act. First, the data EPA uses in the screening phase have limitations that may cause the agency to omit industrial categories from further review or regulation. Second, EPA has chosen to focus its screening phase on the hazards associated with industrial categories, without considering the availability of treatment technologies or production changes that could reduce those hazards. The screening phase of the process may thus exclude some industrial categories for which treatment technologies or production changes may be available to serve as the basis for new or revised effluent guidelines. The two sources EPA relies on during its initial screening process—discharge monitoring reports and the Toxics Release Inventory—have limitations that may affect the agency's ability to accurately rank industrial categories for further review on the basis of the human health and environmental hazards associated with those categories. Data from industrial facilities' discharge monitoring reports have the benefit of being national in scope, according to EPA documents, but according to agency officials and some experts we spoke with, these data have several limitations that could lead the agency to underestimate the hazard caused by particular industries. Specifically:

The reports contain data only for those pollutants that facilities' permits require them to monitor. Under NPDES, states and EPA offices issue permits containing limits for pollutant discharges, but those permits may not include limits for all the pollutants that may be discharged, for example, if those pollutants are not included in the relevant effluent guidelines or need not be limited for the facility to meet state water quality standards. If a pollutant is not identified in a permit, and hence not reported on discharge monitoring reports, it would not be part of EPA's calculation of hazard and would not count toward the ranking of industrial categories.

The reports do not include data from all permitted facilities. Specifically, EPA does not require the states to report monitoring results from direct dischargers classified as minor. According to EPA, the agency in 2010 analyzed data for approximately 15,000 minor facilities, or about 37 percent of the 40,500 minor facilities covered by NPDES permits.
As a result, the pollutants discharged by the remaining 25,500 minor dischargers would not be counted as part of the relative toxicity rating and could contribute to undercounting of pollutants from those industrial categories. For example, most coal mining companies in Pennsylvania and West Virginia are considered minor dischargers whose pollutants would not count toward the ranking of that industrial category.

The reports include very limited data characterizing indirect discharges from industrial facilities to wastewater treatment plants, according to EPA documents. Thus, the data do not fully document pollutants that, if not removed by a wastewater treatment plant, are discharged. These data are not incorporated into EPA's calculations of hazard for each industrial category and thus result in underestimated hazards.

EPA documents and some experts we contacted also stated that data collected in the Toxics Release Inventory are useful to identify toxic discharges. Nevertheless, according to the agency and experts, these inventory data have limitations that may cause EPA to either overestimate or underestimate the relative toxicity of particular industrial categories. The limitations they identified include the following:

The data reported are sometimes estimates and not actual monitored data. In some cases, the use of an estimate may overreport actual pollutant discharges. For example, some industry experts said that to be conservative and avoid possible liability, some facilities engaging in processes that produce particularly toxic pollutants, such as dioxin, may report the discharge of a small amount on the basis of an EPA-prescribed method for estimating such discharges even if the pollutant had not been actually monitored.

Not all facilities are required to report to the inventory, which may lead to undercounting the discharges for the industrial categories of which the facilities are a part. Facilities with fewer than 10 employees are not required to report to the inventory, and neither are facilities that do not manufacture, import, process, or use more than a threshold amount of listed chemicals. For example, facilities that manufacture or process lead or dioxin do not need to report to the inventory unless the amount of chemical manufactured or processed reaches 10 pounds for lead or 0.1 grams for dioxin.

Despite the limitations of these data sources, EPA officials said that discharge monitoring reports and the Toxics Release Inventory are the best available data on a national level. Experts we interviewed also generally supported the continued use of these data sources despite their limitations. An EPA official responsible for the screening and review process said that EPA could not quantify the effect of the missing data on its ranking and setting of priorities for industries without time-consuming and expensive collection of data directly from industrial facilities. Still, agency officials agreed that the data limitations can lead to under- or overestimating the hazard of discharges from industrial categories, which could in turn affect the rankings of these categories and potentially result in different categories advancing for further review and potential regulation. EPA's primary focus during its screening phase is the relative hazard posed by industrial categories, without consideration of available treatment technologies that could be used as the basis for revised effluent guidelines to help reduce pollutant discharges.
Because EPA sets the cutoff point in its screening process as industrial categories contributing to 95 percent of total reported hazard, the agency does not consider for further review the categories contributing the remaining 5 percent of the total reported hazard. Although this percentage is low, the categories involved constitute the majority of all industrial categories with effluent guidelines. EPA does not conduct a further review for these and other industrial categories that it has excluded for other reasons, meaning that EPA does not examine them for the availability of more-effective treatment technologies. As previously noted, the Ninth Circuit Court held in 2008 that EPA does not have a mandatory duty to consider technology in its screening process but stated that the act strongly suggests that any review to determine whether revision of effluent guidelines is appropriate should contemplate technology-based factors. Regardless of whether EPA is required to do so, the agency is not considering technology for these industrial categories, and hence EPA cannot ensure that the facilities in these categories are using the best available treatment technology. EPA has begun to take actions to improve the hazard data it uses in its screening of industrial categories, but it is not fully using potential sources of information on treatment technologies in this screening. According to program officials, EPA has recognized that its screening phase has resulted in the same industries rising repeatedly to the top of its hazard rankings. Program officials said that they are considering changes to their screening approach to identify additional industrial categories for further review. The primary change, the officials told us, would be to rank categories according to toxicity every 2 years, rather than annually, and to supplement that ranking with a targeted analysis of additional sources of data. To develop such revisions, officials from EPA's effluent guidelines program engaged in an informal "brainstorming" exercise within the agency and identified several sources of data on new and emerging pollutants, sources that officials think could help target industrial categories for further review. EPA officials said they will propose revisions to the review process in the 2012 preliminary effluent guidelines program plan they expect to issue late in 2012. To mitigate the limitations with hazard data that EPA currently experiences, the agency has taken several steps to obtain new sources of information and to improve existing sources. Using additional sources of data is consistent with suggestions made to us by several academic and governmental experts we interviewed that other sources of hazard data may be useful to the agency, including additional monitoring data and data on the quality of water bodies receiving wastewater discharges. The new data sources would broaden the hazard data considered in the screening phase.
Among the sources EPA intends to pursue for future use are the following:

a 2009 EPA survey of sludge produced by wastewater treatment plants, to identify pollutants entering these plants that are not being treated by an industrial facility and might need regulation;

a review of action plans prepared under EPA's Office of Pollution Prevention and Toxic Substances for specific chemicals of emerging concern, to identify pollutants that are likely to be discharged to waters by industrial point sources;

a review of all EPA air pollution regulations issued within the last 10 to 15 years, to identify new treatment processes that could add to or change the pollutants in wastewater streams; and

a review of data and information available concerning industries that EPA is considering for a proposed expansion of required reporting for the Toxics Release Inventory.

EPA is also drafting a rule that would increase the information EPA receives electronically from discharge monitoring reports from NPDES permittees and permitting authorities. According to officials with the effluent guidelines program, increased electronic reporting would result in a more complete and accurate database and improve their access to the hazard data from facilities' discharge monitoring reports, thereby improving the screening of industrial categories. For example, according to EPA officials, data on minor facilities that are not currently reported into the discharge monitoring database used in the screening process would be reported under the electronic reporting rule as sent to the Office of Management and Budget for review. EPA recognizes the need to use information on treatment technologies in the screening phase to improve its process and has taken some initial steps to develop a database of such information, but it has not made full use of potential data sources. EPA started to gather information on treatment technology in 2011, contracting with consultants to obtain relevant literature for the database. In its comments on a draft of this report, the agency said that it will expand on this work in 2013 and 2014 once new fiscal year operating plans are in place. According to agency officials, a thorough analysis of the literature would give the program an updated technology database, which would help in identifying advances in technologies in use or with potential use in industrial categories, which, on the basis of these advances, may in turn warrant further review. They noted that in the 1980s and 1990s, the program used such information from an agency database but that the database had become outdated. In more than half of our interviews (10 of 17), experts told us that EPA should consider technology in its screening phase, and some of them suggested the following two approaches for obtaining this information:

Stakeholder outreach. Experts suggested that key stakeholders could provide information on technology earlier in the screening process. Currently, EPA solicits views and information from stakeholders during public comment periods following issuance of preliminary and final effluent guidelines plans. According to experts, EPA could obtain up-to-date information and data from stakeholders beyond these formal comment periods.
For example, EPA officials could (1) attend annual workshops and conferences hosted by industries and associations, such as engineering associations, or host their own expert panels to learn about new treatment technologies and (2) work with industrial research and development institutes to learn about efforts to reduce wastewater pollution through production changes or treatment technologies. NPDES permits and related documentation. Experts suggested that to find more information on treatment technologies available for specific pollutants, EPA could make better use of information in NPDES permit documentation. For example, when applying for NPDES permits, facilities must describe which pollutants they will be discharging and what treatment processes they will use to mitigate these discharges. Such information could help EPA officials administering the effluent guidelines program as they seek technologies to reduce pollutants in similar wastewater streams from similar industrial processes. Similarly, information from issued NPDES permits containing the more stringent water quality-based limits—which may lead a facility to apply more advanced treatment technologies—could suggest the potential for improved reductions. Further, fact sheets prepared by the permitting authority could furnish information on pollutants or technologies, helping EPA identify new technologies for use in effluent guidelines. According to EPA officials, these two sources of information have not been extensively used. They said that they would like to obtain more stakeholder input during screening and review, but they have limited time, resources, and ability to work with stakeholders. They noted that the effluent guidelines program does assign staff members responsibility for keeping up with technologies and developments in specific industrial categories. They also said that the NPDES information suggested by experts is not current or readily available for use by the program. Our analysis of NPDES information, however, showed that EPA has not taken steps to make the information available for use by the effluent guidelines program. For example, the standard list of treatment processes on the NPDES application form has not been updated since 1980, and EPA officials said it was out of date. Yet EPA has not updated this information or provided it to the effluent guidelines program for use in screening available technologies. EPA could have done so through a second rulemaking effort under way to improve NPDES data—in which EPA is updating NPDES application forms to make them more consistent with NPDES regulations and current program practices—but chose not to. Agency documents about this rulemaking described it as modifying or repealing reporting requirements that have become obsolete or outdated over the past 20 years and modifying permit documentation procedures to improve the quality and transparency of permit development. Nonetheless, effluent guidelines program officials said that they did not request potential NPDES permit updates relevant to their program because the scope of this rulemaking was too narrow. EPA’s Office of Wastewater Management, which is responsible for the rulemaking, confirmed that the scope of the proposed rule is intended to be narrow and will not call for states or permittees to provide new information.
Further, fact sheets or similar documentation that NPDES permit writers develop describing the basis for permit conditions are not stored in EPA’s electronic NPDES database and are therefore difficult to obtain and analyze, according to program officials. Instead, these NPDES documents are now maintained by the authorized states or EPA regions and are not readily accessible to the effluent guidelines program. Program officials said that electronic transmission of fact sheets or information about the basis for permit limits could be useful in identifying treatment technologies, although the scope of the electronic reporting rulemaking did not include such documents or information. Officials from the Office of Enforcement and Compliance Assurance, the office responsible for this rulemaking, told us that they discovered such wide variability among the states’ practices for gathering and managing NPDES information like fact sheets or the basis for permit limits that it would be difficult to call for electronic reporting of such information. EPA and the nation have made great strides in reducing the pollutants in wastewater discharged from point sources, such as industrial facilities, since the Clean Water Act was passed. EPA’s effluent guidelines program has been key in contributing to these results by establishing national uniform limits on pollutant discharges for various industrial categories. Progress within the program has slowed, however, and numerous effluent guidelines for particular industrial categories have not been revised for 2 or 3 decades, although the act calls for EPA to routinely review its effluent guidelines and update or add to them as appropriate. EPA’s approach for screening and further reviewing industrial categories, as currently implemented, has not identified many categories for the agency to consider for new or revised guidelines, and the screening process has identified many of the same industrial categories year after year. EPA’s approach focuses its resources on the most hazardous sources of pollution, but its reliance on incomplete hazard data during the screening phase has limited the results of the approach, as has EPA’s inability to thoroughly collect treatment technology data within its resource constraints. Under EPA’s current approach, most industrial categories have not received a detailed further review examining the availability of more-effective treatment technologies. According to some experts, consideration of treatment technologies is especially important for older effluent guidelines because changes in either the industrial categories or the treatment technologies are more likely to have occurred, making it possible that new, more advanced and cost-effective treatment technologies have become available. EPA has recently taken steps to obtain more information on treatment technologies for use in its screening phase—which could help make up for limitations in the hazard data it currently uses—but it has not taken steps to improve and gain access to technology information from the NPDES program. Further, EPA is reconsidering its approach to its screening and review process—initially documented in its draft strategy that was never finalized—but has not analyzed a range of possible sources of data to improve the program, including taking full advantage of the NPDES database, obtaining relevant stakeholder input, and reviewing older effluent guidelines for changes in either the industry or available treatment technologies. 
Without evaluating a range of new sources of relevant information, officials with the effluent guidelines program cannot ensure that the reconsidered approach can be implemented or that it optimizes the agency’s ability to consider technology in the screening process. Most important, without a more thorough and integrated screening approach that both improves hazard information and considers treatment technology data, EPA cannot be certain that the effluent guidelines program is reflecting advances in the treatment technologies used to reduce pollutants in wastewater. To improve the effectiveness of EPA’s efforts to update or develop new effluent guidelines, we recommend that the Administrator of EPA direct the effluent guidelines program to take the following three actions as it considers revisions to its screening and review process: (1) identify and evaluate additional sources of data on the hazards posed by the discharges from industrial categories; (2) identify and evaluate sources of information to improve the agency’s assessment in the screening phase of treatment technologies that are in use or available for use by industrial categories, including better use of NPDES data; and (3) modify the screening phase of its review process to include thorough consideration of information on the treatment technologies available to industrial categories. We provided a draft of this report to EPA for review and comment. In its written comments, which are reproduced in appendix IV, EPA said that our report adequately describes the agency’s effluent guidelines program and agreed in principle with two of the report’s recommendations but disagreed with the third recommendation. EPA also provided several technical comments, which we have incorporated as appropriate. Regarding our first recommendation, that EPA identify and evaluate additional sources of data on the hazards posed by industrial discharges and factor these into its annual reviews, EPA agreed that additional sources of such data are valuable. For this reason, EPA said, it began collecting new sources of hazard information in 2011, which the agency is using in its 2012 annual review. EPA also said that its preliminary 2012 effluent guideline program plan will solicit additional ideas for new hazard data sources from the public and industry stakeholders. We described EPA’s ongoing and planned efforts in our report, but because the agency has not yet published its preliminary 2012 effluent guideline program plan, we cannot determine the extent to which these efforts address the limitations we identified in its hazard data. Likewise, we are not able at this time to confirm that EPA will solicit additional sources of such data from stakeholders. We support EPA’s stated intent to identify and evaluate additional sources of hazard data and retain our recommendation, reinforcing the need for the agency to continue the efforts it has begun. Regarding our second recommendation, that EPA should identify and evaluate additional sources of information to improve its assessment of treatment technologies for industrial dischargers, EPA agreed that treatment technology information is useful to its program. The agency added that, given the importance of new treatment technology information, in 2011 it initiated efforts to gather more treatment information across all industry categories and will be expanding on this work in 2013 and 2014, once new fiscal year operating plans are in place.
We described EPA’s initiative to obtain and review technical literature on treatment technology in our report. We nevertheless believe that EPA could use other sources of information on treatment technology, including information associated with NPDES permits, as described in the report. We continue to believe that EPA should identify and evaluate these and other sources of information on treatment technologies, with the goal of ensuring that the agency’s effluent guidelines reflect the best available treatment technologies that are economically achievable. Regarding our third recommendation, that EPA modify the screening phase of its review process to include a thorough consideration of information on the treatment technologies available to industrial categories, EPA agreed that factoring treatment technology information into its reviews is valuable. The agency said, however, that the recommendation was not workable in the context of the agency’s current screening phase, noting that such an effort would be very resource intensive. Our concern is that EPA’s current screening phase, while targeted toward high-risk industries, does not ensure that effluent guidelines incorporate the best available treatment technologies that are economically achievable. We acknowledge that evaluating technologies for all existing industrial categories could be difficult for EPA to accomplish on an annual basis under its current approach. Our recommendation, however, did not specify that such an evaluation be done every year. For example, EPA could commit to a detailed study of the technologies in use and available to an industrial category on a periodic basis (e.g., every 5 to 10 years). As noted in our report, EPA’s 2002 draft strategy recognized the importance of evaluating treatment technologies in its screening phase, and the Court of Appeals for the Ninth Circuit held that, although EPA is not required to consider technology-based factors, the Clean Water Act strongly suggests that the agency should contemplate them in determining whether the revision of effluent guidelines is appropriate, a determination that begins with the screening phase. However, we are not aware of any detailed EPA evaluation of options for considering technology during the screening phase since the agency announced in 2003 that performing a meaningful screening-level analysis of the availability of treatment technologies as planned in the draft strategy was “much more difficult than anticipated.” We believe that, nearly a decade later, EPA should, within the constraints of available resources, evaluate current options to consider such technologies in its screening phase. Furthermore, given its efforts to develop and update its technology information, we believe that EPA should clarify how it plans to incorporate this information in its screening phase. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Administrator of EPA, the appropriate congressional committees, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or trimbled@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V.
To examine the process the Environmental Protection Agency (EPA) follows to screen and review industrial categories and the results of that process, we reviewed the Clean Water Act and relevant court decisions and agency documents, interviewed agency officials and experts, and documented the steps EPA has taken to screen particular industrial categories for possible new or revised effluent guidelines. Specifically, we reviewed relevant portions of the Clean Water Act to determine EPA’s responsibilities regarding the effluent guidelines and pretreatment programs. We analyzed several court decisions that ruled on challenges to EPA’s effluent guidelines program to determine what, if any, impact they had on the agency’s screening and review process. Further, we interviewed officials in EPA’s Engineering and Analysis Division to learn how the agency has used the process to screen and review industries. We focused our review on the results of the process EPA used from 2003 through 2010 in order to examine the approach it developed after the publication in November 2002 of its draft Strategy for National Clean Water Industrial Regulations: Effluent Limitation Guidelines, Pretreatment Standards, and New Source Performance Standards. By the end of our review, EPA had not yet published a preliminary or final effluent guideline program plan for the 2011-2012 planning cycle. To document the results of EPA’s process, we examined the agency’s screening decisions for all industrial categories from 2003 through 2010. Specifically, we examined EPA’s final effluent guideline plans and technical support documents for 2004, 2006, 2008, and 2010 and the agency’s website to identify screening decisions and subsequent studies associated with particular industries. We examined these studies to identify those industries that EPA subjected to further review, which included an examination of available treatment technologies. Specifically, we examined preliminary and detailed studies for the 12 industries that EPA advanced beyond the screening phase into further review and selected 7 of them for more robust analysis to document how EPA had applied the process to those industries. The 7 industries were ore mining and dressing, coalbed methane extraction, steam electric power generation, chlorine and chlorinated hydrocarbon, drinking water treatment, pharmaceuticals management, and dental facilities. That analysis included in-depth interviews with EPA staff assigned to those industrial categories. These 7 industrial categories met our selection criteria that they be active or recently active, that is, that EPA was reviewing them or had made a decision to proceed or not to proceed with a rulemaking as recently as 2011 or 2012. We also documented the current status of any regulatory actions or other steps that EPA had taken with the other 5 industries that received a further review. We also examined the planning documents for 2 industrial categories—airport deicing and construction and development—that did not go through EPA’s 2003-2010 screening and review process but were the subject of regulatory activity during our study period. 
To examine limitations to EPA’s screening and review process, if any, that could hinder the effectiveness of the effluent guidelines program in advancing the goals of the Clean Water Act, we pursued three separate methodologies: we (1) interviewed a cross section of experts on EPA’s effluent guidelines program, (2) surveyed the water quality permit directors of the 46 states that are authorized to issue permits for the National Pollutant Discharge Elimination System (NPDES), and (3) analyzed information about the hazard data sources EPA uses in its screening process. We identified individuals for possible “expert” interviews by compiling a list of approximately 50 people from a variety of sources relevant to the effluent guideline program, including referrals from EPA, the Association of Clean Water Agencies, and the National Association of Clean Water Agencies and by consulting other knowledgeable individuals, relevant academic literature, and litigation documents. We classified the individuals by their affiliation with a particular stakeholder category (academia, industry, nongovernmental organization, or state and local water quality agencies). We then excluded from consideration 13 individuals for whom we could not obtain contact information. We called or sent an electronic message to those individuals for whom we had contact information to ask if they were familiar with EPA’s current effluent guidelines screening and review process. We excluded from consideration those individuals who told us that they were not familiar with these processes, those who could not speak with us during the time frame of our review, and those who said they were not interested in contributing to our review. From our larger list of approximately 50 experts, we selected 22 individuals whom we determined to be experts on the basis of their familiarity with the program and their affiliation with a particular stakeholder category. We conducted 17 interviews with these 22 individuals from February 2012 to April 2012. Six of these interviews were with officials from industry, 4 from academia, 4 from state and local government, and 3 from nongovernmental organizations. In 4 cases, more than one expert participated in an interview. We prepared and asked a standard set of questions about the overall effectiveness of the effluent guidelines program and EPA’s use of hazard data, stakeholder input, and information on treatment technology in the screening process. We then reviewed their responses to identify common themes. The sample of experts is a nonprobability sample, and we therefore cannot generalize their opinions across all experts on the effluent guideline program. To assess the extent to which effluent guidelines might need to be revised, we conducted a web-based survey of state water quality directors, and we statistically analyzed the data. Appendix II presents a complete description of our survey and our data analysis. To obtain information about an industry that EPA had not analyzed in a further review phase, we selected one of the nine industries that states in our survey said presented a risk to human health or the environment, had treatment technology available to reduce that risk, and warranted revision.
We asked officials from the five states whose responses for the metal finishing industry met all three of the above criteria a standard set of questions about the risk posed by that industrial category, the technology available to mitigate this risk, and the likely effect of a revised effluent guideline. We further interviewed experts about their views on the adequacy of the hazard data that EPA uses in its screening process—discharge monitoring reports and the Toxics Release Inventory—and whether the experts had suggestions for alternative data sources. We also reviewed EPA’s own examinations of the benefits and limitations associated with the two data sources. EPA reports on these examinations of data quality in the technical support documents that accompany its effluent guideline program plans. In addition, we interviewed officials from EPA’s Office of Enforcement and Compliance Assurance to learn about the management of the databases that store discharge monitoring data. We also interviewed officials from the Engineering and Analysis Division in EPA’s Office of Water about possible effects that incomplete or inaccurate data could have on the screening process. We did not perform an independent assessment of data quality, although we concluded from the information we gathered that the data do have limitations that could affect EPA’s screening process. To examine the actions EPA has taken to address any limitations in its screening and review process, we interviewed effluent guideline program officials from the Engineering and Analysis Division about their plans to modify the biennial screening and review process. We also reviewed papers prepared for the division by a contractor, which describe new sources of data that the division could use to identify industrial categories potentially posing environmental hazards and warranting further review for possible new or revised effluent guidelines. In addition, we interviewed officials from the Engineering and Analysis Division, the Office of Wastewater Management, and the Office of Enforcement and Compliance Assurance about agency efforts to revise the NPDES permitting process and the database that contains NPDES permit information. We conducted these interviews to determine what steps EPA has taken or could take to use these activities to improve the hazard and treatment technology data available for the screening process. We conducted this performance audit from September 2011 to September 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. To assess the extent to which effluent guidelines might need to be revised, and to better understand the reasons for any such revisions, we conducted a web-based survey of state water quality officials, and we statistically analyzed patterns in the survey data. Our analysis identified numerous industries in numerous states for which state officials think that EPA should revise its guidelines. Furthermore, our analysis suggests that a few key factors—particularly, the significance of risk posed by effluent and the availability of pollution control technology—largely influence these officials’ views about whether guidelines should be revised.
Details about our survey and our data analysis follow. We designed our survey to ask respondents both (1) whether they thought EPA should revise effluent guidelines for certain industrial categories and (2) whether they thought the major factors that EPA considers when revising effluent guidelines were present for these industrial categories in their state. We reviewed EPA’s 2002 draft Strategy for National Clean Water Industrial Regulations and identified the four key factors that the agency uses to determine whether effluent guidelines should be revised. These factors include (1) whether the effluent from a particular industrial category poses a significant risk to human health or the environment; (2) whether technology is available to substantially reduce the risk; (3) whether industry could adopt the technology without experiencing financial difficulty; and (4) whether other factors are present, such as whether current effluent guidelines for that industrial category are difficult to administer and whether revised guidelines could promote innovative regulatory approaches. We summarized these factors, using the exact language from EPA’s guidance wherever possible, and wrote survey questions that were simple enough to yield valid responses. We determined that the fourth factor was too complicated to be expressed as a single survey question, and we divided it into two simpler questions. By designing the questionnaire in this way, we sought to increase the reliability of our survey data in two ways: First, asking respondents to assess each of the factors that EPA considers for revision before providing their views about whether EPA should revise effluent guidelines focused their attention on providing an informed opinion. Second, by obtaining data on both the decision-making factors and the need for effluent guideline revisions, we were able to conduct a statistical analysis to identify how these factors appear to influence states’ views about the need for guideline revisions. Our survey was divided into three sections. In the first section, we asked states to respond to a series of questions about each of the five industrial categories that release the greatest amount of toxic effluent in their state. We originally considered surveying states about each of the 58 industrial categories regulated by effluent limitation guidelines. During initial interviews with state officials, however, we determined that this approach would be burdensome and impractical. Therefore, we used data on pollutant discharges from EPA’s Toxics Release Inventory and discharge monitoring reports to select the five industries that discharged the greatest amount of toxic effluent in each state in 2010. For each of these five industrial categories, we asked states six questions, the first five of which pertain to EPA’s decision-making factors and the last of which pertains to the need for revised effluent guidelines. The six questions we asked about each industry are as follows: 1. Are the existing effluent guidelines for this industry sufficient on their own—that is, without additional water quality-based effluent limits—to protect your state from significant risks to human health or the environment? 2. Is there a technology, process change, or pollution prevention action that is available to this industry that would substantially reduce any risks that remain after the state applies existing effluent limits?
3. Do you think this industry can afford to implement this risk-reducing technology, process change, or pollution prevention action without experiencing financial difficulty? 4. Are the current effluent guidelines for this industry difficult to understand, implement, monitor, or enforce? 5. Do you think the current effluent guidelines for this industry could be revised to promote innovative approaches, such as water quality trading or multimedia benefits? 6. Given your responses to the previous questions, do you think EPA should revise the current effluent guidelines for this industry? (In the online version of the questionnaire, we customized the survey questions by inserting the name of each of the specific industries for each state.) In addition to asking about the top five industrial categories in each state, we asked states about two other sets of industrial categories. First, we asked state officials to list up to three other categories that were not among the top five in their state but for which they thought the effluent guidelines should be revised. Second, we asked these officials to list up to three categories that are not regulated by effluent guidelines but for which they think EPA should consider developing guidelines. To be confident that our questions would yield reliable data, we conducted four pretests with state officials. During these pretests, we sought to determine whether the questions were clear, could be reliably answered, and imposed a reasonable burden on respondents. We administered our survey to the directors of the water quality programs in the 46 states that are authorized to implement NPDES. These state officials are largely responsible for issuing permits to industrial facilities and for incorporating effluent guidelines into those permits. They have regular, firsthand experience with the guidelines, and their experience may supplement EPA’s information on effluent. We determined that these officials were therefore sufficiently knowledgeable to answer our survey questions. We obtained a list of these officials and their contact information from EPA and verified this list through Internet searches and phone calls with state officials. We identified the primary contact for each state but asked these individuals to consult with others in their office to determine the most accurate answer for each survey question. We implemented our survey as a web-based questionnaire. We notified the state water quality permit directors in February 2012 of our intent to conduct the survey and requested their participation. We instructed the states on how to access the web-based survey on March 2, 2012. We sent three e-mail reminders and telephoned states that had not responded before we closed the survey in April. We received responses from 31 of the 46 states, for an overall response rate of 67 percent. The survey data are based on responses from 42 individuals in these 31 states. Because we surveyed state officials only about the industrial categories that discharge the greatest amount of toxic effluent in their state, and because several states did not respond to our survey, the results of our analysis are not generalizable to all industrial categories in all states. To determine the extent to which state officials think that effluent guidelines should be revised, we analyzed the univariate frequencies of responses to our six primary survey questions.
We aggregated the survey responses to create industry-by-state cases, such that each case represented the views of a particular state about the guidelines for a particular industrial category in that state. The completed survey questionnaires from 31 states led to 155 possible industry-by-state cases. Because not all states responded to all of the survey questions, however, we had at most 123 valid cases for analysis, depending upon the survey question. A summary of the responses to these questions appears in table 5. These tabulations indicate that a substantial number of cases exist for which states thought that EPA should revise effluent guidelines and also for which they perceived that one or more of EPA’s decision-making factors were present. In 51 percent (63 of 123 cases), state officials said that EPA should revise the effluent guidelines for the corresponding industry. With regard to whether the key decision-making factors were present, state officials reported that effluent posed a significant risk in 57 percent of cases, that technology was available in 31 percent of cases, that the guidelines were difficult to administer in 24 percent of cases, and that revised guidelines could promote innovative approaches in 36 percent of cases. We had far fewer responses to our question about whether industry could adopt technology without experiencing financial difficulty because that question was applicable only if the respondent said such technology was available. Among these cases, state officials reported that the technology would not cause financial hardship to the industry in 82 percent of cases (31 of 38 cases). We repeated this analysis after removing the 29 cases representing the three industrial categories whose effluent guidelines are in revision, leaving at most 96 cases for analysis, depending upon the question. Of the remaining cases, state officials said that EPA should revise the effluent guidelines for a substantial percentage of them; they also said that key decision-making factors were present in a substantial percentage of cases. For example, in 46 percent of these cases, state officials said that EPA should revise the effluent guidelines for the corresponding industry. We compared state officials’ views about whether effluent guidelines should be revised with their views of each of the factors that EPA uses when considering guideline revisions. For three of the four factors, our results show that when state officials perceived the factor to be present, they were significantly more likely to think that EPA should revise the effluent guidelines for the corresponding industrial category. (We had too few cases with valid responses to the survey question about cost to determine whether that factor was significantly associated with views about guideline revisions.) The risk posed by effluent and the availability of technology were the strongest predictors of states’ views about the need for guideline revisions. In particular, we found the following: When state officials perceived effluent from a particular industrial category to pose a significant risk, they were 3.8 times more likely to think that EPA should revise the guidelines for that category than when they did not perceive the effluent to pose a significant risk.
Specifically, among the cases in which state officials perceived effluent to pose a significant risk, they thought the effluent guidelines should be revised 75 percent of the time (52 of 69 cases), compared with 20 percent of the time (10 of 51 cases) when they thought the effluent did not pose a significant risk. When state officials perceived technology to be available to substantially reduce the risk for a particular industrial category, they were 4.3 times more likely to think that EPA should revise the guidelines for that category than when they did not perceive technology to be available. Specifically, among the cases in which these officials perceived technology to be available, they thought EPA should revise the effluent guidelines 84 percent of the time (32 of 38 cases), compared with 20 percent (10 of 51 cases) when they thought that technology was not available. When state officials thought that other factors were present for a particular industrial category, they were 2.3 times more likely to think that EPA should revise the guidelines than when they did not think these factors were present. “Other factors” means that either the current guidelines were difficult to understand, implement, monitor, or enforce or that revised guidelines could promote innovative approaches. Specifically, when state officials thought that such other factors were present, they thought that EPA should revise its effluent guidelines 70 percent of the time (43 of 61 cases), compared with 30 percent of the time (18 of 60 cases) when they thought these factors were not present. Table 6 presents the complete results of these bivariate comparisons. We excluded one of the factors from the discussion above—namely, whether the industry could afford to implement the technology, process change, or pollution prevention action—because the responses to this question applied only to the subset of cases for which such a technology, change, or action was available, only 33 of which provided a yes or no response. In 87 percent of those cases in which the technology was perceived to be affordable (27 of 31 cases), state officials said that EPA should revise its guidelines for the corresponding industry. We repeated this analysis after removing the 29 cases representing the two industrial categories whose effluent guidelines EPA is already revising. We found that, even after removing these cases, the same three factors retained a significant relationship with state officials’ views about whether effluent guidelines should be revised. This result indicates that these key decision-making factors appear to influence state officials’ views even for industrial categories whose guidelines EPA is not already revising.
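The relative likelihoods reported above follow directly from the tabulated counts. As a check, the following minimal sketch reproduces the 3.8, 4.3, and 2.3 figures from the counts given in the text; it adds nothing beyond that arithmetic.

```python
# Reproduces the bivariate relative likelihoods above from the counts
# reported in the text (and summarized in table 6).

def relative_likelihood(revise_present, n_present, revise_absent, n_absent):
    """Ratio of the 'should revise' rate when a factor is perceived present
    to the rate when it is perceived absent."""
    return (revise_present / n_present) / (revise_absent / n_absent)

# Risk perceived significant: 52 of 69 said revise; not significant: 10 of 51.
print(round(relative_likelihood(52, 69, 10, 51), 1))  # 3.8
# Technology perceived available: 32 of 38; not available: 10 of 51.
print(round(relative_likelihood(32, 38, 10, 51), 1))  # 4.3
# Other factors present: 43 of 61; not present: 18 of 60.
print(round(relative_likelihood(43, 61, 18, 60), 1))  # 2.3
```

To understand how the various decision-making factors interact to influence states’ views about the need for revised effluent guidelines, we used the data from our survey to conduct decision-tree analysis. We developed the decision tree by splitting the data into smaller and smaller subgroups according to whether state officials perceived each of the factors to be present for a particular industrial category. Beginning with the first factor, risk, we divided the cases into subgroups, depending upon whether state officials perceived the effluent from the particular industry to pose a significant risk to human health or the environment.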
For each of these subgroups, we tabulated the number of cases in which state officials said the effluent guidelines should be revised, compared with the number of cases in which they said the guidelines should not be revised. We then split these subgroups again, according to whether state officials thought that technology was available to substantially reduce the risk. This split resulted in further subgroups. We continued splitting the data into smaller and smaller subgroups by next assessing state officials’ views of the cost of technology and finally assessing their views on the presence of other factors. At each step, we stopped splitting the data if (1) the original group had fewer than 10 cases; (2) the resulting subgroups did not differ significantly in terms of the percentages of respondents who said that EPA should revise the guidelines; or (3) the resulting subgroups tended to support the same conclusion as to whether EPA should revise the guidelines. We examined the cases terminating in each of the branches and found that the overall decision tree was based on a broad variety of industries and states. The resulting decision tree, which is shown in figure 5, has four splits and six branches. The decision tree illustrates how the key decision-making factors collectively predict states’ views about whether EPA should revise effluent guidelines, and it corroborates the reliability of our survey data. Overall, when the risk of effluent was perceived to be significant and technology was perceived to be available, state officials overwhelmingly thought the corresponding effluent guidelines should be revised. Even when technology was not perceived to be available, many states still thought the guidelines should be revised if they thought that other factors were present. In particular, in three scenarios, corresponding to three branches of the decision tree, state officials generally said that effluent guidelines should be revised: When state officials thought that effluent from an industrial category posed a significant risk to human health or the environment and when they thought technology was available to substantially reduce that risk, they generally said that EPA should revise the effluent guidelines. In such instances, they thought that EPA should revise the effluent guidelines 83 percent of the time (in 30 of 36 cases). This scenario is illustrated by the far left branch of the decision tree. When state officials thought that effluent from an industrial category posed a significant risk, they generally thought that EPA should revise the effluent guidelines even when they perceived that technology was not available—as long as they perceived other factors to be present. In such instances, they thought that EPA should revise its effluent guidelines 83 percent of the time (5 of 6 cases). This scenario is illustrated by the second-to-left branch of the decision tree. When state officials thought that effluent from an industrial category posed a significant risk, they generally thought that EPA should revise the effluent guidelines even when they did not know if technology was available—as long as they perceived other factors to be present. In such instances, these officials thought EPA should revise its effluent guidelines 100 percent of the time (11 of 11 cases). This scenario is illustrated by the branch of the decision tree in the third column from the right.
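The stopping rules above can be made concrete. The following is a minimal sketch of the check applied at each candidate split; the Fisher exact test is an assumption standing in for the significance test, which the report does not name, and the subgroup counts in the example are hypothetical.

```python
# Minimal sketch of the three stopping rules described above. Counts are
# (said revise, said do not revise) for each candidate subgroup; the Fisher
# exact test is an assumed stand-in for the report's unspecified
# significance check.
from scipy.stats import fisher_exact

def stop_splitting(parent_cases, left, right, alpha=0.05):
    """Return True if the data should not be split further at this node."""
    if parent_cases < 10:              # (1) group too small to split
        return True
    _, p_value = fisher_exact([list(left), list(right)])
    if p_value >= alpha:               # (2) subgroups do not differ significantly
        return True
    if (left[0] > left[1]) == (right[0] > right[1]):
        return True                    # (3) both subgroups support the same conclusion
    return False

# Hypothetical split: 30 of 36 'revise' in one subgroup versus 5 of 11
# in the other; the subgroups differ, so splitting continues.
print(stop_splitting(47, (30, 6), (5, 6)))  # False
```

By contrast, in two scenarios, state officials thought EPA should not revise the guidelines.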
In the primary scenario, officials did not perceive the effluent to pose a significant risk, although officials also thought that guidelines should not be revised when the risk was significant but neither technology nor other factors were present. In particular, our decision tree identified the following two scenarios: When state officials did not think the effluent from a particular industrial category posed a significant risk to human health or the environment, they generally thought that EPA should not revise the corresponding effluent guidelines. In these instances, state officials thought that EPA should not revise the guidelines 80 percent of the time (41 of 51 cases). This scenario is illustrated by the branch of the decision tree on the far right. When state officials thought the effluent from a particular industrial category posed a significant risk but that technology was not available and other factors were not present, they generally said that EPA should not revise the effluent guidelines for that industry. In such instances, state officials thought that EPA should not revise the guidelines 100 percent of the time (5 of 5 cases). This scenario is illustrated by the branch of the decision tree in the third column from the left. Corresponding to this decision tree, we further examined the data to identify the specific industrial categories whose guidelines presented the strongest evidence of needing revision. Because the significance of risk and the presence of technology are the two primary decision-making factors, we selected the 30 cases for which states said these two factors were present and for which they said effluent guidelines should be revised. These cases fall into the far left branch of the decision tree in figure 5. These 30 cases represent 14 industrial categories: canned and preserved seafood processing; cement manufacturing; coal mining; fertilizer manufacturing; meat and poultry products; metal finishing; metal molding and casting; oil and gas extraction; ore mining and dressing; petroleum refining; pulp, paper, and paperboard; steam electric power generation; sugar processing; and timber products processing. We added industries that state officials cited in the second section of our survey, in which we asked them to identify industries that were not among the top five dischargers in their state. This addition lengthened the list by 22 cases, representing 7 additional industrial categories: centralized waste treatment, dairy products processing, electrical and electronic components, electroplating, grain mills manufacturing, landfills, and pharmaceutical manufacturing. In total, therefore, we identified 52 cases representing 21 industrial categories for which state officials thought effluent guidelines should be revised. Of these 52 cases, 39 represent industrial categories whose guidelines EPA is not already revising. Beginning in the mid-1970s, EPA has promulgated effluent guidelines for 58 industrial categories. EPA has also revised the guidelines for most of those industries, although many have not been revised in recent years. As described elsewhere in this report, EPA uses a screening process to determine which categories may warrant further review and possible revision.
According to our analysis, since EPA began using its current screening process in 2003, more than half the industrial categories with effluent guidelines did not advance beyond the screening phase in any year from 2003 to 2010 because, during a given 2-year screening cycle, the relative toxicity of their pollutant discharges did not put them among the top 95 percent of discharge hazard. Table 7 provides further information on the industrial categories, including the year their effluent guidelines were first promulgated, the year the guidelines were most recently revised, and the year(s) in 2004 through 2010 when their hazard ranking scores came within the top 95 percent. In addition to the individual named above, Susan Iott (Assistant Director), Elizabeth Beardsley, Mark Braza, Ross Campbell, Ellen W. Chu, Heather Dowey, Catherine M. Hurley, Paul Kazemersky, Kelly Rubin, Carol Hernstadt Shulman, and Kiki Theodoropoulos made significant contributions to this report. Wyatt R. Hundrup, Michael L. Krafve, Armetha Liles, and Jeffrey R. Rueckhaus also made important contributions to this report.
Under the Clean Water Act, EPA has made significant progress in reducing wastewater pollution from industrial facilities. EPA currently regulates 58 industrial categories, such as petroleum refining, fertilizer manufacturing, and coal mining, with technology-based regulations called effluent guidelines. Such guidelines are applied in permits to limit the pollutants that facilities may discharge. The Clean Water Act also calls for EPA to revise the guidelines when appropriate. EPA has done so, for example, to reflect advances in treatment technology or changes in industries. GAO was asked to examine (1) the process EPA follows to screen and review industrial categories potentially needing new or revised guidelines and the results of that process from 2003 through 2010; (2) limitations to this process, if any, that could hinder EPA’s effectiveness in advancing the goals of the Clean Water Act; and (3) EPA’s actions to address any such limitations. GAO analyzed the results of EPA’s screening and review process from 2003 through 2010, surveyed state officials, and interviewed EPA officials and experts to obtain their views on EPA’s process and its results. The Environmental Protection Agency (EPA) uses a two-phase process to identify industrial categories potentially needing new or revised effluent guidelines to help reduce their pollutant discharges. EPA’s 2002 draft Strategy for National Clean Water Industrial Regulations was the foundation for EPA’s process. In the first, or “screening,” phase, EPA uses data from two EPA databases to rank industrial categories according to the total toxicity of their wastewater. Using this ranking, public comments, and other considerations, EPA has identified relatively few industrial categories posing the highest hazard for the next, or “further review,” phase. In this further review phase, EPA evaluates the categories to identify those that are appropriate for new or revised guidelines because treatment technologies are available to reduce pollutant discharges. Since 2003, EPA has regularly screened the 58 categories for which it has issued effluent guidelines, as well as some potential new industrial categories, and it has identified 12 categories for its further review phase. Of these 12 categories, EPA selected 3 for updated or new effluent guidelines. EPA chose not to set new guidelines for the others. Limitations in EPA’s screening phase may have led it to overlook some industrial categories that warrant further review for new or revised effluent guidelines. Specifically, EPA has relied on limited hazard data that may have affected its ranking of industrial categories. Further, during its screening phase, EPA has not considered the availability of advanced treatment technologies for most industrial categories. Although its 2002 draft strategy recognized the importance of technology data, EPA has stated that such data were too difficult to obtain during the screening phase and, instead, considers them for the few categories that reach further review. Officials responsible for state water quality programs and experts on industrial discharges, however, identified categories they believe EPA should examine for new or updated guidelines to reflect changes in their industrial processes and treatment technology capabilities. 
According to some experts, consideration of treatment technologies is especially important for older effluent guidelines because changes are more likely to have occurred in either the industrial categories or the treatment technologies, making it possible that new, more advanced treatment technologies are available. Recognizing the limitations of its hazard data and overall screening approach, EPA has begun revising its process but has not assessed other possible sources of information it could use to improve the screening phase. In 2012, EPA supplemented the hazard data used in screening with four new data sources. EPA is also developing a regulation that, through electronic reporting, will increase the completeness and accuracy of its hazard data. In 2011, EPA also began to obtain recent treatment technology literature. According to EPA, the agency will expand on this work in 2013. Nonetheless, EPA has not thoroughly examined other usable sources of information on treatment technology, nor has it reassessed the role such information should take in its screening process. Without a more thorough and integrated screening approach that both uses improved hazard data and considers information on treatment technology, EPA cannot be certain that the effluent guidelines program reflects advances in the treatment technologies used to reduce pollutants in wastewater. GAO is making recommendations to improve the effectiveness of EPA’s effluent guidelines program by expanding its screening phase to better assess hazards and advances in treatment technology. EPA agreed with two recommendations in principle and said it is making progress on them, but said that one is not workable given current agency resources. GAO believes improvements can be made.
The current schedule for the full implementation of the Full Service program has been delayed by almost 10 months, and key functionality that was originally intended to be delivered in the program has been deferred indefinitely. In addition, the life-cycle cost estimate that program officials prepared does not capture all the costs associated with the acquisition and implementation of the program. As a result, program officials lack an accurate total cost estimate. Moreover, the first deployed release is experiencing performance issues. While the Full Service program has implemented initial acquisition management activities, it does not have the full set of capabilities needed to fully manage the acquisition. A key cause of the program’s acquisition management weaknesses in the areas of project planning, risk management, and product integration is that USPS organizational policies do not set forth sufficient requirements for establishing effective practices in these areas. Weaknesses exist in the program monitoring and control area because the program management contract creates a conflict of interest by requiring that the contractor assess the quality of its own deliverables and oversee the program’s schedule, issues, and risks. While organizational policies exist for requirements development and management, weaknesses exist in this area, in part, because USPS decided not to follow its organizational policies for system acquisition and instead followed a truncated program management approach in an effort to deliver the system in a compressed time frame. Without these processes in place, USPS increases the risk that this project will continue to encounter problems in meeting its performance, schedule, and cost objectives. Given that release 2 is expected to be implemented by the end of November 2009 and decisions about future releases need to be made, having the key elements of a sound acquisition management capability in place will be crucial to the program’s success in meeting its goal. To ensure that USPS adequately manages the acquisition of the Intelligent Mail® Full Service program, we recommend that the Postmaster General take seven actions. Specifically, we recommend that the Postmaster General direct the Chief Information Officer and Senior Vice President of Intelligent Mail and Address Quality to: Develop a comprehensive cost estimate to include both government and contractor costs over the program’s full life cycle, from the inception of the program through design, development, deployment, and operation and maintenance to retirement. Complete an overall program plan for the entire Full Service program, including an overview of the program’s scope of all releases, deliverables and functionality within these releases, plans to phase out the approximately 30 barcodes currently in use, assumptions and constraints, roles and responsibilities, staffing and training plans, and the strategy for maintaining the plan. Reconsider the current contract arrangement to avoid having the contractor evaluate its own performance. Define the core set of requirements for the entire program and use them as a basis for developing a reliable cost estimate.
Develop a risk management process that enables the program officials to develop an adequate risk management plan that fully addresses the scope of their risk management efforts; ensures that a comprehensive list of risks and complete mitigation plans are identified and tracked; and includes milestones, mitigating actions, thresholds, and resources for significant risks. Develop and maintain a systems integration plan for release 2 and beyond. We are also recommending that the Postmaster General direct USPS’s Chief Information Officer to include in USPS’s Technical Solution Life Cycle policy guidance for programs to develop (1) complete program plans that define overall budget and schedule, key deliverables and milestones, assumptions and constraints, description and assignment of roles and responsibilities, staffing and training plans, and an approach for maintaining these plans; (2) specific requirements for programs to establish a robust risk management process that identifies potential problems before they occur, such as requiring programs to develop a risk management plan; and (3) a system integration plan that includes all systems to be integrated with the system, roles and responsibilities for all relevant participants, the sequence and schedule for every integration step, and how integration problems are to be documented and resolved. We obtained written comments on a draft of this report from the USPS Senior Vice President of Intelligent Mail and Address Quality, which are reprinted in appendix II. USPS agreed with three of our recommendations, disagreed with three, and did not comment on one. Specifically, USPS agreed that (1) the current contract arrangement should be reconsidered to avoid having the contractor evaluate its own performance, (2) a comprehensive risk management process should be developed, and (3) a system integration plan for release 2 and beyond should be developed and maintained. The agency further stated that it has and will continue to enable these capabilities. In previously commenting on our briefing slides, USPS disagreed with aspects of our findings on these issues or provided additional information that we incorporated as appropriate. USPS’s subsequent written comments on this draft report, which recognize the need to implement these recommendations, provide greater assurance of program success. The Senior Vice President stated that the disagreement with three of our recommendations may be the result of our use of the 2003 Intelligent Mail® strategy document to measure the program’s performance. However, we reviewed and analyzed many documents to form the basis of our findings and conclusions on the program’s performance. The 2003 strategy was just one of the many documents we used, as it represented the original baseline and justification for the program. To report on the progress of the program since its inception, we measured the program against original plans, while acknowledging that USPS has made multiple modifications to the implementation dates. USPS also stated that we relied on the 2003 strategy to determine delays to the program. This comment is inaccurate. As we stated in this report, we relied on the January 2008 Intelligent Mail® Advance Notice of Proposed Rulemaking in the Federal Register to identify the originally proposed implementation time frame. We also reported the subsequent revisions that USPS made to the program’s implementation schedule.
The Senior Vice President disagreed with our recommendation to develop a comprehensive cost estimate. He stated that such an activity would consume a significant amount of funding, time, and resources while providing little or no value. However, as stated in the GAO Cost Estimating and Assessment Guide, developing a realistic cost estimate is essential because it enables program officials to evaluate resource requirements at key decision points, develop performance measurement baselines, and establish effective resource allocations. Additionally, cost estimates should be comprehensive and should include both government and contractor costs throughout the program's full life cycle, from the inception of the program through design, development, deployment, and operation and maintenance to retirement. While we acknowledge that preparing a realistic cost estimate requires effort, we believe that the benefits of having an accurate total cost estimate for the entire program, which would allow USPS to make better informed resource allocation decisions, clearly merit its completion. With regard to our recommendation to complete an overall program plan for the entire Full Service program, the Senior Vice President stated that, while USPS plans to start updating the Intelligent Mail® strategy on an annual basis, it plans to remain focused on its clearly defined actions for the current releases, rather than planning for future releases. However, industry best practices specifically state that a project plan is the essential document used to manage and control the execution of a project. In order for the project plan to be an effective and useful document, it should consider all phases of the project's life cycle. Program officials should also ensure that all plans affecting the project are consistent with the overall project plan, so that all releases and associated functionality fit together seamlessly. As we state in our report, without such a plan describing the full scope of the program, including how many releases are envisioned, USPS lacks an overarching approach for incorporating future releases into the program. Additionally, without this information, USPS may not be able to ensure that the program is accomplishing its complete set of goals within the specified cost and schedule objectives. Regarding our recommendation to define a core set of requirements for the entire program and use them to develop reliable cost estimates, the Senior Vice President stated that the program must remain dynamic and that any attempt to define the entire program and its associated cost is a waste of funding and resources. We are not recommending that USPS define all detailed system-level requirements at the outset of the program; rather, we are recommending that USPS develop a road map of the program's high-level requirements. As we state in this report, without a core set of high-level requirements, it will be difficult for USPS to focus appropriately on the next release and to hold itself accountable for delivering a system that meets USPS's and mailers' needs. Defining these requirements is especially important given the functionality that is being deferred in the first two releases. USPS program officials did not state whether they agreed or disagreed with our recommendation that USPS include in its Technical Solution Life Cycle policy guidance for programs to develop (1) complete program plans, (2) specific requirements for programs to establish a robust risk management process, and (3) a system integration plan.
As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from its issue date. At that time, we will send copies of this report to interested congressional committees and other interested parties. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Should you or your offices have questions on matters discussed in this report, please contact me at (202) 512-9286 or at pownerd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.

The U.S. Postal Service (USPS) relies heavily on information technology (IT) to support its mission of providing prompt, reliable, and efficient mail service to all areas of the country. Starting in May 2009, as part of a program referred to as the Intelligent Mail® program, USPS began to encourage commercial mailers to use new standardized barcodes, which are intended to make it easier to track the mail and provide information about its progress as it flows through the mail stream. According to USPS officials, this information is important to their efforts to improve efficiency and reduce costs. The Intelligent Mail® program encompasses numerous programs, including a major initiative known as the Full Service program. This initiative is intended to build a system that improves the visibility into end-to-end mail processing operations through the use of new barcodes, gather more comprehensive and detailed service performance information and measure it against established performance standards, and create efficiencies by streamlining and automating certain aspects of the process USPS uses to verify mail from commercial mailers. Commercial mailers include businesses, organizations, and other parties that send and rely on mail to maintain contact with their customers. Commercial mailers also encompass mail preparers, including printers and businesses that send or receive mail on behalf of a third party. As of 2008, these mailers accounted for 86 percent of all mail processed by USPS. USPS is planning to implement the program in multiple software releases—thus far it has committed to implementing two releases: the first one was deployed in May 2009 and the other is planned to be implemented by November 2009. Program officials have recently stated that they also plan to have future releases; however, they have not made any commitments to do so or obtained funding approval. USPS says the Full Service program is one of the most complex programs it has undertaken—it will involve the integration of approximately 30 different systems and is intended to benefit both commercial mailers and USPS. As agreed, our objectives were to determine (1) the current status and plans for the Intelligent Mail® Full Service program and (2) whether the Postal Service has the capabilities in place to successfully acquire and manage the Intelligent Mail® Full Service program. For our first objective, we analyzed system documentation, including plans, status reports, meeting minutes, cost estimates, schedule estimates, reports on program management reviews, test plans, and other acquisition-related documents. We also compared the cost and schedule estimates to actual cost and schedule information. In addition, we compared contract deliverables to the actual milestones and deliverables achieved.
Finally, we interviewed Postal Service officials and reviewed our previous reports and Inspector General reports to determine the program's status and plans.

For our second objective, we identified widely recognized industry standards for good acquisition and development practices, including processes defined in the Software Engineering Institute's (SEI) Capability Maturity Model® Integration for Acquisition (CMMI-ACQ) and for Development (CMMI-DEV). From this guidance we identified the following process areas as being the most relevant to our review: (1) project planning, (2) project monitoring and control, (3) requirements development and management, (4) risk management, and (5) product integration. We compared USPS documentation, such as organizational policies, contract information, status reports, meeting minutes, requirements for the program, process documentation, and risk information, to SEI's guidance on sound IT systems acquisition and management practices in the five process areas. We also interviewed Postal Service officials about these key process areas to help us understand whether the agency has the capabilities in place to successfully acquire and manage the program. Carnegie Mellon Software Engineering Institute, Capability Maturity Model® Integration for Acquisition (CMMI-ACQ), Version 1.2 (November 2007), and Carnegie Mellon Software Engineering Institute, Capability Maturity Model® Integration for Development (CMMI-DEV), Version 1.2 (August 2006).

We conducted this performance audit from February 2009 to August 2009 at United States Postal Service headquarters in Washington, D.C., in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Although USPS officials originally intended to deliver the entire Full Service program by January 2009, they currently plan to deliver the program in multiple releases—the first of two planned releases of the Full Service program was deployed on May 18, 2009. The second release is expected to be implemented by the end of November 2009. Additionally, key functionality that was originally intended to be delivered in these two planned releases has been deferred, including automating aspects of the mail acceptance process. Program officials have recently stated that they plan to have future releases to incorporate the deferred functionality; however, they have not made any commitments to do so or obtained funding approval. Program officials estimate that the life cycle cost of the program is $116.4 million, of which $65.9 million had been spent as of June 3, 2009. However, the life cycle cost estimate that program officials prepared does not capture all the costs associated with the acquisition and implementation of the program, such as costs to integrate approximately 30 systems with the Full Service program. Moreover, the first deployed release is currently experiencing operational problems (e.g., applying inconsistent charges for certain mail pieces). Therefore, program officials are developing patches to resolve these issues and to implement system enhancements.
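The comprehensiveness principle behind this cost finding can be illustrated with a minimal sketch. The phase names and dollar figures below are hypothetical placeholders rather than USPS data; the point is simply that a life-cycle estimate rolls up both government and contractor costs across every phase, so omitting line items such as systems integration or future releases understates the total.

```python
# Minimal sketch of a comprehensive life-cycle cost rollup.
# All phase names and figures (in $ millions) are hypothetical, not USPS data.
life_cycle_costs = {
    "design":                 {"government": 2.0, "contractor": 5.0},
    "development":            {"government": 4.0, "contractor": 30.0},
    "deployment":             {"government": 3.0, "contractor": 10.0},
    "systems_integration":    {"government": 1.5, "contractor": 8.0},
    "future_releases":        {"government": 2.5, "contractor": 12.0},
    "operations_maintenance": {"government": 6.0, "contractor": 15.0},
    "retirement":             {"government": 0.5, "contractor": 1.0},
}

def total_cost(costs, exclude=()):
    """Sum government plus contractor cost over all phases not excluded."""
    return sum(c["government"] + c["contractor"]
               for phase, c in costs.items() if phase not in exclude)

comprehensive = total_cost(life_cycle_costs)
understated = total_cost(life_cycle_costs,
                         exclude=("systems_integration", "future_releases"))
print(f"comprehensive: ${comprehensive:.1f}M; omitting two items: ${understated:.1f}M")
```

Keeping the rollup parameterized this way also makes it inexpensive to refresh the estimate when scope changes, which is the updating discipline the cost guide calls for.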
While the Full Service program has implemented initial acquisition management activities, it does not have the full set of capabilities it needs to fully manage the acquisition. A key cause of the program's immature management approach in the areas of project planning, risk management, and product integration is that USPS organizational policies do not set forth sufficient requirements for establishing effective practices in these areas. Weaknesses exist in the program monitoring and control area because the program management contract itself creates a conflict of interest by requiring the contractor to assess the quality of its own deliverables and oversee the program's schedule, issues, and risks. Although USPS officials have told us they use strategies to avoid potential conflicts, such as developing a separate program management team from the system development team, they have not provided us with evidence that they have a formal mitigation plan in place to address the conflict that exists. While organizational policies exist for requirements development and management, weaknesses exist in this area, in part, because USPS decided not to follow its organizational policies for system acquisition and instead followed a truncated program management approach in an effort to deliver the system in a compressed time frame. Until USPS fully implements these key acquisition management processes, the Intelligent Mail® Full Service program is at risk of continuing to encounter problems in meeting its performance, schedule, and cost objectives.

We are recommending that the Postmaster General direct USPS's Chief Information Officer and Senior Vice President of Intelligent Mail and Address Quality to take the following actions to improve the management of its acquisition capabilities: (1) develop a comprehensive cost estimate; (2) complete a program plan for the entire Full Service program; (3) reconsider the current contract arrangement to avoid having the contractor review its own performance; (4) define requirements for the entire program and use them as a basis for developing a reliable cost estimate; (5) develop a robust risk management process; and (6) develop and maintain a systems integration plan for release 2 and beyond. We are also recommending that the Postmaster General direct USPS's Chief Information Officer to include in USPS's Technical Solution Life Cycle policy guidance for programs to develop (1) complete program plans; (2) specific requirements for programs to establish a robust risk management process; and (3) a system integration plan.

In e-mail comments on a draft of these briefing slides, the Senior Vice President of Intelligent Mail and Address Quality did not state whether he agreed or disagreed with our recommendation to develop a comprehensive cost estimate for the program. He disagreed with our findings and conclusions regarding the program's acquisition management capabilities. Specifically, with regard to project planning, the Senior Vice President stated that federal best practices do not reflect the dynamic environment that drives the scope, requirements, and schedule of future releases of the Full Service program. We disagree, as industry best practices call for an overarching plan that describes a program's full scope in order to ensure that it is accomplishing its goals. He further disagreed with our findings on requirements development and management, stating that there is not enough funding to define requirements for the full program.
Without a core set of high-level requirements, however, USPS will face challenges in focusing on the next release and holding itself accountable for meeting users' needs. Regarding risk management, the Senior Vice President noted that the program has a risk manager and a risk management process in place. However, we found that several key risks were not included in risk reports, complete mitigation plans were not developed, and the program did not have a comprehensive risk management plan. Regarding product integration, the Senior Vice President stated that the program provided us with a systems integration plan. However, the documents provided defined testing strategies, not a comprehensive system integration plan. Finally, USPS's program officials did not state whether they agreed or disagreed with our recommendations that USPS modify its current policy to provide guidance for USPS programs to, among other things, develop complete program plans.

Since the 1970s, the use of barcodes and automation has improved efficiency in USPS mail processing operations. Commercial mailers have been encouraged to use barcodes through pricing incentives, allowing USPS to cut costs and increase efficiency in its mail processing operations. In particular, automated mail processing machines can sort mail with barcodes containing delivery information faster than manual sorting. Over the past three decades, the number and type of barcodes increased along with technology changes, and in 2003 USPS estimated that there were more than 30 different barcodes in use. Two of the most commonly used barcodes are the following:

POSTNET, which contains delivery information that enables automated sorting of the mail to the carrier's route level. Mailers receive a postage discount when they print POSTNET barcodes on their mail.

PLANET, which contains identification numbers to enable tracking mail in USPS's mail processing system but contains less information than the new Intelligent Mail® barcode.

According to USPS, the use and maintenance of numerous barcodes have become increasingly burdensome. For example, whenever USPS adds or upgrades its mail processing equipment, it has to ensure that the equipment remains compatible with each of the relevant barcodes. Additionally, printing numerous barcodes on mail pieces clutters the pieces, thus reducing the "real estate" that mailers have to advertise or print other information on their envelopes (see fig. 1). In 2003, USPS initiated the Intelligent Mail® program, which is intended to use information-rich standardized barcodes to track mail and thus provide USPS and mailers with better and timelier information about the mail. Figure 2 illustrates the components of the Intelligent Mail® barcode; a sketch of these component fields follows the list of expected benefits below. USPS has identified several ways it expects the implementation of Intelligent Mail® to benefit USPS and mailers:

Improve efficiency, reduce costs, and improve timeliness of delivery. USPS says it will be able to use information from Intelligent Mail® to improve its processing system. Also, USPS plans to use Intelligent Mail® to create efficiencies by streamlining and automating the process it uses to accept mail from commercial mailers, which is currently time- and labor-intensive.

Reduce the amount of mail that must be forwarded, which can involve extra handling by USPS and delays in delivery. USPS will provide free notification when intended recipients have moved and filed a change of address with USPS. Mailers previously had to pay for this service.
This feature, known as the Address Correction Service, could help USPS meet its goal of reducing the amount of mail that cannot be delivered.

Provide better service to mailers. Through Intelligent Mail®, USPS plans to provide better service to mailers through real-time feedback. Also, since mail will be uniquely identified, USPS anticipates having the ability to isolate and give special handling to a specific mail piece, which creates an opportunity for USPS to offer mailers new products and services.

Financial incentives. USPS is also offering a financial incentive to mailers. Specifically, those who adopt Full Service Intelligent Mail® will receive a postage discount, in addition to other worksharing discounts.

Service performance measurement capability. Intelligent Mail® is expected to allow USPS to gather more comprehensive and detailed service performance information and measure it against established performance standards, which is intended to help keep USPS accountable to its stakeholders. This feature was also intended to enable USPS to meet requirements in the Postal Accountability and Enhancement Act of 2006.

This concept, known as worksharing, generally involves mailers qualifying for reduced postage rates by performing certain activities, such as preparing and barcoding mail so it can be sorted by USPS automated equipment.

The Full Service Program Directors—the Senior Vice President of Intelligent Mail and Address Quality and the Chief Information Officer—head the program. Their responsibilities include reviewing deliverables and conducting governance meetings that focus on the status of the program, issues, and risks. The Program Management Office activities are performed by a contractor, Accenture, which reports directly to the Program Directors. The contractor's responsibilities include program status reporting, communications management, scope and release management, issue and risk management, and selective quality deliverable audits. The Marketing Technology and Channel Management group is the business organization responsible for business mail acceptance process re-engineering and field deployment activities, including field preparedness, developing test plans, newsletters, and training and awareness. The Sales and Marketing Portfolio is the IT organization responsible for development, integration, and systems deployment activities. Under a firm-fixed-price contract, the agency and the contractor agree on a price, and the contractor assumes all responsibility for all costs and the resulting profit or loss.

In May 2009, we issued a report that described the Intelligent Mail® program and stated that key management actions were not taken, such as developing a comprehensive strategic plan; preparing information about the program's costs, including its anticipated savings or cost reductions; and establishing a risk mitigation plan. In addition, we highlighted commercial mailers' concerns about the implementation of the program. Specifically, the mailers stated that USPS communication efforts were insufficient; USPS and mailers may not be ready for implementation given USPS's short time frame to simultaneously design, develop, test, and implement the Intelligent Mail® program; and the program's pricing and benefits may not be sufficient to encourage mailers to participate. As such, we recommended that USPS develop a comprehensive Intelligent Mail® strategic plan, as well as develop a plan that addresses how USPS will mitigate program-level risks.
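To make the "information-rich" character of the new barcode concrete, the sketch below splits an Intelligent Mail® barcode's digit string into the component fields that figure 2 illustrates. The field layout follows the published USPS Intelligent Mail barcode specification: a 20-digit tracking code (barcode identifier, service type, mailer ID, and serial number) plus an optional routing ZIP of 0, 5, 9, or 11 digits. The sample digits and the assumption of a 6-digit mailer ID are hypothetical.

```python
# Minimal sketch: split an Intelligent Mail(R) barcode digit string into its
# fields. Assumes a 6-digit mailer ID (the spec also allows 9 digits); the
# sample digits are hypothetical, not a real mail piece.
def parse_imb_digits(digits: str) -> dict:
    if not digits.isdigit() or len(digits) not in (20, 25, 29, 31):
        raise ValueError("expected 20 tracking digits plus a 0-, 5-, 9-, or 11-digit routing code")
    return {
        "barcode_id":   digits[0:2],    # presort identification
        "service_type": digits[2:5],    # class of mail and services requested
        "mailer_id":    digits[5:11],   # identifies the mail owner or preparer
        "serial":       digits[11:20],  # uniquely identifies the mail piece
        "routing_zip":  digits[20:],    # ZIP, ZIP+4, or ZIP+4 plus delivery point
    }

sample = "00" + "040" + "123456" + "000000789" + "205430001"
print(parse_imb_digits(sample))
```

Because each piece carries a unique mailer ID and serial number, USPS systems can associate scan events with an individual mail piece as it moves through the mail stream, which is what enables the tracking and service performance measurement described above.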
In its response to the recommendations in our May 2009 report, USPS agreed to develop a comprehensive Intelligent Mail® strategy, including all planned phases and the associated functions and systems, program goals, and measures of success, and a plan that addresses how it will mitigate risks.

Objective 1: Full Service Program Status and Plans

Program officials have completed key activities for implementing the Full Service program, including deploying the first release on May 18, 2009, and beginning activities for release 2. However, while USPS officials originally intended to deliver the entire Full Service program by January 2009, they currently plan to deliver the program in multiple releases—the first of two planned releases of the program was deployed on May 18, 2009. The second release is expected to be implemented by the end of November 2009. Therefore, full implementation of the program has been delayed by almost 10 months. Additionally, key functionality that was originally intended to be delivered in these two planned releases has been deferred, including automating aspects of the mail acceptance process. Program officials have recently stated that they plan to have future releases to incorporate the deferred functionality; however, they have not made any commitments to do so. In addition, the life cycle cost estimate that program officials prepared does not capture all the costs associated with the acquisition and implementation of the program, such as costs to integrate several USPS systems. Moreover, the first deployed release is currently experiencing operational problems, thus requiring program officials to develop patches to resolve the issues.

Current Implementation Status

Although USPS officials originally intended to deliver the entire Full Service program by January 2009, they currently plan to deliver the program in multiple releases. The current implementation schedule for the Full Service program is as follows:

May 11, 2009. USPS internally implemented the first release, which enables certain functions, such as the Address Correction Service and electronic documentation.

May 18, 2009, and beyond. Mailers began testing their systems' ability to access and electronically transmit documentation to USPS's system.

November 29, 2009. USPS plans to deploy the second release of the Full Service program and expects to begin offering price incentives to mailers that utilize the Full Service program.

No commitments have been made by program officials for future releases. Although program officials announced on August 12, 2009, that they are aiming to develop a third release by March 12, 2010, they indicated that they have not obtained funding to implement this release or future releases.

By May 2011, the use of POSTNET and PLANET barcodes will be phased out, and mailers seeking reduced automation-postage rates will be required to use Intelligent Mail® barcodes. Based on the current revised schedule, table 3 summarizes the key functionality by release. As of June 3, 2009, $65.9 million had been spent on the acquisition and implementation of the first release and the development of requirements for the second release.

Implementation Has Been Delayed

The current implementation schedule represents a significant delay from the program's original and revised implementation dates.
Specifically:

In January 2008, USPS published the Intelligent Mail® Advance Notice of Proposed Rulemaking in the Federal Register, which originally proposed implementing all functionality of the program by January 2009.

In April 2008, USPS issued a revised Intelligent Mail® Federal Register notice, which pushed back the implementation date to May 2009. This was due to several concerns raised by mailers, such as the compressed time period in which USPS planned to simultaneously design, test, and implement the program. Mailers were also concerned that they had not been provided with finalized IT requirements.

Subsequently, in November 2008, program officials planned to incrementally deliver functionality in multiple releases and delay full implementation further. Specifically, they committed to delivering three releases—the first in May 2009, a second in September 2009, and a third in November 2009. According to USPS officials, the schedule was revised to accommodate the implementation of the Intelligent Mail® barcode and to allow mailers more time to make appropriate modifications to their systems and processes. However, in January 2009, program officials again revised the schedule: they planned to deliver select functionality in a release in May 2009 and additional functionality in another release in November 2009. According to program officials, they decided that delivering three releases in such a short time frame was too ambitious. See figure 4 for a summary of the Full Service program's original and revised implementation schedule as of June 2009.

Functionality Has Been Deferred

In addition to implementation delays, key functionality that was originally intended to be delivered in the two planned releases of the Full Service program has been deferred. Specifically, despite the fact that automating several aspects of the business mail verification process was one of the key justifications for the Full Service program, this function is not going to be delivered in the two releases. Additionally, USPS announced in August 2009 that enabling the ability to better measure and report USPS's service performance is no longer going to be delivered in the second release, as planned. According to USPS officials, service performance measurement functionality has been deferred because it was taking longer than planned to implement, and they wanted to be able to deliver other promised functionality in release 2 by November 29, 2009. This is especially problematic because USPS is legislatively required to develop a system to better measure and report its service performance to the Postal Regulatory Commission, and the Full Service program was the vehicle USPS planned to use to meet that mandate. The Postal Accountability and Enhancement Act required USPS to develop a system to measure and report service performance to the Postal Regulatory Commission. While program officials announced on August 12, 2009, that they are aiming to develop a third release by March 12, 2010, program officials indicated that they have not obtained funding to implement this release or future releases.
According to program officials, one of the primary reasons for not moving forward with such decisions is that funding for future releases may not be available as a result of USPS's current financial situation. We recently reported that, amid challenging economic conditions and a changing business environment, USPS is facing a deteriorating financial situation in which it does not expect to cover its expenses and financial obligations in fiscal years 2009 and 2010. As a result, we added the financial condition of USPS to our high-risk list of federal areas in need of transformation. The Postmaster General testified in January 2009 that USPS was facing a potential net loss of $6 billion or more for fiscal year 2009. He noted that USPS anticipated continued deterioration due to the economic slowdown, as the financial, credit, and housing sectors are among its key business drivers.

Program Life Cycle Cost Was Not Completely Defined

According to industry best practices, programs must maintain current and well-documented cost estimates, and these estimates must encompass the full life cycle of the program. Specifically, as stated in the GAO Cost Estimating and Assessment Guide, cost estimates should be comprehensive in that they should include both government and contractor costs throughout the program's full life cycle, from the inception of the program through design, development, deployment, and operation and maintenance to retirement. According to the business case, the life cycle cost estimate of the Full Service program is $116.4 million. This includes the costs to develop the custom software capabilities, the necessary hardware to support the software capabilities, and operating and maintenance costs. However, the life cycle cost estimate excludes key costs associated with the acquisition and implementation of the Full Service program. For example, the estimate does not include costs related to the integration of the systems or the cost of future releases beyond release 2.

Additionally, the life cycle cost has not been updated to reflect the significant changes that have been made to the program. While a revised business case reflecting the modified schedule and scope for the program was approved in June 2009, program officials did not update the life cycle cost of the program. According to program officials, they did not include all of the costs associated with integrating the system because they did not regard the costs to be significant enough to include. Additionally, officials stated that they did not include the costs of the future releases because they are uncertain whether they are going to be able to deliver those releases.

Moreover, the first deployed release is experiencing operational problems. For example:

certain mailers and mail pieces were being incorrectly charged;

the system was not recognizing certain zone values;

the system was preventing mailers from putting Intelligent Mail® barcodes on certain mail pieces;

certain reporting functions were not working as intended;

the system was not allowing mailers to enter certain information on individual mail pieces;

after logging into an account and clicking through the available links, mailers were receiving an error message when they tried to return to the homepage;
the system is incorrectly creating a finalized postage statement for mailers who canceled or updated a job;

the system does not accept certain updates after postage statements are final and ready;

inconsistent charges are being applied to a secured group of mail pieces; and

certain electronic documentation is not being transmitted through the system.

Objective 2: Adequacy of Acquisition Management Capabilities

USPS Lacks Key Management Capabilities Essential to Effectively Acquire and Manage the Full Service Program

USPS is in the process of implementing key acquisition management controls, but it has yet to implement the full set of controls essential for acquiring and managing the Full Service program in a disciplined and rigorous manner. Specifically, it has not implemented certain process controls in the areas of project planning, project monitoring and control, requirements development and management, risk management, and product integration. The primary cause of the program's immature management approach in the areas of project planning, risk management, and product integration is that USPS organizational policies do not set forth sufficient requirements for establishing effective practices in these areas. While organizational policies exist for requirements development and management, weaknesses exist in this area in part because USPS decided not to follow its organizational policies for system acquisition and instead took a truncated program management approach in an effort to deliver the system in a compressed time frame. Until USPS implements the full set of controls essential to effectively managing the program, it increases the risk that the Full Service program will continue to encounter problems in meeting its performance, schedule, and cost objectives.

As we have previously reported, to effectively manage major IT programs, organizations must use sound acquisition and management processes to minimize risks and thereby maximize chances for success. Such processes have been identified by leading organizations such as the Software Engineering Institute and the Chief Information Officers Council, and in our prior work analyzing best practices in industry and government. In particular, the CMMI-ACQ and CMMI-DEV have defined a suite of key acquisition process control areas that are necessary to manage system acquisitions in a rigorous and disciplined fashion. These process areas include project planning, project monitoring and control, requirements development and management, risk management, and product integration. See, for example, GAO, Information Technology: Management Improvements Needed on Immigration and Customs Enforcement's Infrastructure Modernization Program, GAO-05-805 (Washington, D.C.: September 7, 2005), and Census Bureau: Important Activities for Improving Management of Key 2010 Decennial Acquisitions Remain to be Done, GAO-06-444T (Washington, D.C.: March 1, 2006). Carnegie Mellon Software Engineering Institute, Capability Maturity Model® Integration for Acquisition (CMMI-ACQ), Version 1.2 (November 2007). Carnegie Mellon Software Engineering Institute, Capability Maturity Model® Integration for Development (CMMI-DEV), Version 1.2 (August 2006).
Project Planning

Effective project planning involves establishing and maintaining plans that define project scope and activities, including the overall budget and schedule, key deliverables and milestones for those deliverables, assumptions and constraints, a description and assignment of roles and responsibilities, staffing and training plans, and an approach for maintaining these plans. It also involves obtaining stakeholder commitment to the project plan. Full Service program officials have established a program office for the Full Service program; hired a contractor to carry out program management activities, including tracking schedule, issues, and risks for the program; identified the tasks and organizational roles and responsibilities for release 1; developed a program plan for release 2 that identifies key deliverables and milestones for these deliverables; and developed a business case for the Full Service program. Carnegie Mellon Software Engineering Institute, Capability Maturity Model® Integration for Acquisition (CMMI-ACQ), Version 1.2 (November 2007), and Institute of Electrical and Electronics Engineers, IEEE Standard for Software Life Cycle Processes—Project Management Plans, IEEE Standard 1058-1998 (December 8, 1998).

While officials have developed a business case for the Full Service program and a program plan for release 2, there still is no comprehensive program plan that includes the full scope of the program, including how many releases are planned and the specific functions and systems to be implemented in each release; plans to standardize and consolidate the over 30 barcodes currently being used; assumptions and constraints about the program; a description and assignment of roles and responsibilities; staffing and training plans; and the strategy for maintaining the program plan. In addition, program officials have not yet obtained commitment from internal and external stakeholders on the program plan for release 2. Such a plan is often used to form a baseline for the program and to obtain buy-in from stakeholders. A key reason that these activities have not been completed is that USPS's policy that outlines the steps programs should follow when developing, acquiring, enhancing, and/or maintaining IT systems—referred to as the Technical Solution Life Cycle policy—does not require that officials develop a comprehensive plan for their programs.

Until program officials develop a complete program plan that supports the Intelligent Mail® Strategic Plan, which we previously recommended, and includes the details on the full scope of the Full Service program, USPS may not be able to ensure that the program is moving in the right direction. Without this assurance, it is more likely to encounter unanticipated changes in direction, which could affect cost, schedule, and deliverables.

Project Monitoring and Control

Project monitoring and control involves providing oversight of the contractor's and the project office's performance in order to allow appropriate corrective actions if actual performance deviates significantly from the plan. Key activities in tracking both the contractor's and the project office's performance include communicating status, taking corrective actions, and determining progress.
In addition, organizations should have IT investment management boards comprised of key executives to regularly track the progress of major systems acquisitions. These boards should be able to adequately oversee the project's progress toward cost and schedule milestones and its risks. The board should also employ early warning systems that enable it to take corrective actions at the first sign of cost, schedule, and performance slippages. With regard to project monitoring and control activities, program officials track the milestones and dependencies of the program and review the activities, status, and results of the process with higher level program management, USPS senior executives representing both IT and business units, and the contractor.

However, the main contractor performing the development and implementation functions of the Full Service program is also the contractor carrying out USPS's program management activities. Specifically, according to the Program Management Office contract, the contractor is responsible for assessing the quality of program deliverables; overseeing the program's schedule, issues, and risks; assessing the project plan's critical path, which is necessary for examining the effects of any activity slipping along this path; developing project status materials for USPS program officials, including biweekly detailed status reports to the program manager and weekly status reports to IT management and project teams; and participating in weekly deliverable reviews from other USPS internal and external suppliers, including documenting all meeting minutes and action items. The critical path is the longest duration path through the sequenced list of key program activities.

The roles that the contractor plays as both a manager of the Full Service program and as a supplier of products for the program create a conflict of interest because of the risk that the contractor will not evaluate its own products in a completely objective manner. USPS program officials stated that they do not think that this is an issue because the company's program management staff work on a separate team from the system development staff and the two teams do not interact; however, this arrangement still requires the contractor to assess the quality of its own deliverables and oversee the program's schedule, issues, and risks. USPS officials have not provided us with evidence that they have a formal mitigation plan in place to address the conflict that exists under the contract. While we recognize that USPS is not required to comply with the Federal Acquisition Regulation (FAR), these regulations can be instructive since they are used by federal agencies for acquiring goods and services. According to the FAR, an underlying principle is that, in order to avoid a conflict of interest, a contractor should not have conflicting roles that might bias its judgment. ("Acquisition" defined.) The FAR generally applies to acquisitions made with appropriated funds used to obtain supplies or services for the federal government.
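The critical path referenced in the contract responsibilities above, defined in the note as the longest duration path through the sequenced list of key program activities, can be computed directly from a task network. The sketch below uses hypothetical tasks and durations, not the program's actual schedule; it shows why slippage on that path delays the whole program while tasks with slack do not.

```python
# Minimal sketch: compute the critical path (longest-duration path) through a
# task dependency graph. Tasks, durations (weeks), and dependencies are hypothetical.
from functools import lru_cache

tasks = {  # task: (duration_in_weeks, prerequisites)
    "requirements": (4, []),
    "design":       (6, ["requirements"]),
    "build":        (10, ["design"]),
    "integrate":    (5, ["build"]),
    "train_users":  (3, ["design"]),   # has slack: finishes well before "integrate"
    "deploy":       (2, ["integrate", "train_users"]),
}

@lru_cache(maxsize=None)
def earliest_finish(task: str) -> int:
    """Length of the longest prerequisite chain ending at `task`, inclusive."""
    duration, prereqs = tasks[task]
    return duration + max((earliest_finish(p) for p in prereqs), default=0)

end = max(tasks, key=earliest_finish)
path = [end]
while tasks[path[-1]][1]:  # walk back through the prerequisite that binds each start
    path.append(max(tasks[path[-1]][1], key=earliest_finish))
print(" -> ".join(reversed(path)), f"({earliest_finish(end)} weeks)")
```

In this example a one-week slip in "build" pushes the finish from 27 to 28 weeks, while "train_users" could slip by as much as 12 weeks without moving the end date, which is why the contract assigns someone to watch this path.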
Until program officials reconsider having the same contractor that is developing and implementing the system be responsible for helping USPS oversee the program, USPS will increase its risk of unexpected cost increases, schedule delays, and performance shortfalls.

Requirements Development and Management

Requirements development involves eliciting, analyzing, and validating customer and stakeholder needs and expectations. Requirements management involves establishing an agreed-upon set of requirements, ensuring traceability between operational and product requirements, and managing any changes to the requirements in collaboration with stakeholders. With regard to requirements development and management, program officials have defined the initial business requirements, dated August 16, 2007, for release 1; defined requirements for release 2; and developed a change control process for managing changes to the requirements.

While USPS has defined the requirements for release 1 and release 2, it has not finalized or validated the core set of requirements for the Full Service program, which would include high-level requirements that USPS plans to deliver in future releases. These core requirements would need to be further defined as the program begins to focus on the next release. Program officials stated that they did not fully define the program's requirements because the requirements are still evolving.

Risk Management

An effective risk management process identifies potential problems before they occur, so that risk-handling activities may be planned and invoked as needed across the life of the product and project in order to mitigate adverse impacts on achieving objectives. Key activities include assigning resources, identifying and analyzing risks, and developing risk mitigation plans and milestones for key mitigation deliverables. Additionally, a risk management strategy addresses the specific actions and management approach used to perform and control the risk management program. It also includes identifying and involving relevant stakeholders in the risk management process. With regard to risk management, program officials have assigned responsibility for managing the risks and identified and analyzed selected risks associated with schedule, performance, and testing. Examples of the selected program-level risks include the following:

limited mailer adaptation and adoption can affect future Full Service releases,

program success measurements are not defined,

parallel program activities have caused resource constraints,

components of scope have not been planned for release 2, and

mailers require significantly more support than estimated to assist them with the implementation of the Intelligent Mail® barcode.

However, they did not adequately identify all risks. For example:

While the USPS contracting officer indicated in the program management contract's price negotiation memorandum that having the same company perform program management activities as well as development and implementation activities for the Full Service program is a major concern, program officials have not identified this as a risk or established a complete mitigation strategy.
Program officials stated that they include the list of risks the system development contractor identifies as part of the program management office's risk reports. However, as of July 16, 2009, there was no evidence in the reports that contractor risks were being identified or mitigated.

While program officials are concurrently conducting activities for release 2 and unplanned post-deployment efforts for release 1, they have not identified potential schedule delays in release 2 as a risk or established a mitigation plan.

Moreover, as we have previously reported, USPS lacks a risk mitigation plan, and therefore we recommended that USPS develop a plan that addresses how it will mitigate program-level risks. Although USPS agreed with this recommendation, it has not yet developed complete risk mitigation plans. During this review we found that while USPS recently finalized a risk management plan for release 2, it is not comprehensive and does not fully address the scope of the risk management effort, including discussing techniques for risk mitigation, defining adequate risk sources and categories, and identifying and involving relevant stakeholders to promote commitment and understanding of the process. The program's weaknesses in the risk management area are partly due to the fact that USPS's Technical Solution Life Cycle policy does not set forth sufficient requirements regarding risk management.

Product Integration

The scope of this process area is to achieve complete product integration through progressive assembly of product components. A critical aspect of this area is the management of internal and external interfaces of the products and product components to ensure compatibility among the interfaces. Attention should be paid to interface management throughout the project. In addition, a systems integration plan should be developed to identify all systems to be integrated, define roles and responsibilities of all relevant participants, establish the sequence and schedule for every integration step, and describe how integration problems are to be documented and resolved. With regard to product integration, program officials have identified approximately 30 systems that will need to be integrated in the Full Service program. However, USPS officials have stated that the number of systems that need to be integrated could change, and they are not yet aware of which specific systems will need to be integrated in release 2 or in possible future releases. In addition, while program officials stated they have several program documents, such as testing strategies, they do not have a system integration plan, which is intended to support the deployment strategy and describe to key stakeholders in each integration step what needs to be done to effectively integrate the various systems. The program office also lacks documentation of the process associated with updating and maintaining the integration of the systems. Carnegie Mellon Software Engineering Institute, Capability Maturity Model® Integration for Development (CMMI-DEV), Version 1.2 (August 2006). Department of Transportation, Federal Highway Administration, Systems Engineering Guidebook for ITS, Version 2, January 2, 2007.
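A system integration plan of the kind the guidance describes is, at bottom, a structured artifact with four required elements: the systems to be integrated, roles and responsibilities, the sequence and schedule of integration steps, and an issue-resolution path. The sketch below shows one minimal way to capture and sanity-check those elements; the system names, windows, and criteria are hypothetical placeholders, not the program's actual interfaces.

```python
# Minimal sketch of the elements a system integration plan should capture.
# All system names, windows, and criteria are hypothetical placeholders.
integration_plan = {
    "systems_to_integrate": ["acceptance-system", "postage-statement-system",
                             "address-quality-db"],
    "roles": {
        "integration_lead": "program office",
        "interface_owners": {
            "acceptance-system": "IT portfolio",
            "address-quality-db": "business unit",
        },
    },
    "sequence": [  # ordered steps, each with a schedule window and exit criteria
        {"step": 1, "interface": "acceptance <-> postage statements",
         "window": "2009-10", "exit": "end-to-end test passed"},
        {"step": 2, "interface": "acceptance <-> address DB",
         "window": "2009-11", "exit": "data reconciliation clean"},
    ],
    "issue_resolution": "log in shared tracker; unresolved items escalate to the program directors",
}

def missing_elements(plan: dict) -> list:
    """Return the required plan elements that are absent or empty."""
    required = ("systems_to_integrate", "roles", "sequence", "issue_resolution")
    return [key for key in required if not plan.get(key)]

# A document containing only testing strategies would fail this check.
assert missing_elements(integration_plan) == []
```

Keeping the plan in a checkable structure like this also gives the program office a natural place to record the updating and maintenance process it currently lacks.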
Part of the reason that these activities have not been completed is that USPS's Technical Solution Life Cycle policy does not set forth sufficient requirements regarding product integration and does not require that programs develop a system integration plan and associated documentation for updating and maintaining the integration of the systems. Until program officials develop these key product integration documents, USPS will be limited in its ability to ensure that the product is integrated, functioning properly, and delivered on time and within budget to the users.

The current schedule for the full implementation of the Full Service program has been delayed by almost 10 months, and key functionality that was originally intended to be delivered in the program has been deferred indefinitely. In addition, the life cycle cost estimate that program officials prepared does not capture all the costs associated with the acquisition and implementation of the program. As a result, program officials lack an accurate total cost estimate. Moreover, the first deployed release is experiencing performance issues. While organizational policies exist for requirements development and management, weaknesses exist in this area, in part, because USPS decided not to follow its organizational policies for system acquisition and instead followed a truncated program management approach in an effort to deliver the system in a compressed time frame. Without these processes in place, USPS increases the risk that this project will continue to encounter problems in meeting its performance, schedule, and cost objectives. Given that release 2 is expected to be implemented by the end of November 2009 and decisions about future releases need to be made, having the key elements of a sound acquisition management capability in place will be crucial to the program's success in meeting its goals.

To ensure that USPS adequately manages the acquisition of the Intelligent Mail® Full Service program, we recommend that the Postmaster General direct the Chief Information Officer and Senior Vice President of Intelligent Mail and Address Quality to take the following six actions:

Develop a comprehensive cost estimate that includes both government and contractor costs over the program's full life cycle, from the inception of the program through design, development, deployment, and operation and maintenance to retirement.

Complete an overall program plan for the entire Full Service program, including an overview of the program's scope across all releases; the deliverables and functionality within these releases; plans to phase out the approximately 30 barcodes currently being utilized; assumptions and constraints; roles and responsibilities; staffing and training plans; and the strategy for maintaining the plan.

Reconsider the current contract arrangement to avoid having the contractor evaluate its own performance.

Define the core set of requirements for the entire program and use them as a basis for developing a reliable cost estimate.

Develop a risk management process that enables program officials to develop an adequate risk management plan that fully addresses the scope of their risk management efforts; ensures that a comprehensive list of risks and complete mitigation plans are identified and tracked; and includes milestones, mitigating actions, thresholds, and resources for significant risks.
Develop and maintain a systems integration plan for release 2 and beyond.

We are also recommending that the Postmaster General direct USPS's Chief Information Officer to include in USPS's Technical Solution Life Cycle policy guidance for programs to develop (1) complete program plans that define the overall budget and schedule, key deliverables and milestones, assumptions and constraints, a description and assignment of roles and responsibilities, staffing and training plans, and an approach for maintaining these plans; (2) specific requirements for programs to establish a robust risk management process that identifies potential problems before they occur, such as requiring programs to develop a risk management plan; and (3) a system integration plan that includes all systems to be integrated with the system, roles and responsibilities for all relevant participants, the sequence and schedule for every integration step, and how integration problems are to be documented and resolved.

Agency Comments and Our Evaluation

We received comments via e-mail from the Senior Vice President of Intelligent Mail and Address Quality on a draft of these briefing slides. He did not state whether he agreed or disagreed with our recommendation to develop a comprehensive cost estimate. The Senior Vice President disagreed with our findings and conclusions regarding the program's acquisition management capabilities. With regard to project planning, he stated he disagreed with the static approach suggested in our briefing. He stated that the scope, requirements, and schedule of future releases are driven by a dynamic environment both internal and external to USPS. However, as we state in the briefing, industry best practices show that it is important to have an overarching program plan that describes the full scope of the program, including how many releases are planned, in order to ensure that the program is accomplishing its goals within the specified cost and schedule objectives. The Senior Vice President revised the agency's position on GAO's seven recommendations in written comments dated October 15, 2009; see appendix II.

Regarding project monitoring and control activities, the Senior Vice President provided additional clarifying information on executive-level oversight, which we have incorporated into the briefing as appropriate. He did not acknowledge that a conflict of interest exists. While the USPS contracting officer identified that having the same company perform program management activities as well as development and implementation activities for the Full Service program is a concern, the program office has not identified that this arrangement is a conflict of interest. In fact, the program office has not even identified this arrangement as a risk that it tracks in its risk tracking process. Despite this, program officials have explained actions they are taking to mitigate the potential risk of a conflict of interest, such as establishing separate teams for the program management staff and system development staff. However, they have not presented us with any evidence of a formal mitigation plan that is in place to address the actual conflict of interest that is introduced by the responsibilities specified in the program management office contract, which states that the contractor must assess the quality of deliverables and oversee the program's schedule, issues, and risks.
Unless USPS reconsiders the current contract arrangement to avoid having the contractor evaluate its own performance, there is an increased risk that the conflict of interest will negatively affect the program.

Regarding our findings on requirements development and management activities, the Senior Vice President stated that he disagreed because there is not enough approved funding to define the requirements for the full program; USPS received funding for only a portion of the program. However, as we state in the briefing, without a core set of high-level requirements, it will be difficult for USPS to focus appropriately on the next release and to hold itself accountable for delivering a system that meets USPS's and mailers' needs. Defining these requirements is especially important given the functionality that is being deferred in the first two releases.

Regarding risk management activities, he stated that the program has a defined, active, cross-program process and a risk manager who is responsible for managing this process. While we acknowledge that the program has developed a tracking process, which includes assigning responsibility for managing and identifying risks, several key risks, such as the risk of potential schedule delays in release 2 as a result of conducting concurrent activities for release 2 and release 1, were not included by the program office in its risk reports, and complete mitigation plans were not developed. Additionally, the recently finalized risk management plan for release 2 is not comprehensive and does not fully address the scope of the risk management effort. Until USPS develops a strategy for ensuring a comprehensive list of risks that includes mitigation efforts, it increases the probability that unanticipated risks may occur that could have a critical impact on the program's cost, schedule, and performance.

The Senior Vice President disagreed with our finding on product integration activities. He stated that a systems integration plan for conducting product integration was provided to us. However, the documents provided to us included testing strategies and not a system integration plan, which is intended to support the deployment strategy and to describe to key stakeholders in each integration step what needs to be done to effectively integrate the various systems. Until these key product integration documents are developed, USPS will be limited in its ability to ensure that the product is integrated and functioning properly. Additionally, USPS program officials did not state whether they agreed or disagreed with our recommendation that USPS include in its Technical Solution Life Cycle policy guidance for programs to develop (1) complete program plans; (2) specific requirements for programs to establish a robust risk management process; and (3) a system integration plan.

In addition to the individual named above, Shannin G. O'Neill, Assistant Director; Neil Doherty; Rebecca E. Eyler; Mary D. Fike; Franklin Jackson; Lee McCracken; Niti Tandon; Christy A. Tyson; and Adam Vodraska made key contributions to this report.
In 2003, the United States Postal Service (USPS) initiated the Intelligent Mail program, which is intended to use information-rich standardized barcodes to track mail and thus provide USPS and mailers with better and more timely information. A major component of this program is the Full Service program, which, among other things, is intended to build a system that improves visibility into end-to-end mail processing operations through the use of new barcodes and to create efficiencies by streamlining and automating certain aspects of the process. GAO was asked to determine (1) the current status and plans for the Intelligent Mail Full Service program and (2) whether the Postal Service has capabilities in place to successfully acquire and manage the Intelligent Mail Full Service program. GAO obtained and analyzed USPS documentation, reviewed previous GAO reports, interviewed officials, and compared acquisition best practices with USPS's practices. Program officials have completed key activities for implementing the Intelligent Mail Full Service program, such as deploying the first phase of the program; however, the current schedule for the program has been delayed by almost 10 months. As a result, the second phase of the program is not expected to be implemented until the end of November 2009. In addition, key functions of the program that were originally intended to be delivered have been deferred. Moreover, the life-cycle cost estimate that program officials prepared does not capture all of the costs associated with the acquisition and implementation of the program. As a result, program officials lack an accurate total cost estimate. Finally, the first deployed phase is currently experiencing operational problems. While the Full Service program has taken steps to implement acquisition management activities, it does not have the full set of capabilities it needs to fully manage the acquisition. A key cause of the program's acquisition management weaknesses in the areas of project planning, risk management, and product integration is that USPS organizational policies do not set forth sufficient requirements for establishing effective practices in these areas. Weaknesses exist in the program monitoring and control area because the program management contract creates a conflict of interest by requiring that the contractor assess the quality of its own deliverables and oversee the program's schedule, issues, and risks. Without these management capabilities in place, USPS increases the risk that this program will continue to encounter problems in meeting its performance, schedule, and cost objectives.
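The risk management and integration weaknesses described above reduce, in practice, to two artifacts: a comprehensive list of identified risks with complete mitigation plans, and a systems integration plan covering every integration step. As a purely illustrative aid (the structure and field names below are hypothetical, not USPS's or GAO's format), a minimal risk-register entry of the kind the briefing found missing might look like this:

```python
from dataclasses import dataclass, field

@dataclass
class Risk:
    """One entry in a program risk register (illustrative fields only)."""
    risk_id: str
    description: str          # e.g., schedule delay from concurrent release work
    probability: str          # e.g., "low" / "medium" / "high"
    impact: str               # effect on cost, schedule, or performance
    owner: str                # who is responsible for managing the risk
    mitigation_steps: list[str] = field(default_factory=list)

    def has_complete_mitigation_plan(self) -> bool:
        # Mirrors the finding above: a tracked risk is incomplete until
        # documented mitigation steps accompany it.
        return bool(self.mitigation_steps)

# Example: the release 2 concurrency risk that was absent from risk reports.
r = Risk(
    risk_id="R-001",
    description="Schedule delay in release 2 from concurrent release 1/2 work",
    probability="high",
    impact="schedule",
    owner="program risk manager",
)
print(r.has_complete_mitigation_plan())  # False until steps are documented
```

The has_complete_mitigation_plan check captures the distinction GAO draws: identifying a risk is not sufficient unless a documented mitigation plan accompanies it.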
OI work years devoted to investigating alien smuggling along the southwest border increased from about 190 work years in fiscal year 2005 to about 197 work years in fiscal year 2009, an overall increase of 4 percent, with hundreds of arrests, indictments, and convictions resulting. The overall number of work years decreased from about 190 work years in fiscal year 2005 to 174 in fiscal year 2008, but increased by 23 work years from fiscal year 2008 to fiscal year 2009, primarily due to an increase in one office. The percentage of time OI investigators spend on alien smuggling investigations, versus other investigative areas, such as drugs, has remained steady during this time period at 16–17 percent. DHS's Human Capital Accountability Plan states that DHS is committed to ensuring that human capital resources are aligned with mission accomplishments and are deployed efficiently and effectively. However, in some cases OI investigators are conducting immigration-related activities that are not consistent with OI's primary mission of conducting criminal investigations. Officials from two of the four SAC offices we visited told us that OI has been tasked to respond to calls from state and local law enforcement agencies to transport and process apprehended aliens who may be subject to removal, which diverts OI resources from conducting alien smuggling and other investigations. For example, according to officials in one SAC office, the equivalent of two full-time investigators each week spent their time responding to non-investigation-related calls during fiscal year 2009. In 2006, in the Phoenix metropolitan area, ICE's DRO developed the Law Enforcement Agency Response (LEAR) program, in which DRO took over responsibility from OI for transporting and processing apprehended aliens. From October 1, 2008, to May 24, 2009, DRO processed 3,776 aliens whom OI would otherwise have had to process, thus enabling OI agents to spend more time on investigations. DRO headquarters officials stated that they have discussed expanding the LEAR program beyond Phoenix but have yet to conduct an evaluation to identify the best locations for expanding the program. By studying the feasibility of expanding the LEAR program, and expanding the program if feasible, ICE would be in a better position to help ensure that its resources are more efficiently directed toward alien smuggling and other priority investigations. Therefore, in our May 2010 report, we recommended that ICE take such action. ICE concurred with our recommendation and stated that, as a first step in potentially expanding the program nationwide, DRO's Criminal Alien Division prepared and submitted a resource allocation plan proposal for its fiscal year 2012 budget. The value of OI alien smuggling asset seizures has decreased since fiscal year 2005, and two promising opportunities exist that could be applied to target and seize the monetary assets of smuggling organizations. According to OI data, the value of alien smuggling seizures nationwide increased from about $11.2 million in fiscal year 2005 to about $17.4 million in fiscal year 2007, but declined to $12.1 million in fiscal year 2008 and to about $7.6 million in fiscal year 2009. One opportunity to leverage additional seizure techniques involves civil asset forfeiture authority, which allows federal authorities to seize property used to facilitate a crime without first having to convict the property owner of a crime.
OI investigators indicated that lack of such authority makes it difficult to seize real estate involved in alien smuggling activity. In 2005, we recommended that the Attorney General, in collaboration with the Secretary of Homeland Security, consider submitting to Congress a legislative proposal, with appropriate justification, for amending the civil forfeiture authority for alien smuggling. Justice prepared such a proposal, and it has been incorporated into several larger bills addressing immigration enforcement or reform since 2005, but none of these bills had been enacted into law as of July 2010. According to Justice officials, the current administration has not yet taken a position on civil asset forfeiture authority for alien smuggling cases. We continue to believe it is important for Justice to seek the civil asset forfeiture authority it has identified as necessary to seize property used to facilitate alien smuggling. Thus, in our May 2010 report, we recommended that the Attorney General assess whether amending the civil asset forfeiture authority remains necessary, and if so, develop and submit to Congress a legislative proposal. Justice concurred with this recommendation. A second opportunity involves assessing the financial investigative techniques used by an Arizona Attorney General task force. The task force seized millions of dollars and disrupted alien smuggling operations by following cash transactions flowing through money transmitters that serve as the primary method of payment to those individuals responsible for smuggling aliens. By analyzing money transmitter transaction data, task force investigators identified suspected alien smugglers and those money transmitter businesses that were complicit in laundering alien smuggling proceeds. ICE officials stated that a fuller examination of Arizona's financial investigative techniques and their potential to be used at the federal level would be useful. An overall assessment of whether and how these techniques may be applied in the context of disrupting alien smuggling could help ensure that ICE is not missing opportunities to take additional actions and leverage resources to support the common goal of countering alien smuggling. In our May 2010 report, we recommended that ICE conduct an assessment of the Arizona Attorney General's financial investigations strategy to identify any promising investigative techniques for federal use. ICE concurred with our recommendation and stated that, during the week of April 12, 2010, ICE participated in the inaugural meeting of the Southwest Border Anti-Money Laundering Alliance, a body consisting of federal, state, and local law enforcement agencies along the southwest border. The main purpose of the meeting was to synchronize enforcement priorities and investigative techniques. However, while these are positive steps toward combating money laundering along the southwest border, it is not clear to what extent these actions will result in ICE evaluating the use of the Arizona Attorney General's financial investigative techniques.
Tracking the use of asset seizures in alien smuggling investigations as a performance measure could help OI monitor its progress toward its goal of denying smuggling organizations the profit from criminal acts. Thus, in our May 2010 report, we recommended that ICE develop performance measures for asset seizures related to alien smuggling investigations. ICE concurred with the recommendation and stated that it is in the process of assessing all of its performance measures and creating a performance plan. In addition, ICE operates the Mexican Interior Repatriation Program (MIRP), which removes aliens apprehended during the hot and dangerous summer months to the interior of Mexico to deter them from reentering the United States and to reduce loss of life. However, ICE does not know the effectiveness of MIRP at disrupting alien smuggling operations or saving lives because ICE lacks performance measures for the program. Thus, in our May 2010 report, we recommended that ICE develop performance measures for MIRP. ICE did not agree with this recommendation because it believed that performance measures for this program would not be appropriate. According to ICE, any attempt to implement performance measures for MIRP that emphasize the number of Mexican nationals returned or the cost-effectiveness of the program would shift the focus away from the program's original lifesaving intent and diminish, and possibly endanger, cooperation with the government of Mexico. However, we believe that performance measures would be consistent with the Memorandum of Understanding (MOU) signed by the United States and Mexico related to MIRP, which calls for evaluation by appropriate officials. Thus, we believe that measuring MIRP's program performance would be consistent with the MOU's intent. CBP operates several programs that address alien smuggling, such as the Operation Against Smugglers Initiative on Safety and Security (OASISS) program, in which suspected alien smugglers apprehended in the United States are prosecuted by Mexican authorities. In addition, CBP's Operation Streamline prosecutes aliens for illegally entering the United States in order to deter them from reentering. A lack of accurate and consistent performance data has limited CBP's ability to evaluate its alien smuggling-related programs. CBP is in preliminary discussions to establish systematic program evaluations but has not established a plan, with time frames, for their completion. Standard practices in project management for defining, designing, and executing programs include developing a program plan to establish an order for executing specific projects needed to obtain defined results within a specified time frame. Developing a plan with time frames could help CBP ensure that the necessary mechanisms are put in place so that it can conduct the desired program evaluations. Therefore, in our May 2010 report, we recommended that the Commissioner of CBP establish a plan, including performance measures, with time frames, for evaluating CBP's alien smuggling-related enforcement programs. CBP concurred with our recommendation and stated that it is developing a plan that will include program mission statements, goals, objectives, and performance measures. CBP stated that it also has begun gathering data and holding workshops on developing performance measures for some of its programs. However, it is not clear to what extent these actions will include time frames for evaluating CBP's enforcement efforts.
This concludes my prepared testimony. I would be pleased to respond to any questions that members of the subcommittee may have. For further information regarding this testimony, please contact Richard M. Stana at (202) 512-8777 or stanar@gao.gov. In addition, contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals who made key contributions to this testimony are Assistant Director Michael P. Dino, Ben Atwater, Bintou Njie, and Katherine Davis. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
This testimony discusses federal efforts to address alien smuggling along the southwest border. Alien smuggling along the southwest border is an increasing threat to the security of the United States and Mexico, as well as to the safety of both law enforcement and smuggled aliens. One reason for this increased threat is the involvement of drug trafficking organizations in alien smuggling. According to the National Drug Intelligence Center's (NDIC) 2008 National Drug Threat Assessment, the southwest border region is the principal entry point for smuggled aliens from Mexico, Central America, and South America. Aliens from countries of special interest to the United States, such as Afghanistan, Iran, Iraq, and Pakistan (known as special-interest aliens), also illegally enter the United States through the region. According to the NDIC assessment, Mexican drug trafficking organizations have become increasingly involved in alien smuggling. These organizations collect fees from alien smuggling organizations for the use of specific smuggling routes, and available reporting indicates that some Mexican drug trafficking organizations specialize in smuggling special-interest aliens into the United States. As a result, these organizations now have alien smuggling as an additional source of funding to counter U.S. and Mexican government law enforcement efforts against them. Violence associated with alien smuggling has also increased in recent years, particularly in Arizona. According to the NDIC assessment, expanding border security initiatives and additional U.S. Border Patrol resources are likely obstructing regularly used smuggling routes and fueling this increase in violence, particularly violence directed at law enforcement officers. Alien smugglers and guides are more likely than in past years to use violence against U.S. law enforcement officers in order to smuggle groups of aliens across the southwest border. In July 2009, a Border Patrol agent on patrol was killed by aliens illegally crossing the border, the first shooting death of an agent in more than 10 years. Conflicts are also emerging among rival alien smuggling organizations. Assaults, kidnappings, and hostage situations attributed to this conflict are increasing, particularly in Tucson and Phoenix, Arizona. Communities across the country are also at risk because criminal aliens and gang members who pose public safety concerns are among those individuals illegally crossing the border. Within the Department of Homeland Security (DHS), U.S. Immigration and Customs Enforcement's (ICE) Office of Investigations (OI) is responsible for investigating alien smuggling. In addition, DHS's Customs and Border Protection (CBP) and ICE's Office of Detention and Removal Operations (DRO) have alien smuggling-related programs. This testimony is based on a May 2010 report we are releasing publicly today on alien smuggling along the southwest border.
As requested, like the report, this testimony will discuss the following key issues: (1) the amount of investigative effort OI has devoted to alien smuggling along the southwest border since fiscal year 2005 and an opportunity for ICE to use its investigative resources more effectively; (2) DHS progress in seizing assets related to alien smuggling since fiscal year 2005 and financial investigative techniques that could be applied along the southwest border to target and seize the monetary assets of smuggling organizations; and (3) the extent to which ICE/OI and CBP measure progress toward achieving alien smuggling-related program objectives. Our May 2010 report also provides a discussion of the extent to which ICE/OI and CBP have program objectives related to alien smuggling. We found the following: (1) OI work years devoted to investigating alien smuggling along the southwest border increased from about 190 work years in fiscal year 2005 to about 197 work years in fiscal year 2009, an overall increase of 4 percent, with hundreds of arrests, indictments, and convictions resulting. The overall number of work years decreased from about 190 work years in fiscal year 2005 to 174 in fiscal year 2008, but increased 23 work years from fiscal years 2008 to 2009 primarily due to an increase in one office. The percentage of time OI investigators spend on alien smuggling investigations, versus other investigative areas, such as drugs, has remained steady during this time period at 16-17 percent. (2) The value of OI alien smuggling asset seizures has decreased since fiscal year 2005, and two promising opportunities exist that could be applied to target and seize the monetary assets of smuggling organizations. According to OI data, the value of alien smuggling seizures nationwide increased from about $11.2 million in fiscal year 2005 to about $17.4 million in fiscal year 2007, but declined to $12.1 million in fiscal year 2008 and to about $7.6 million in fiscal year 2009. (3) OI and CBP have not fully evaluated progress toward achieving alien smuggling-related program objectives. Federal standards for internal control call for agencies to establish performance measures and indicators in order to evaluate the effectiveness of their efforts. One of the major objectives of OI's alien smuggling investigations is to seize smugglers' assets, but OI does not have performance measures for asset seizures related to alien smuggling cases. Tracking the use of asset seizures in alien smuggling investigations as a performance measure could help OI monitor its progress toward its goal of denying smuggling organizations the profit from criminal acts. Thus, in our May 2010 report, we recommended that ICE develop performance measures for asset seizures related to alien smuggling investigations. ICE concurred with the recommendation and stated that ICE is in the process of assessing all of its performance measures and creating a performance plan.
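The percentage figures cited in this statement follow from simple arithmetic on the rounded numbers reported above. A minimal sketch (for illustration only, using no data beyond the figures cited in this statement) reproduces them:

```python
# Rounded figures as cited in this statement (fiscal years).
work_years = {2005: 190, 2008: 174, 2009: 197}
seizures_millions = {2005: 11.2, 2007: 17.4, 2008: 12.1, 2009: 7.6}

# Overall change in OI work years, FY 2005 to FY 2009.
pct = (work_years[2009] - work_years[2005]) / work_years[2005] * 100
print(f"Work-year change: {pct:.1f}%")       # ~3.7%, i.e., "about 4 percent"
print(work_years[2009] - work_years[2008])   # the 23 work-year rebound

# Decline in seizure value from the FY 2007 peak to FY 2009.
drop = (seizures_millions[2007] - seizures_millions[2009]) / seizures_millions[2007] * 100
print(f"Seizure decline from peak: {drop:.0f}%")  # ~56%
```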
Air ambulances are an integral part of U.S. emergency medical systems, primarily transporting patients between hospitals, but also providing transport from accident scenes or for organs, medical supplies, and specialty medical teams. Air ambulances may be helicopters or fixed-wing aircraft. Helicopter air ambulances provide on-scene responses and much of the shorter-distance hospital-to-hospital transport, while fixed-wing aircraft are used mainly for longer facility-to-facility transport. (See fig. 1.) Helicopter air ambulances make up about 74 percent of the air ambulance fleet and, unlike fixed-wing aircraft, do not always operate under the direction of air traffic controllers. They also often operate in challenging conditions, flying, for example, at night during inclement weather and using makeshift landing zones at remote sites. My testimony today focuses on the safety of helicopter air ambulance operations. Air ambulance operations can take many different forms but are generally one of two business models—hospital-based or stand-alone. In a hospital-based model, a hospital typically provides the medical services and staff and contracts with an aviation services provider for pilots, mechanics, and aircraft. The aviation services provider also holds the FAA operating certificate. The hospital pays the operator for services supplied. In a stand-alone (independent or community-based) model, an independent operator sets up a base in a community and serves various facilities and localities. Typically, the operator holds the FAA operating certificate and either employs both the medical and flight crews or contracts with an aviation services provider for them. This stand-alone model carries more financial risk for the operator because revenues depend solely on payments for transporting patients. Some operators provide both hospital-based and stand-alone services and may have bases located over wide geographic areas. Regardless of the business model employed, most air ambulances—except government and military aircraft—must operate under rules specified in Part 135 of Title 14 of the Code of Federal Regulations when patients are on board and may operate under rules specified in Part 91 when patients are not present. As a result, different legs of air ambulance missions may be flown under different rules. However, some operators fly under Part 135 regardless of whether patients are on board the aircraft. (See fig. 2.) Flight rules under Parts 91 and 135 differ in two key areas—(1) minimum requirements for weather and visibility and (2) rest requirements for pilots. The Part 135 requirements are more stringent. According to industry experts and observers, the air ambulance industry has grown, but data limitations make it difficult to determine by how much. Data for several years on the number of aircraft and number of operating locations are available in a database maintained by the Calspan-University at Buffalo Research Center (CUBRC) in alliance with the Association of Air Medical Services (AAMS). For 2003, the first year for which data are available, AAMS members reported a total of 545 helicopters stationed at 472 bases (airports, hospitals, and helipads). By 2008, the number of helicopters listed in the database had grown to 840, an increase of 54 percent, and the number of bases had grown to 699, an increase of 48 percent (see fig. 3).
While a database official said that the data partly reflect the use of a revised criterion that allowed for the inclusion of more helicopters, as well as improved reporting since the database was established, the increase also reflects actual growth. Data are less readily available on whether this increased number of aircraft translates into an increased number of operating hours. FAA does not collect flight-hour data from air ambulance operators. Unlike scheduled air carriers, which are required to report flight hours, air ambulance operators and other types of on-demand operators regulated under Part 135 are not required to report flight activity data to FAA or the Department of Transportation. Historically, FAA estimated the number of flight hours using responses to its annual General Aviation and Air Taxi and Avionics (GAATAA) survey. These estimates may not be reliable, however, because the survey is based on a sample of aircraft owners and response rates have historically been low. According to the government and industry officials we interviewed and the literature we reviewed, most of the air ambulance industry's growth has been in the stand-alone (independent) provider business model. Testimony from industry stakeholders recently submitted to NTSB further identifies the stand-alone provider business model as the current area of industry growth. The growth in the stand-alone provider business model has led to increased competition in some locales. According to the officials we interviewed and others who have studied the industry, the increase in the stand-alone provider business model is linked to the development, mandated in 1997, of a Medicare fee schedule for ambulance transports, which has increased the potential for profit making. This fee schedule was implemented gradually starting in 2002, and since January 2006, 100 percent of payments for air ambulance services have been made under the fee schedule. Because the fee schedule has created the potential for higher and more certain revenues, competition has increased in certain areas, according to many of our sources. Increased competition can lead to potentially unsafe practices, industry experts said. Although we were unable to determine how widespread these activities are, experts cited the potential for such practices, including helicopter shopping and call jumping. Helicopter shopping refers to calling a series of operators until an operator agrees to take a flight assignment, without telling the subsequently called operators why the previously called operators declined the flight. This practice can be unsafe if the operator that accepts the flight assignment is not aware of all of the facts surrounding the assignment. Call jumping occurs when an air ambulance operator responds to a scene without being dispatched to it or when multiple operators are summoned to an accident scene. This situation is potentially dangerous because the aircraft are all operating in the same uncontrolled airspace—often at night or in marginal weather conditions—increasing the risk of a midair collision or other accident.
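As the growth figures above and the accident counts discussed below suggest, raw counts become meaningful only when divided by exposure. A minimal sketch illustrates both the growth percentages cited above and the rate calculation that cannot be performed without operator-reported flight hours; the flight-hour figure below is purely hypothetical, since no actual figure exists:

```python
# AAMS/CUBRC database figures cited above.
helicopters = {2003: 545, 2008: 840}
bases = {2003: 472, 2008: 699}

def pct_growth(series: dict) -> float:
    """Percent change from the earliest to the latest year in the series."""
    first, last = series[min(series)], series[max(series)]
    return (last - first) / first * 100

print(f"Helicopter growth: {pct_growth(helicopters):.0f}%")  # ~54%
print(f"Base growth: {pct_growth(bases):.0f}%")              # ~48%

# Rate per 100,000 flight hours -- the measure FAA cannot compute without
# operator-reported hours. The hours figure here is HYPOTHETICAL.
accidents_per_year = 13     # average annual count, 1998-2008 (NTSB data, below)
flight_hours = 400_000      # hypothetical annual exposure, illustration only
rate = accidents_per_year / flight_hours * 100_000
print(f"Rate: {rate:.2f} accidents per 100,000 flight hours")
```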
While the total number of air ambulance accidents peaked in 2003, the number of fatal accidents peaked in 2008, when 9 fatal accidents occurred (see fig. 4). Of the 141 accidents that occurred from 1998 to 2008, 48 resulted in the deaths of 128 people. From 1998 through 2007, the air ambulance industry averaged 10 fatalities per year. The number of overall fatalities increased sharply in 2008, however, to 29. Both the spike in the number of fatal accidents in 2008 and the overall number of accidents are a cause for concern. However, given the apparent growth in the industry, the increase in the number of accidents may not indicate that the industry's safety record has, on the whole, worsened. More specifically, without actual data on the number of hours flown, no accident rate can be accurately calculated. Because an accurate accident rate is important to a complete understanding of the industry's safety, we recommended in 2007 that FAA collect data on flight activity, including flight hours. In response, FAA has surveyed all helicopter air ambulance operators to collect flight activity data. However, to date, FAA's survey response rate is low, raising questions about whether this information can serve as an accurate measure or indicator of flight activity. In the absence of actual flight activity data, others have attempted to estimate flight hours and accident rates for the industry. For example, an Air Medical Physician Association (AMPA) study estimated annual flight hours for the air medical industry through an operator survey, determining that the overall air medical helicopter accident rate has dropped slightly in recent years to approximately 3 accidents per 100,000 flight hours. However, the study's preliminary estimates for 2008 indicate that the fatal accident rate more than tripled over the 2007 rate, increasing from 0.54 fatal accidents per 100,000 flight hours in 2007 to 1.8 fatal accidents per 100,000 flight hours in 2008. Data on the causes and factors underlying air ambulance accidents indicate that while the majority of accidents are caused by pilot error, a number of risks, including nighttime operations, adverse weather conditions, and flights to remote sites, also contribute to accidents. NTSB data on helicopter accidents occurring from 1998 through 2008 show that pilot error was deemed the probable cause in more than 70 percent of air ambulance accidents, while factors related to the flight environment (such as light, weather, and terrain) contributed to 54 percent of all accidents. Nighttime accidents for air ambulance helicopters were prevalent, and air ambulance accidents tended to be more severe when they occurred at night than during the day. Similarly, air ambulance accidents were often associated with adverse weather conditions (e.g., wind gusts and fog). Finally, flying to remote sites may further expose the crew to other risks associated with unfamiliar topography and makeshift landing sites. In 2007, we reported that the air ambulance industry's response to the higher number of accidents has taken a variety of forms, including research into accident causes and training. Since then, the industry has continued its focus on improving safety by, for example, initiating efforts to develop an industry risk profile and share weather information. In July 2008, for instance, AAMS convened a conference (summit) on safety to encourage open communication between the medical and aviation sectors of the industry.
AAMS plans to issue a summary of the summit's proceedings that will include recommended next steps. Table 1 highlights examples of recent industry initiatives. In 2007, we reported that FAA, the primary federal agency overseeing air ambulance operators, has issued guidance, expanded inspection resources, and collaborated with the industry to reduce the number of air ambulance accidents. Since then, FAA has taken additional steps to improve air ambulance safety, including the following: Enhanced oversight to better reflect the unique nature of the industry: FAA has changed its oversight to reflect the varying sizes of operators. Specifically, large operators with 25 or more helicopters dedicated to air medical flights are now assigned to dedicated FAA Certificate Management Teams (CMT)—groups of inspectors that are assigned to one air ambulance operator. These CMTs range in size from 4 inspectors for Keystone Helicopter Corporation, which has a fleet of 38 helicopters, to 24 inspectors for Air Methods, which has a fleet of 322 helicopters. Additionally, CMTs use a data- and risk-based process to target inspections to areas that pose greater safety risk. For operators of all sizes, FAA has asked inspectors to consider using the Surveillance Priority Index tool, which can be used to identify an operator's most pressing safety hazards. In addition, FAA is hiring more aviation safety inspectors with rotorcraft experience. Provided technical resources: FAA has revised its guidance for the use of night vision goggles (NVG) and established a cadre of NVG national resource inspectors. FAA has also developed technical standards for the manufacture of helicopter terrain awareness and warning systems for air medical helicopters. These standards articulate the minimum performance standards and documentation requirements that the technology must meet to obtain FAA approval. FAA also commissioned the development of an air ambulance weather tool, which provides weather assessments for the community. Launched an accident mitigation program: Initiated in January 2009, this program provides guidance for inspectors of air ambulance operators, requiring them to ensure, among other things, that these operators have a process in place to facilitate safe operations, such as a risk assessment program. Revised minimum standards for weather and safe cruise altitudes: To enhance safety, FAA revised its minimum requirements for weather and safe cruise altitudes for helicopter air ambulances in November 2008. Specifically, FAA revised its specifications to require that if a patient is on board for a flight or flight segment, and at least one of the flight segments is therefore subject to Part 135 rules, then all of the flight segments must be conducted within the revised weather minimums and above a minimum safe cruise altitude determined in preflight planning (an illustrative sketch of this rule appears below). Issued guidance on operational control: To help operators better assess risk, improve the flow of information before and during flights, and increase support for flight operations, FAA issued guidance to help air medical operators develop, implement, and integrate operations control centers and enhance operational control procedures. To date, FAA has opted not to use its rulemaking authority to require certain actions, relying instead on notices and guidance to encourage air ambulance operators to take certain actions. FAA guidance and notices are not mandatory for air ambulance operators and are not subject to enforcement.
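The revised weather-minimums specification described above amounts to a simple rule over the segments of a mission. The sketch below is an illustrative reading of that rule as described in this statement, not FAA's actual specification text; the segment structure and function name are hypothetical:

```python
def revised_minimums_apply(segments: list[dict]) -> bool:
    """Return True if the whole mission must meet the revised weather
    minimums and minimum safe cruise altitude.

    Per the November 2008 revision as described above: if a patient is on
    board for any flight segment (making that segment subject to Part 135),
    then ALL segments must be conducted within the revised weather minimums
    and above the preflight-planned minimum safe cruise altitude.
    """
    return any(seg["patient_on_board"] for seg in segments)

# Example mission: reposition leg, patient transport leg, return leg.
mission = [
    {"leg": "reposition to scene", "patient_on_board": False},
    {"leg": "scene to hospital", "patient_on_board": True},
    {"leg": "return to base", "patient_on_board": False},
]
# One patient-carrying leg makes the revised minimums apply to every leg.
print(revised_minimums_apply(mission))  # True
```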
FAA officials told us that rulemaking is a time-consuming process that can take years to complete, hindering the agency’s ability to quickly respond to emerging issues. By issuing guidance rather than regulations, FAA has been able to quickly respond to concerns about air ambulance safety. However, we previously noted that FAA lacked information on the extent to which air ambulance operators were implementing the agency’s voluntary guidance and on the effect such guidance was having. Consequently, we recommended that FAA collect information on operators’ implementation of the voluntary guidance and evaluate the effectiveness of that guidance. In response, in January 2009, FAA directed safety inspectors to survey the air medical operators they oversee about their adoption of suggested practices, such as implementing risk assessment programs and developing operations control centers. According to the inspectors, most of the 74 operators surveyed said they had adopted these practices. Despite the actions taken by the industry and the federal government, 2008 was the deadliest year on record for the air ambulance industry. As a board member noted at the recent NTSB hearing on air ambulance safety, the recent accident record of the industry is unacceptable. Based on our body of work on aviation safety, including air ambulance safety; a review of the published literature; and interviews with government and industry officials, we have identified several potential strategies for improving air ambulance safety. Each of these strategies has merits and challenges, and we have not analyzed their benefits and costs. But, as the recent accident numbers show, additional efforts are warranted. Obtain complete and accurate data on air ambulance operations: As we reported in 2007, FAA lacks basic industry information, such as the number of flights and flight hours. In response to our prior recommendation that FAA collect flight activity data, FAA surveyed all helicopter air ambulance operators in 2008, but fewer than 40 percent responded, thereby raising questions about the reliability of the information collected. The low response rate also suggests that many operators will not provide this information unless they are required to do so. Until FAA obtains complete and reliable information from all air ambulance operators, it will be unable to gain a complete understanding of the industry and determine whether its efforts to improve industry safety are sufficient and accurately targeted. Increase use of safety technologies: We have previously reported that using appropriate technology and infrastructure can help improve aviation safety. For example, the development and installation of terrain awareness and warning systems on large passenger carriers has almost completely eliminated controlled flights into terrain, particularly for aircraft equipped with this system. When we studied the air ambulance industry in 2006 and 2007, the most frequently cited helicopter-appropriate technology was night vision goggles. Additional safety technology has been developed or is in development that will help aircraft avoid cables and enhance terrain awareness for pilots, among other things. However, testimony submitted by industry stakeholders at NTSB’s February 2009 hearing on air ambulance safety indicated that the implementation of such technology has been slow. NTSB previously recommended that FAA require terrain awareness and warning systems on air ambulances. Proposed legislation (H.R. 
1201) would also require FAA to complete a study, within one year of the date of enactment, on the feasibility of requiring flight data and cockpit voice recorders on new and existing air ambulances. Sustain recent efforts to improve air ambulance safety: Our past aviation safety work and anecdotal information on air ambulance accident trends suggest that the industry and federal government must sustain recent efforts to improve air ambulance safety. In 1988, after the number of accidents increased in the mid-1980s, NTSB published a study that examined air ambulance safety issues. The study contained 19 safety recommendations to FAA and others. FAA took action, including implementing the NTSB recommendations, and the number of air ambulance accidents declined in the years that immediately followed. However, as time passed, the number of accidents started to increase, peaking in 2003. This again triggered a flurry of government and industry actions. Similarly, FAA took steps to address runway incursions and overruns after the number and rate of incursions peaked in fiscal year 2001, but FAA's efforts later waned, and the number and rate of incursions and overruns remained steady. Fully address NTSB recommendations: In 2006, NTSB published a special report focusing on the air ambulance industry, which included four recommendations to FAA to improve air ambulance safety. Specifically, NTSB called for FAA to (1) require that all flights with medical personnel on board be conducted in accordance with Part 135 regulations, (2) develop and implement flight risk evaluation programs, (3) require formalized dispatch and flight-following procedures, and (4) require terrain awareness and warning systems on aircraft. As of January 2009, FAA had sufficiently addressed only the recommendation to require formalized dispatch and flight-following procedures, according to NTSB. NTSB's February 2009 air ambulance hearing highlighted the status of these recommendations, and major industry associations have said they agree in principle with the recommendations but would like to work with FAA and NTSB to adapt the recommendations to the industry's circumstances and gain more flexibility. Proposed legislation (H.R. 1201) also would require most of the safety enhancements NTSB recommended. Adopt safety management systems within the air ambulance industry: Air operators rely on a number of protocols to help reduce the potential for poor or erroneous judgment, but evidence suggests that these protocols may be inconsistently implemented or followed in air ambulance operations. According to an FAA report on air ambulance accidents from 1998 through 2004, a lack of operational control (authority over initiating, conducting, and terminating a flight) and poor aeronautical decision making were significant factors contributing to these accidents. To combat such issues, FAA has been encouraging air ambulance operators to move toward adopting safety management systems, providing guidance, developing a generic flight risk assessment tool for operators, and requiring inspectors to promote the adoption of safety best practices. Clarify the role of states in overseeing air ambulance services: Air ambulance industry stakeholders disagree on the role that states should play in overseeing broader aspects of air medical operations. In particular, some industry stakeholders have advocated a greater role for states in regulating air ambulance services as part of their public health function.
Other industry stakeholders, however, oppose increased state oversight, noting, for example, that the Airline Deregulation Act explicitly prohibits states from regulating the price, route, or service of an air carrier. This legislation generally limits oversight at the state or local level to the medical care and equipment provided by air ambulance services, although the extent of this oversight varies by state. Proposed legislation (H.R. 978) would recognize and clarify the authority of the states to regulate intrastate air ambulance services in accordance with their authority over public health. Determine the appropriate use of air ambulance services: According to a May 2007 article by two physicians, multiple organizations are concerned that air ambulance services are overused and misused. The article further notes concerns that decisions about where to transport a patient may be influenced by nonmedical reasons, such as insurance coverage or agreements with hospitals. Another industry expert has posited that excessive use of air ambulances may be unsafe and not beneficial for most patients, citing recent studies concluding that few air transport patients benefited significantly compared with patients transported by ground and noting the recent increase in the number of air medical accidents. Other studies, however, have disagreed with this position, citing reductions in mortality achieved by using air ambulances to quickly transport critically injured patients. We provided a draft copy of this testimony to FAA for review and comment. FAA provided technical clarifications, which we incorporated as appropriate. Mr. Chairman, this concludes my prepared statement. I would be pleased to respond to questions from you or other Members of the Subcommittee. For further information on this statement, please contact Dr. Gerald L. Dillingham at (202) 512-2834 or dillinghamg@gao.gov. Contact points for our Congressional Relations and Public Affairs offices may be found on the last page of this statement. Individuals making key contributions to this testimony were Nikki Clowers, Assistant Director; Vashun Cole, Elizabeth Eisenstadt, Brooke Leary, and Pamela Vines. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Air ambulance transport is widely regarded as improving the chances of survival for trauma victims and other critical patients. However, recent increases in the number of air ambulance accidents have led to greater industry scrutiny by government agencies, the public, the media, and the industry itself. The National Transportation Safety Board (NTSB) and others have called on the Federal Aviation Administration (FAA), which provides safety oversight, to issue more stringent safety requirements for the industry. This testimony discusses (1) recent trends in the air ambulance industry with regard to its size, composition, and safety record; (2) recent industry and government efforts to improve air ambulance safety; and (3) potential strategies for improving air ambulance safety. This testimony is based primarily on GAO's February 2007 study on air ambulance safety (GAO-07-353). To update and supplement this 2007 report, GAO analyzed the latest safety information from NTSB and FAA, reviewed published literature on the state of the air ambulance industry, and interviewed FAA officials and industry representatives. GAO provided a copy of the draft testimony statement to FAA. FAA provided technical comments, which GAO incorporated as appropriate. The air ambulance industry has increased in size, and concerns about its safety have grown in recent years. Available data suggest that the industry grew, most notably in the number of stand-alone (independent or community-based) as opposed to hospital-based operators, and competition increased among operators, from 2003 through 2008. During this period, the number of air ambulance accidents remained at historical levels, fluctuating between 11 and 15 accidents per year, and in 2008, the number of fatal accidents peaked at 9. This accident record is cause for concern. However, a lack of reliable data on flight hours precludes calculation of the industry accident rate--a critical piece of information in determining whether the increased number of accidents reflects industry growth or a declining safety record. The air ambulance industry and FAA have acted to address accident trends and causes. For example, FAA enhanced its oversight to reflect the varying sizes of operators, provided technical resources to the industry, launched an accident mitigation program, and revised the minimum standards for weather and safe cruising altitudes that apply to air ambulance operations. Despite the actions to improve air ambulance safety, 2008 was the deadliest year on record for the industry. Through its work on aviation safety, including air ambulance safety; review of the published literature; and interviews with government and industry officials, GAO has identified several potential strategies for improving air ambulance safety, including the following: (1) Obtain complete and accurate data on air ambulance operations. (2) Increase the use of safety technologies. (3) Sustain recent efforts to improve air ambulance safety. (4) Fully address NTSB's recommendations. (5) Adopt safety management systems within the air ambulance industry. (6) Clarify the role of states in overseeing air medical services. (7) Determine the appropriate use of air ambulance services.
UOCAVA, as amended, generally protects the right to register and vote by absentee ballot in federal elections for military personnel and U.S. citizens who live overseas. The act also requires that states adopt a number of processes, such as permitting absent servicemembers and overseas voters to use the Federal Write-in Absentee Ballot in general elections for federal office, subject to certain exemptions. In 2002, Congress passed and the President signed the Help America Vote Act of 2002, which amended UOCAVA and required states to be more transparent about sending and rejecting UOCAVA ballots by, for example, requiring states to provide voters the reasons for rejecting a registration application or absentee ballot request and to report the number of ballots sent to servicemembers and overseas voters and the number returned by those voters and cast in the election. Most recently, Congress passed and the President signed the Military and Overseas Voter Empowerment Act in 2009, which amended UOCAVA to require, among other things, that states transmit a validly requested absentee ballot within a certain time frame to absent uniformed services voters or overseas voters and that DOD's FVAP expand its efforts to raise voter awareness regarding voter registration and absentee ballot procedures and resources. The U.S. election system for servicemembers and U.S. citizens living overseas comprises a complex network of communication with disparate and geographically dispersed populations extending over seven continents, 55 states and territories, and thousands of voting jurisdictions, and it relies on the coordinated efforts of federal, state, and local governments to carry out their roles and responsibilities. The Secretary of Defense is the presidential designee with the primary responsibility for the federal functions under UOCAVA, generally including educating and assisting voters covered by UOCAVA and working with states to facilitate absentee voting. The Secretary implements UOCAVA and related legislation through DOD's FVAP, which is guided by DOD Instruction 1000.04 and is overseen by the Defense Human Resources Activity within the Office of the Under Secretary of Defense for Personnel and Readiness. FVAP officials stated that the program works to ensure that servicemembers, their eligible family members, and overseas citizens are aware of their right to vote and have the resources to vote successfully from anywhere in the world. To carry out this purpose, FVAP coordinates with DOD components and the Department of State to provide information to, respectively, military personnel who vote absentee and U.S. citizens who reside abroad. Voter education and assistance efforts for military personnel are largely implemented by the military services through voting assistance officers, who are assigned this role in addition to their primary duties. As of December 2015, DOD officials estimated that the military services collectively had approximately 4,500 unit voting assistance officers. The voting assistance officers distribute and help UOCAVA voters complete FVAP forms, such as the Federal Post Card Application and the Federal Write-in Absentee Ballot, as well as any state or local forms that voters may use to register and request a ballot. See appendixes IV and V for copies of these forms, for which FVAP is generally responsible in accordance with statutory and regulatory requirements.
Similarly, the Department of State is responsible for designating a voting action officer to oversee the implementation of its voting assistance program, and it designates voting assistance officers at each of its embassies and consulates to provide voting assistance for U.S. citizens living abroad. An additional FVAP election cycle responsibility is to survey UOCAVA voters and other stakeholders, such as DOD voting assistance officers and local election officials, and to report on FVAP's voter assistance. Specifically, after every presidential election, DOD is required by statute to transmit a report to the President and Congress on the effectiveness of assistance, including a statistical analysis of uniformed services voter participation, a separate statistical analysis of overseas nonmilitary participation, and a description of state-federal cooperation. In addition, DOD is required by statute to transmit a report each year to the President and relevant congressional committees that includes an assessment of the effectiveness of voting assistance activities, voter registration and participation by servicemembers and other overseas voters, and, in years following federal election years, information related to absentee ballots. Figure 1 provides a general timeline of some of the voting activities that DOD undertakes during federal election cycles. FVAP conducts surveys of UOCAVA voters and state and local election officials, in coordination with the Defense Manpower Data Center and the Election Assistance Commission, respectively, to obtain some of this and related information. For example, FVAP works with the Defense Manpower Data Center to survey servicemembers about their UOCAVA voting experience. The surveys include questions to determine how voters used DOD and FVAP voting assistance resources and how voters requested, received, and returned their absentee ballots. FVAP also works with the Defense Manpower Data Center to survey DOD's voting assistance officers about the process of delivering voting services to servicemembers. In addition, FVAP collaborates with the Election Assistance Commission. Their partnership began in 2013, in preparation for the 2014 election, when they conducted a joint survey of state-level election offices about states' interactions with UOCAVA voters. The local election office survey gathers data about the number of UOCAVA voters that requested and submitted ballots, the methods by which voters requested and submitted ballots, and the rates of and causes for rejected ballots, among other things. The U.S. election system is decentralized and relies on complex interactions between people, processes, mail, and technology. Voters, local election jurisdictions, states and territories, and the federal government all play roles in the election process. FVAP's role is to share information about this process with its customers—UOCAVA voters—and with stakeholders that have roles in other parts of the process. The elections process is primarily the responsibility of the individual states and territories and their local election jurisdictions. Thus, the registration and voting process is a multistep process that varies by state and territory. As we have reported previously, states and territories have considerable discretion in how they organize the elections process, which is reflected in the diversity of procedures and deadlines the states and jurisdictions establish for voter registration and absentee voting.
Further, states and jurisdictions employ a variety of voting methods, including mailed paper ballots, emails, and faxes. In order to vote, UOCAVA voters register, obtain and complete an absentee ballot in accordance with state requirements (such as providing a signature), and return the voted ballot to the local election office in time to meet state election deadlines, which also vary. In addition to the time it takes a voter to complete these steps, local election offices must process these materials. Studies have shown that most UOCAVA voters still rely on the mail to request, receive, and return their ballots. Figure 2 depicts the steps of the absentee voting process for military and overseas voters. Following is a description of each step of the multistep absentee voting process in greater detail.

1. Voter Prepares and Submits Voter Registration Application and/or Absentee Ballot Request. The voter prepares and submits a voter registration application and/or absentee ballot request. According to FVAP's voting assistance guide, all states accept voter registration and ballot request forms by mail. UOCAVA voters can choose whether to use the Federal Post Card Application, the registration and absentee ballot request form that FVAP developed and maintains, or their state's voter registration application and state ballot request form. Federal law requires all states to accept the Federal Post Card Application, which is both a voter registration and ballot request form. Some states offer online voter registration and ballot request options directly from their websites. Depending on the state, ballot request options allow voters to receive their ballot by mail, email, fax, or other electronic method, if applicable. For some ballot request forms, including the Federal Post Card Application, the voter can select the method by which they would like to receive their ballot from the local election office, including mail, email, or fax.

2. Local Election Office Receives and Processes Voter Registration Application and/or Absentee Ballot Request and Sends Blank Ballot to Voter. The local election office receives and processes the voter registration application and/or absentee ballot request form or Federal Post Card Application and sends the absentee ballot to the voter. Local election offices provide a confirmation notice to voters that their voter registration and absentee ballot requests have been approved, or a letter indicating the reason the request was denied. The Military and Overseas Voter Empowerment Act amended UOCAVA to require, among other things, that states establish procedures to offer voters the option to receive a ballot electronically. Local election offices transmit absentee ballots to voters via email, fax, online download, or mail, depending on how the voter requested the ballot.

3. Voter Receives, Marks, and Returns the Absentee Ballot. The voter receives, marks, and returns the absentee ballot. According to FVAP's voting assistance guide, all states accept the completed ballot by mail, and some states accept completed ballots by email, fax, or an online system. If the voter does not receive the ballot from the local election office or believes the ballot may arrive too late to return it by the state deadline, the voter can submit a Federal Write-in Absentee Ballot, a backup ballot that FVAP developed and maintains, in order to meet state deadlines.
FVAP makes the Federal Write-in Absentee Ballot available to voters on its website and through unit voting assistance officers, who distribute hard copies of the ballot. Federal law requires all states to accept the Federal Write-in Absentee Ballot as a backup ballot.

4. Local Election Office Receives and Processes the Absentee Ballot

The local election office receives and processes the absentee ballot or Federal Write-in Absentee Ballot. The local election office must determine whether the ballot is valid for counting based on requirements that, according to FVAP, include the ballot arriving at the local election office by the deadline and the signature on the ballot matching the signature on the voter's registration form, among others. Some states provide a confirmation to voters that their ballot has been received.

DOD has taken various steps to identify challenges and needed improvements to its overseas voting assistance efforts. Specifically, DOD commissioned two studies on FVAP—one issued in 2014 and one in 2015—and administers surveys of absentee voters and voting assistance officers after every federal election. Through these efforts, DOD has identified long-standing issues with the limited awareness of FVAP resources and unpredictable postal delivery of UOCAVA ballots that continue to pose challenges to the program's effectiveness. DOD has identified some actions and has taken some steps to address these challenges, such as simplifying and standardizing instructions in FVAP's voting assistance guide to better support UOCAVA voters and their advocates, analyzing quantitative and qualitative research on barriers to voting success, and increasing the usage of online marketing tools to improve outreach. However, these long-standing challenges persist in part because DOD has not established time frames for completing actions intended to address them. Since we last reported on FVAP in 2010, DOD has taken various steps to identify challenges and needed improvements to its military and overseas voting assistance efforts. For example, since the 2012 presidential election, DOD has commissioned two studies that identified challenges faced by absentee voters—including active-duty military members, family members of military personnel, overseas federal government employees, and overseas civilian voters—as well as challenges faced by FVAP within its own organization. Specifically, in 2013, DOD commissioned the RAND Corporation to conduct a study of FVAP's strategic focus in order to assist the program in aligning its strategy and operations to better fulfill its mission and serve its stakeholders. According to FVAP officials, the customers of the program are absentee voters, and the program stakeholders include the federal agencies with which it collaborates, state and local election officials that have a role in the absentee voting process, and the voting action officers and military service voting assistance officers. The RAND Corporation also identified additional stakeholders, such as congressional staff on relevant committees; organizations that represent state and local election officials, such as the National Association of State Election Directors, the National Association of Secretaries of State, and the Election Center; and nongovernmental organizations that represent overseas citizens, such as the Overseas Vote Foundation.
The study, issued in 2014, found in part that stakeholders did not understand FVAP's purpose and role in the voting process due to (1) an ambiguous mission that was not commonly understood by program staff, (2) fragmentation among program activities such as assistance and institutional support, (3) inadequate capacity in data collection and analysis, and (4) the inefficient use of staff to carry out program priorities. Based on the results, DOD began taking steps to address these issues, including reexamining FVAP's mission and purpose and implementing changes in how it works with external stakeholders. In July 2015, DOD released the results of another commissioned study on FVAP, which identified barriers to UOCAVA voting success and social and behavioral factors that influence voters. This study found that building relationships with positions outside of DOD that communicate with overseas voters—such as human resources managers, study abroad leaders, and nongovernmental organizations—could help share information with potential absentee voters. In addition, this study found that DOD needed to share accurate information about the absentee voting process to counter the myth that absentee ballots are not counted except in rare circumstances. In addition to these studies, FVAP is required to conduct surveys of voters and voting assistance officers and to provide an assessment of activities undertaken and the effectiveness of assistance. Using the results of the 2014 survey—the most recent results available—DOD compared UOCAVA registration and voter participation rates for the 2014 general election with the results of prior post-election surveys to better understand dissemination of absentee voting information among the UOCAVA voter population. These studies and post-election surveys have helped DOD to identify new and emerging issues with its voting assistance program. In particular, two long-standing issues continue to pose challenges to the program's effectiveness. In both the 2014 and 2015 studies that it issued and in all of the post-election surveys that it administered between 2008 and 2014, DOD repeatedly found that limited awareness of FVAP's resources and the unpredictable mail delivery of UOCAVA ballots remain persistent challenges. In addition, both of these issues were discussed in all four reports that we issued on this subject between 2001 and 2010. Further details about these two challenges are provided below. According to DOD, efforts to vote by servicemembers and U.S. citizens living overseas are most successful when voters are aware of the tools and resources that are available through FVAP. However, as previously noted, the results from recent studies and post-election surveys indicate that there is limited awareness of FVAP's resources among military and overseas voters. For example, FVAP's 2014 post-election survey indicated that, of the active-duty servicemembers who responded, 61 percent did not seek voting assistance from FVAP and were not aware of FVAP's assistance, while 60 percent did not seek voting assistance from unit voting assistance officers and were not aware of voting assistance officers. Under DOD guidance implementing UOCAVA, FVAP is responsible for establishing and maintaining a program to assist all eligible voters. This involves various activities including, but not limited to, collecting and reporting on survey data, prescribing forms for UOCAVA voters to use when registering to vote, and coordinating with the states.
Furthermore, DOD Instruction 1000.04 requires FVAP to establish a means to inform absent uniformed servicemembers of absentee voting information and resources 90, 60, and 30 days before each federal election. The types of information military and overseas absentee voters need to be aware of in order to prepare to vote include, but are not limited to: election dates and deadlines for submitting an absentee ballot; FVAP forms and online assistance for completing those forms; available DOD voting assistance resources, such as the installation voting assistance office or unit voting assistance officer; the location where the voter is registered/eligible to vote; the way to request a blank ballot; and information on FVAP's resources, such as FVAP's website, call center, and email address. Consistent with its designated responsibilities, FVAP has developed education and outreach materials such as brochures, wallet cards, a voting assistance guide, and a website to provide information to citizens about the absentee voting process, including state-specific information and service voting assistance programs. In addition, to increase awareness of FVAP as a resource, DOD has hired contractors to support FVAP's voter assistance campaigns through communication and outreach. For example, FVAP's contractors assist FVAP in the development of promotional materials such as videos, posters, and social media messages. In our 2007 report, we noted that the U.S. election system is highly decentralized and that states and territories have considerable discretion in determining how they organize the election process, which is reflected in the diversity of procedures and deadlines that states and jurisdictions establish for voter registration and absentee voting. In its 2008 post-election report to Congress, FVAP similarly highlighted the variation in and complexity of absentee voting procedures. For the 2014 general election, the Election Assistance Commission's report summarized 8,200 survey responses from local election offices across the United States and territories about ballots that those offices transmitted to and received back from UOCAVA voters. To help voters and voting assistance officers stay aware of these variations, FVAP maintains a website with links to state election office websites and regularly updates its Voting Assistance Guide, which includes state-specific instructions and timelines for completing the required voting forms. Despite such resources, the results of the two DOD-commissioned studies and FVAP's post-election surveys administered between 2008 and 2014 showed that awareness of FVAP materials to assist voters needed improvement. For example, by conducting focus groups and interviews with servicemembers and U.S. citizens living overseas, one of the reports commissioned by DOD found that voters were uncertain about registration deadlines, which, as previously noted, vary by jurisdiction. Further, while FVAP offers UOCAVA voters online assistance with completing, among other things, the Federal Post Card Application and the Federal Write-in Absentee Ballot, a DOD-commissioned study found specifically that some military voters interviewed as part of its study were not aware of the Federal Post Card Application or that it could be used to both register and request a ballot. As a result, servicemembers may be taking additional time to separately request a ballot, not realizing that they can do so using the same form.
Further, that study cited the states' online transmission of blank ballots to UOCAVA voters through email or a state portal, a result of the Military and Overseas Voter Empowerment Act, as one of the most significant improvements in UOCAVA voting. Yet as of 2014, many overseas absentee voters interviewed as part of the research study were unaware that they could receive their ballots online. DOD's commissioned reports also found that state and local election offices are not fully aware of FVAP's role as a resource that can assist with implementation of UOCAVA requirements. To raise awareness about FVAP's availability as an election resource to state and local election officials, FVAP assists election officials by providing online training and guidance, sending email alerts, funding research grants, participating in conferences, conducting other local outreach, and making direct (person-to-person) contact. Upon request, FVAP can also help state election officials look up servicemembers' military postal addresses. However, some local election officials and other stakeholders that we spoke with stated that the information that FVAP provides to states may not filter down to the smaller localities. For example, one of the local election officials we spoke with was not aware that FVAP could look up active-duty servicemembers' addresses for election officials. FVAP officials stated that they focus coordination at the state level because, in their experience, some state officials prefer to filter the information that FVAP provides about UOCAVA voting to their localities. To address state and local awareness of FVAP resources, the Defense Human Resources Activity, on behalf of FVAP, entered into an agreement with the Council of State Governments in 2013. Among other things, the agreement required the creation and support of two advisory groups, one to identify and promote best practices for absentee voting laws, regulations, and policy for military and overseas voters and the other to standardize data collection and encourage state and local election jurisdictions to test and implement tools to report uniform data from voter registration and election administration systems. The council submits activity reports from this partnership to FVAP on a quarterly, semi-annual, or annual basis depending on the type of report or deliverable, as specified in the agreement. In December 2015, the Council of State Governments released a series of recommendations to the states, based on the results of this collaboration. The recommendations were related to voter communication, the Federal Post Card Application, online voter registration, and engagement with the military community. The second category of challenges identified by DOD and in our prior work relates to the unpredictable postal delivery of absentee ballots to and from UOCAVA voters. In addition to meeting any documentation requirements, such as providing a signature or appropriately completing all required sections, an absentee ballot must be returned to the appropriate election office by the specified deadline for the vote to be counted. However, the mail system, for a variety of reasons, can be unpredictable for military and overseas voters. Specifically, the time it takes for UOCAVA voters to receive an absentee ballot depends on their location and may involve a complex and lengthy transit via different transportation modes and speeds.
For example, officials from the Military Postal Service Agency noted that military mail transits between U.S. Postal Service and Military Postal Service networks, and that voted absentee ballots are shipped from overseas via air carriers. Military mail does not enter a foreign country's mail processing network and is considered domestic mail, according to officials. However, overseas citizens use foreign postal systems, and DOD reported that civilians in certain countries expressed distrust of those systems. In our 2001 report, we noted that overseas voters who do not have access to the military postal system may have faced problems such as longer transit times and unreliable mail service. Further, we noted that some ballots that originated from overseas may not have been postmarked until they arrived in the United States, raising the potential for local jurisdictions in states with an extended deadline to disqualify them because they lacked an overseas postmark or bore a postmark dated after Election Day. All states provide the option to request or print ballots online, but the availability of resources to active-duty servicemembers can be limited depending on their location. For example, a service voting assistance officer stated that U.S. Navy ships have a limited number of computer terminals, bandwidth, and printers, which may prevent servicemembers from accessing or printing their ballots. Furthermore, mail delays experienced by servicemembers may also be the result of deployment changes, including the timing of their arrival at a new duty station or departure from a current duty station. As such, guidance on the different factors that may affect the timeliness of an absentee ballot's transit is critical to help ensure that those voters are optimally positioned to meet U.S. voting deadlines. In the 2014 general election, postal mail was the primary mode of ballot transmission for many UOCAVA voters. For example, FVAP's 2014 post-election survey report to Congress stated that 61 percent of UOCAVA ballots were transmitted to potential voters via postal mail, and 75 percent of the UOCAVA ballots counted were received via postal mail. For active-duty servicemembers, a subset of UOCAVA voters, 75 percent of those who requested a blank ballot obtained it from their local election office through mail delivery from the U.S. Postal Service and the Military Postal Service—which handles military mail—and 84 percent of those members returned their voted ballot through the mail. FVAP is required to coordinate with the Military Postal Service Agency (MPSA), an extension of the U.S. Postal Service that monitors and oversees Military Postal Service functions, to implement measures to ensure voting materials are moved expeditiously, to the maximum extent practicable, by military postal authorities. To help UOCAVA voters meet the deadline to successfully cast their vote in a U.S. election, the U.S. Postal Service, in conjunction with MPSA, provides servicemembers and overseas voters with recommended absentee ballot mailing dates, which indicate how far in advance of an election voters should mail their ballots. The MPSA recommends ballot mailing dates for each Army or fleet post office location. According to MPSA officials, the mailing dates are determined by consulting with the major commands and combatant commands and reviewing transportation routes and frequency.
These dates provide an estimate of the number of days required for a ballot to reach local election offices through Military Postal Service and U.S. Postal Service networks. In January 2016, the MPSA issued the recommended mail times for the 2016 election, and those times ranged from 7 to 30 days for ballots to be transmitted to local election offices in the United States. Some of the locations with longer transit recommendations include Ethiopia (30 days), Egypt (20 days), and Afghanistan (20 days). In addition, MPSA recommends that all deployed Navy ships in the Atlantic and Pacific fleets mail their ballots at least 25 days before the 2016 election but no later than October 10, 2016. These recommended mailing times can be helpful to voters; however, based on our analysis, they also indicate that some military and overseas voters who rely solely on mail delivery may not have enough time to both request a blank ballot and cast their vote. For example, we found that voters on deployed Navy ships in the Atlantic and Pacific fleets may experience a 25-day transit time for each piece of election mail. Therefore, to allow for transit of (1) the Federal Post Card Application that the voter mails to his or her local election office, (2) the blank ballot the local election office sends to the voter, and (3) the voted ballot that the voter mails back to the local election office, a voter would have to multiply the recommended transit time by three. Further, if a voter deployed with the Atlantic or Pacific fleet relies solely on mail and the local election office sends the blank ballot no later than 45 days prior to the election, as required by statute, it is unlikely that the voted ballot will be returned on time, because the last two of those transits alone may take up to 50 days, according to MPSA's recommended mailing times.
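To illustrate the arithmetic behind this analysis, the following is a minimal sketch in Python using the MPSA figures cited above. The variable names and the simplifying assumption that each mail leg takes the full 25-day recommended transit time are ours, for illustration only; this is not an FVAP or MPSA tool.

```python
# Minimal sketch of the mail-only absentee voting timeline for a voter on a
# deployed Navy ship, using the 25-day MPSA transit recommendation and the
# 45-day statutory ballot transmission requirement cited above. Assumes,
# for illustration, that each mail leg takes the full recommended time.

FLEET_TRANSIT_DAYS = 25   # MPSA recommendation for deployed Atlantic/Pacific fleets
STATUTORY_SEND_DAYS = 45  # blank ballot sent no later than 45 days before the election

# Three mail legs if the voter relies solely on postal delivery:
#   1. Federal Post Card Application from the voter to the local election office
#   2. blank ballot from the local election office to the voter
#   3. voted ballot from the voter back to the local election office
full_round_trip = 3 * FLEET_TRANSIT_DAYS
print(f"Full mail-only round trip: {full_round_trip} days")  # 75 days

# Even in the best case, where the office sends the blank ballot exactly
# 45 days before the election, the last two legs alone may take:
ballot_out_and_back = 2 * FLEET_TRANSIT_DAYS  # 50 days
days_late = ballot_out_and_back - STATUTORY_SEND_DAYS
print(f"Last two legs: {ballot_out_and_back} days, so the voted ballot "
      f"could arrive about {days_late} days after Election Day")
```

Under these assumptions, the voted ballot could arrive about 5 days after Election Day even when both the voter and the election office act as early as the rules allow, which is the shortfall the analysis above describes.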
In addition to recommending mailing dates, MPSA issues a Strategic Postal Voting Action Plan to assist military voters during each general election cycle. This action plan specifies, among other things, deadlines for ballots to be collected and postmarked at overseas military postal outposts. The plan further notes that overseas military postal activities with intermittent transportation networks or other limiting factors may establish alternative mailing deadlines to help ensure that absentee ballots reach election offices by the election date. States reported to the Election Assistance Commission that they had rejected approximately 8,500 of the approximately 146,000 UOCAVA ballots received for the 2014 general election, and that ballots were most commonly rejected because they were received after a state's ballot receipt deadline. Military personnel in overseas military postal locations can return their absentee ballots via Express Mail using Express Mail Label 11-DOD, which may be used only for absentee ballots originating from overseas military postal locations. Local election officials we spoke with stated that they send their ballots to the designated military post office via First-Class Mail or Standard Mail because of the expense of sending a ballot via Express Mail. Election officials have the option to use a special identification tag for official ballot First-Class or Standard Mail addressed for domestic or international delivery; however, this is not required. We previously reported that the U.S. Postal Service changed its delivery standards for some types of mail in 2012 and 2015, which generally increased the number of days to deliver some First-Class Mail in the continental United States. For copies of the Express Mail Label and the official ballot identification tag, see appendix VI. DOD has identified some actions and has taken some steps to address challenges associated with the awareness of voter resources provided by FVAP and the unpredictable postal delivery of UOCAVA ballots. However, these long-standing challenges persist, in part, because DOD has not established time frames for completing the actions intended to address them. Standards for Internal Control in the Federal Government specify that management should complete and document actions, including resolutions of audit findings, to remediate challenges on a timely basis. To address its finding that the awareness of voter resources provided by FVAP continues to be limited among UOCAVA voters and stakeholders, DOD identified various actions that it will take to resolve the issue through several reports, including its annual reports, and a press release that FVAP issued along with the results of its qualitative study. For example, the actions identified range from targeting outreach to first-time voters to conducting more comprehensive analyses of the process used by military and overseas voters when voting absentee. Similarly, DOD identified actions that it would take to address the unpredictable delivery of ballots to overseas voters, such as researching technological innovations that could improve mail processing times and assessing the effect of the newly modernized mail redirection system on the number of undeliverable ballots. Table 1 lists the actions that DOD has identified in response to both categories of voter assistance challenges and the report in which each challenge was identified. DOD's identification of actions related to FVAP challenges is a key step to addressing the challenges. However, these long-standing challenges persist in part because DOD has not established time frames for completing the actions intended to address the challenges. The documentation we reviewed on DOD's planned actions did not indicate milestones or completion dates for these actions. DOD officials noted that they establish time frames for election activities in voting action plans, but our review of the most recent (2016) voting action plan found that it contained only a list of election-related deadlines and no time frames for actions to address program challenges. Key management practices call for developing control activities to ensure management's directives are being met, such as clearly defining the time frame associated with projects and tracking whether the projects are meeting their goals. Specifically, Standards for Internal Control in the Federal Government state that control activities should be designed and implemented to ensure that management's directives are achieved and that projects should be tracked so that managers can determine whether they are meeting their goals. Further, A Guide to the Project Management Body of Knowledge states that project time management includes the processes required to manage the timely completion of a project, such as defining activities, sequencing activities, estimating activity resources, and estimating activity durations.
Without establishing time frames for the actions it identified, DOD will lack the necessary processes to manage the timely completion of improvements to its voting assistance activities, which would help the department achieve FVAP's stated goals. Further, given the magnitude and complexity of FVAP's work, establishing time frames would better position DOD to effectively target resources to high-priority initiatives. Finally, time frames would help to provide benchmarks against which DOD can demonstrate FVAP's progress to Congress and other stakeholders, including through the statutorily required annual reporting process. Stakeholder involvement and performance measures for FVAP are discussed in more detail in the next section of this report. DOD's implementation of its voting assistance program exhibits some characteristics of the six selected leading practices associated with the initial stages of effective federal strategic planning but has not fully exhibited any of these practices. For example, leading practices—such as defining goals, identifying resources, and using performance measures—are only partially exhibited, in part, because DOD largely plans its activities around federal election cycles, during which it focuses on near-term needs, driven by the upcoming election, and federal fiscal year budgeting cycles, according to officials. Furthermore, as of February 2016, DOD does not have a long-term strategy for its voting assistance program, such as a strategic plan, to help ensure the long-term effectiveness of the program. During our review, we found that DOD's voting assistance program exhibits some characteristics of each of the six selected leading practices of effective federal strategic planning. Our prior work has identified these leading practices for the initial stages of federal strategic planning, which we derived in part from the Government Performance and Results Act (GPRA), as updated by the GPRA Modernization Act of 2010, associated guidance, and our prior work. Specifically, these leading practices are to: (1) define the mission and goals, (2) define strategies that address management challenges and identify resources needed to achieve goals, (3) ensure leadership involvement and accountability, (4) involve stakeholders, (5) coordinate with other federal agencies, and (6) develop and use performance measures. Table 2 describes the six selected leading practices and the extent to which they are exhibited in DOD's implementation of FVAP. Below we discuss in more detail our assessment of the extent to which FVAP exhibits the characteristics of each selected leading practice of federal strategic planning. FVAP has recently revised its mission statement, purpose, and strategic goals; however, we rated this leading practice as "partially exhibits" because FVAP has not made them publicly available. Standards for Internal Control in the Federal Government state that management should communicate information externally through established reporting lines so that external parties can help the entity achieve its objectives. FVAP officials referred us to FVAP's website, which states that the purpose of the program is to ensure that servicemembers, their eligible family members, and overseas citizens are aware of their right to vote and have the resources to do so.
However, the purpose statement on FVAP's website does not match the revised mission statement, and DOD Instruction 1000.04 does not clearly define the mission of the program, although it identifies the various voting assistance activities and responsibilities throughout DOD. As previously noted, in 2013, DOD commissioned the RAND Corporation to conduct a study on aligning FVAP's strategy and operations. The results of the study, which DOD and the RAND Corporation released in October 2015, found, among other things, that FVAP lacked a clearly articulated mission shared among its staff and stakeholders. For example, the report noted that FVAP thought UOCAVA voters were best served through intermediaries such as voting assistance officers and local election officials. However, the intermediaries identified did not have a similar understanding of their role and connection to FVAP and voters, and were generally unsure of what FVAP was doing and why. In anticipation of the report's findings and before the final report was issued, FVAP leaders and staff convened an offsite meeting during the summer of 2015 and proactively developed a new mission statement, vision, and strategic goals. FVAP officials provided us with this new mission statement, purpose, and associated strategic goals during our review, but as of January 2016 these new statements had not been made publicly available. According to FVAP officials, the new mission and strategic goals are part of a forthcoming strategic plan, and they do not plan to make them publicly available until after November 2016 to avoid distraction prior to the upcoming presidential election. However, unless FVAP communicates its updated mission, vision, and strategic goals publicly, its stakeholders may continue to be unclear about FVAP's purpose and their role in its achievement. In addition, without a consistent understanding between program staff and stakeholders about FVAP's role, potential UOCAVA voters may not receive information that is needed to help maximize their opportunity to vote in the upcoming 2016 presidential election. We also found that FVAP has not maintained consistent strategic goals for its program. In November 2015, a senior FVAP official told us that FVAP has three broad strategic goals that will help it achieve its mission: (1) reducing obstacles to voting, (2) educating and making voters aware of the voting process, and (3) being a highly valued customer service organization. We reviewed FVAP's fiscal year 2014-2016 budget justification documents, which contain FVAP's strategic goals and corresponding performance measures, and found that FVAP changed its strategic goals in fiscal years 2014 and 2015 and changed the performance measures associated with those goals from year to year. In addition, the mission and strategic goals are different from the purpose statement that FVAP shares publicly on its website. Although the changes in FVAP's strategic goals are not substantial, the frequency with which they have changed, coupled with the fact that FVAP does not share its goals publicly, inhibits FVAP's ability to track and demonstrate progress over time. Table 3 shows how FVAP's strategic goals have changed. Further, FVAP has not consistently publicized the aforementioned strategic goals on, for example, its website or in its annual reports to Congress, where they could be made available for FVAP's customers and stakeholders.
Without consistent, publicly available strategic goals and their intended results, FVAP cannot effectively demonstrate to essential stakeholders and potential voters how its efforts are helping to achieve progress toward its goals. DOD has identified some challenges faced by FVAP related to voter awareness and has defined strategies to address them; however, we rated this leading practice as "partially exhibits" because DOD has not done the same for all of the challenges it has identified. In our work on performance management, we have previously reported that it is particularly important that agencies develop strategies that address management challenges outside of their control that threaten their ability to meet long-term strategic goals. During our review, officials acknowledged that FVAP faces challenges beyond its control, especially related to military and overseas citizens' interest in voting. To make decisions about populations on which to concentrate voter awareness activities, DOD used the results of post-election surveys to group individuals based on how likely they are to vote. For example, FVAP's post-election surveys suggest that servicemembers with spouses vote at consistently higher rates than those who are unmarried. DOD officials told us that a significant portion of the program's budget—approximately $1.2 million of a total $3.5 million to $4 million annually—is used to fund FVAP's voter awareness campaign, and that the information obtained from post-election surveys will enable FVAP to more effectively target the distribution of outreach materials based on the unique characteristics of each group. However, DOD has not taken similar steps to define strategies or devote resources to address challenges that FVAP has identified related to unpredictable mail processing and its potential impact on the timely transmission of ballots between voters and local election offices. Without identifying strategies and resources needed to address all of FVAP's identified challenges, FVAP cannot ensure the program's ability to meet its long-term strategic goals. FVAP's current Director, whom DOD designated in November 2013, has demonstrated involvement in the program; however, we rated this leading practice as "partially exhibits" because DOD has not established and institutionalized mechanisms to help ensure the accountability of the FVAP Director in achieving program goals. Leading practices suggest that a program's leadership is responsible for ensuring that strategic planning becomes the basis for day-to-day operations and that formal and informal practices hold managers accountable and create incentives for working to achieve the agency's goals. Prior to the current Director, FVAP was led by four different Directors from 2008 through 2013, and, according to DOD officials, these leadership transitions were routinely accompanied by changes in program priorities. FVAP's current Director has demonstrated involvement in the program by taking initial steps to identify issues that may pose challenges to DOD's voting assistance efforts and to develop a strategic plan. For example, the Director initiated, and included program staff in, the 3-year study of the program that FVAP commissioned the RAND Corporation to conduct. The study found that FVAP staff could not reach consensus about the program's purpose and its role in the voting community. In response to these results, the Director led staff in the development of the new mission, vision, and purpose statements previously discussed, as well as in the revision of strategic goals for DOD's voting assistance program.
With regard to leadership accountability, FVAP's Director provides a weekly report on program activities to the Acting Director of the Defense Human Resources Activity, which includes information about media inquiries, inquiries from Congress, and high-profile meetings, among other activities. In addition, the Defense Human Resources Activity tracks FVAP's budget execution and procurement actions throughout the fiscal year. According to a senior official, the Defense Human Resources Activity requires its programs to submit a mission, goals, and performance measures as part of the budget justification, but it uses that information only to identify resource needs, not to measure FVAP's progress toward goals. Further, the official stated that the Defense Human Resources Activity does not use formal mechanisms, such as a strategic plan, to hold FVAP or any of its other programs accountable for the achievement of program goals. Without accountability mechanisms, DOD will have a limited ability to maximize the current and future Directors' ownership of and focus on FVAP's mission and progress. DOD coordinates extensively with some of the stakeholders involved in its voting assistance efforts; however, we rated this leading practice as "partially exhibits" because it has not fully involved all of its stakeholders in the development of FVAP's mission, goals, and strategies. For example, FVAP involved its stakeholders in the studies it commissioned to identify internal and external challenges, and those studies incorporated the stakeholder perspectives into their findings. However, FVAP did not fully involve stakeholders in the development of its mission and goals. Involving stakeholders in developing a program's mission, goals, and strategies is important to help ensure that they target the highest priorities, as specified in the leading practices for federal strategic planning. UOCAVA voting is a complex process that involves multiple stakeholders, and DOD officials told us that there are several key stakeholders with whom they routinely communicate. In addition, FVAP and the stakeholders provided a number of examples of coordination and information sharing, such as a monthly teleconference that FVAP holds with voting action officers from all the services to make announcements, share information, and discuss issues related to its voting assistance efforts. In addition, FVAP shares information related to voting assistance with stakeholders on its website and by developing voting awareness materials and public service announcements. FVAP also works with local election offices to facilitate absentee voting under UOCAVA and to help ensure mutual understanding of state-specific absentee voting procedures, in accordance with the DOD instruction. DOD officials also told us that they routinely communicate FVAP-related information to the states via the Council of State Governments. Specifically, in 2013 the Defense Human Resources Activity entered into a cooperative agreement with the Council of State Governments to establish two working groups to advise FVAP—one focused on best practices for absentee voting laws, regulations, and policy for absent uniformed service and overseas voters and the other focused on election technology initiatives.
These working groups comprise state and local election officials, including secretaries of state, election directors, and voter registrars, and in December 2015 they developed recommendations to the states to improve the absentee voting process for UOCAVA voters. However, DOD officials also noted that their interactions with other FVAP stakeholders typically occur on an as-needed basis. For example, DOD officials told us that FVAP representatives attend conferences held by state organizations and state and local election officials to share information on UOCAVA voting. We spoke with local election officials in two districts and other stakeholders who similarly told us that FVAP does not involve them directly in its activities; rather, states communicate information—at their discretion—from FVAP to local election offices. As a result, local election officials may not receive FVAP-related information on a consistent basis if it is not shared by state election officials. Further, the October 2015 RAND Corporation report stated that some stakeholders did not clearly understand FVAP's role and others felt that stakeholder engagement was largely driven by the agendas of agency officials rather than by the agency's mission. Without involving all of its stakeholders in developing FVAP's mission, goals, and strategies, program officials cannot ensure that they are optimally targeting the highest priorities for improving voting assistance activities. FVAP coordinates with related federal agencies and entities, including the uniformed services and the Coast Guard, the Election Assistance Commission, the Department of State, and MPSA, to help ensure that agencies with a role in the absentee voting process are working toward similar results; however, we rated this leading practice as "partially exhibits" because, while DOD coordinates with these federal entities to provide voting assistance, FVAP has not involved its federal partners in the development of its mission and goals. A senior FVAP official stated that FVAP staff alone developed the program's mission and goals, although the program worked with stakeholders to identify absentee voting challenges, as previously discussed. FVAP carries out its coordination with these agencies and state election officials in accordance with DOD Instruction 1000.04. For example, the instruction requires FVAP, in coordination with the military services, to develop training materials for installation voting assistance offices, unit voting assistance officers, and recruiters to provide voter registration and absentee ballot assistance. FVAP develops these training materials, which include a description of the absentee voting process and the resources available to assist that process, and provides them on its website. FVAP also provides in-person training workshops on installations worldwide. In addition, FVAP holds monthly teleconferences with the service voting action officers—who are responsible for voting assistance operations within their service—in which they discuss issues and plans related to voting assistance. A senior FVAP official provided examples of FVAP's coordination with the Department of State to leverage data on the overseas citizen population in order to quantify that population and identify areas where overseas citizens are concentrated. Specifically, the Department of State provides FVAP avenues to reach overseas citizens with voting process awareness messaging through its in-country U.S.
citizen registration process, Smart Traveler Enrollment Program, embassies, consulates, and warden networks. The official further stated that FVAP considered partnering with other federal agencies that maintain information on overseas citizens, such as the Internal Revenue Service and the Social Security Administration, but did not pursue it due to the sensitive nature of the data those agencies maintain. UOCAVA requires FVAP to coordinate with the Election Assistance Commission and chief state election officials to develop standards for the states to report data on the numbers of ballots transmitted and received during a general election. To carry out this requirement, FVAP partnered with the Election Assistance Commission in 2014 to combine existing surveys for local election offices to report the number of UOCAVA voters who requested and submitted ballots, the methods by which UOCAVA voters requested and submitted ballots, and the rates of and causes for rejected ballots, among other relevant issues. This partnership allows both FVAP and the Election Assistance Commission to meet statutory reporting requirements while eliminating duplicate requests for local election officials to provide election data. DOD collects data for three sets of metrics that are intended to evaluate DOD's voting assistance; however, we rated this leading practice as "partially exhibits" because, according to a senior DOD official, none of these sets of metrics are used to evaluate FVAP's performance toward the program's strategic goals. The performance measures identified by FVAP include the following.

Measures of Effect and Performance: DOD Instruction 1000.04 requires FVAP to prescribe metrics for the DOD components and services to use to evaluate their individual voting assistance programs and, to the extent practicable, establish and maintain an online portal to collect and consolidate program metrics. FVAP initially developed its Measures of Effect and Performance in 2011 and updated those measures in October 2014. These measures are intended for the service voting assistance officers to track the assistance they provide, and include counts of the number and types of personnel assisted, the methods of assistance, and the number of forms distributed. A senior FVAP official told us that FVAP uses these metrics to monitor activities and make real-time resource decisions, but said that FVAP did not plan to use these metrics to assess its performance toward meeting program goals. In addition to developing these metrics and prescribing that the military departments collect them, FVAP developed a portal for the military services to record their program metrics. Under DOD's guidance, installation and unit-level voting assistance officers in each service are encouraged to collect and record information about the voting assistance they provide on a quarterly basis. These metrics are not linked to FVAP's strategic goals and are not evaluative. In addition, one voting action officer noted that the measures are not reflective of voting assistance activities. Rather, the measures are tallies that record the number and types of actions taken by voting assistance officers. In 2010, we identified similar limitations in a previous version of FVAP's Measures of Effect and Performance, including reliability concerns and concerns that the measures were credible for evaluating only some of FVAP's efforts.
In 2013, the DOD Inspector General also reported that FVAP had not applied clearly defined voting assistance program goals and metrics to enable program officials to evaluate program performance and effectiveness, and that the focus of FVAP's metrics was limited to measuring the level of activity. However, in 2015 the DOD Inspector General reported that FVAP had begun tracking the measures of effect and performance on January 1, 2015, and that those measures were designed to provide FVAP with a more accurate representation of the resources utilized for voting assistance and would help to determine the level and type of assistance sought by servicemembers.

Budget Estimate Performance Measures: FVAP identifies strategic goals and related performance measures in its annual budget justification submission for the Defense Human Resources Activity Operation and Maintenance budget estimates. FVAP's submission includes performance measures because the Defense Human Resources Activity requires that they be included in budget estimates, according to an official. As previously stated, our review of FVAP's budget estimates for fiscal years 2014-16 indicates that FVAP has changed its strategic goals and the associated performance measures, thus preventing FVAP from assessing or demonstrating its performance over time. In addition, a senior FVAP official told us that these performance measures are aligned with short-term goals that change from year to year based on factors such as the election cycle and program initiatives. One senior-level FVAP official told us that the performance measures listed in the budget estimates do not communicate a full picture of all of the program activities that FVAP is undertaking, and thus FVAP does not regularly use them to evaluate the program; rather, the performance metrics are used mostly to meet the information requirements of FVAP's budget requests. Further, an official from the Defense Human Resources Activity noted that while the office asks the programs for which it has oversight (including FVAP) to identify their performance measures as a budget exhibit, it does not require the programs to demonstrate how the performance measures are used to evaluate progress.

Call Center Metrics: FVAP maintains a call center for UOCAVA voters and FVAP stakeholders to submit questions, via phone, fax, email, or FVAP's website, about all aspects of absentee voting. Once a service that FVAP contracted out to a third party, the call center was recently brought back in house, according to FVAP officials, and knowledgeable FVAP staff now manage the center and respond to questions. In addition, FVAP maintains a portal with metrics describing the assistance it provides through the call center. Like the measures of effect and performance, the call center metrics are tallies of the number of inquiries, broken down by method of inquiry, type of caller (military, overseas citizen, local election official, or other), and the nature of the inquiry. Further, the call center metrics include feedback from the caller about satisfaction with the assistance provided.
While the call center metrics are initial steps to help FVAP demonstrate how it provides assistance and identify challenges that callers are facing, these metrics do not allow FVAP to track the progress it is making toward its mission and all three of its strategic goals. While these three mechanisms help FVAP collect data that enables program officials to monitor voting assistance activities, the data do not measure how much progress these activities make toward FVAP's goals of (1) reducing obstacles to voting, (2) being a highly valued customer service organization, and (3) educating and making voters aware of the voting process. Without establishing performance measures and using the information that those measures are intended to collect, FVAP cannot track the progress it is making toward its goals to inform decision making or demonstrate progress to its staff, DOD, and stakeholders. According to officials, as of February 2016, FVAP did not have a long-term strategy, such as a strategic plan, to institutionalize ongoing practices and establish accountability for efforts still being developed, such as the partially exhibited leading practices that we have identified above. Instead, FVAP plans its activities in the near term around federal election cycles, and links its activities to statutory requirements and some challenges that it has identified. FVAP's most recent strategic plan was published in 2010, but a senior FVAP official told us that FVAP stopped using the plan in 2012. FVAP officials also told us that they do not have a current strategic plan because there has been frequent turnover in the program director position, and that the transitions in leadership were often accompanied by changes in priorities. As a result, FVAP operated without a strategic plan to guide its overseas voting assistance efforts throughout the 2012 and 2014 general election cycles. According to a DOD official, and as evidenced in its annual reports to Congress, FVAP is accustomed to cyclical planning, identifying lessons learned about overseas absentee voting following each federal election and applying those lessons to the next election cycle. FVAP's senior officials told us that, instead of updating the previous strategic plan, they are in the process of developing a new strategic plan that they expect to finalize internally among FVAP staff in the summer of 2016. However, these officials further stated that they do not plan to publish the strategic plan until after November 2016 so as to avoid any distraction from FVAP's voter assistance responsibilities prior to and during the upcoming presidential election. While FVAP officials provided a timeline for some steps they will take to develop a new strategic plan, they did not have documentation of a draft strategic plan because, they said, it was early in the development phase. Without a strategic plan that institutionalizes a long-term vision, it will be difficult for FVAP to demonstrate progress in addressing its long-standing challenges, such as those previously discussed. Furthermore, a strategic plan would help to incorporate all of the leading practices of federal strategic planning that can help to ensure that FVAP has a defined and sustainable path through the dynamic voting environment and any future transitions in leadership. Since our first report on FVAP in 2001, DOD has taken steps to improve its assistance to servicemembers and overseas voters.
For example, DOD has proactively commissioned studies that identified challenges with FVAP, and it has administered and analyzed the results of post-election surveys to identify areas needing improvement. While these are positive steps, some of the challenges identified, such as the limited voter and stakeholder awareness of FVAP resources and the unpredictable postal delivery of absentee ballots, are long-standing issues and continue to persist—in part because DOD has not established time frames for completing identified corrective actions. Having time frames would help DOD to better focus its effort and resources and would provide important benchmarks against which DOD could demonstrate program progress to Congress and other stakeholders, including through the statutorily required annual reports. Further, while DOD's leadership of FVAP has stabilized since 2013, DOD has not fully implemented the six selected leading practices for federal strategic planning into the day-to-day operation of the program. In addition, DOD has not developed a long-term strategy that could help focus the program and further develop and institutionalize, through future leadership transitions, the leading practices that it partially exhibits, so it can effectively respond to the changing nature of the voting environment. We are making three recommendations to improve DOD's management of FVAP. We recommend that the Secretary of Defense direct the Under Secretary of Defense for Personnel and Readiness to establish time frames to complete actions that its Federal Voting Assistance Program has identified it will take to address challenges, and also to use these time frames to demonstrate progress for stakeholders, including through its statutorily required annual reporting. We recommend that the Under Secretary of Defense for Personnel and Readiness, through the Defense Human Resources Activity, direct FVAP's Director to fully implement the six selected leading practices of federal strategic planning into the day-to-day operations of the program. We recommend that the Under Secretary of Defense for Personnel and Readiness, through the Defense Human Resources Activity, direct FVAP's Director to complete the development of a strategic plan that fully exhibits the six selected leading practices of federal strategic planning, including, but not limited to: a statement of FVAP's revised mission and goals; an identification of strategies that address management challenges and resources needed to achieve goals; a description of leadership involvement and accountability; a description of stakeholder involvement in the development of FVAP's mission and goals; a coordination strategy to communicate the program's mission and goals to other federal agencies; and a description of performance measures, aligned with program goals, that FVAP will use to track progress toward achieving goals. In commenting on a draft of this report, DOD partially concurred with one of our three recommendations and concurred with the other two recommendations. DOD's comments are reprinted in appendix VII. DOD, the U.S. Postal Service, and the MPSA also provided technical comments, which we incorporated, as appropriate.
In partially concurring with our first recommendation that the Secretary of Defense direct the Under Secretary of Defense for Personnel and Readiness to establish time frames to complete actions that FVAP has identified that it will take to address challenges, and to use these time frames to demonstrate progress for stakeholders, DOD agreed that time frames can provide a "yard stick" for measuring program effectiveness. However, DOD stated that it had not specified time frames because recommendations in FVAP's reports to Congress are taken for immediate action and election cycles provide a natural time frame for completion. DOD further highlighted a number of efforts that are complete or underway for the 2016 election cycle and that it plans to include in its next report to Congress. As stated in our report, we recognize that FVAP is accustomed to a cyclical planning process that is largely driven by the 2-year time frame of the federal election schedule. In addition, we note in our report steps that have been taken to address long-standing challenges, such as increasing voter awareness of FVAP resources and mitigating unpredictability in postal delivery. However, these long-standing challenges persist in part because DOD has not established time frames for completing actions intended to address them, as discussed in our report. We believe that, if time frames are not established with the specificity needed to help ensure that actions are completed in a timely manner for implementation in the next election cycle, actions may not be properly prioritized; resources may not be effectively targeted; and decision makers and stakeholders may not have necessary information regarding FVAP's progress with respect to improvements. In concurring with our second recommendation that the Under Secretary of Defense for Personnel and Readiness, through the Defense Human Resources Activity, direct FVAP's Director to fully implement the six selected leading practices of federal strategic planning into the day-to-day operations of the program, DOD stated that its formal efforts to implement selected leading strategic planning practices began in 2015, when the RAND Corporation assessed FVAP's organizational structure and FVAP took initial steps to position itself as a customer-focused service delivery organization. We agree that FVAP's strategic planning activities began during the review by the RAND Corporation, and in our report we note FVAP's efforts to revise the mission and strategic goals through a staff offsite after initial feedback from the RAND Corporation. In addition, DOD highlighted the following steps it has taken to incorporate the characteristics of strategic planning leading practices that our analysis identified as missing, and provided examples that demonstrate progress in some of these areas.

1. Define the mission and goals: DOD noted that FVAP will make the complete strategic plan publicly available in December 2016. We agree that this is a positive step, and we note in our report FVAP's plan to issue a strategic plan after November 2016 because FVAP wanted to avoid distraction prior to the upcoming presidential election.

2. Define strategies that address management challenges and identify resources needed to achieve goals: DOD stated that most factors that influence the success of a voter are outside the influence or control of DOD. Further, DOD stated that FVAP works to facilitate the voting process and improve areas where it has the ability to affect the process.
While we agree that DOD does not control many of the factors that influence a voter’s success in casting a ballot, we note in our report that it is particularly important for agencies to develop strategies that address those management challenges that are outside of their control. We believe that, by identifying the challenges that are outside of its control, FVAP can also identify a reasonable level of resources to devote to certain challenges, or determine how to leverage partnerships with the stakeholders that have more direct control over absentee voting, such as the states, the voters, and the MPSA.

3. Ensure leadership involvement and accountability: DOD noted that it has a well-established chain of command in carrying out its responsibilities under UOCAVA and that those lines of accountability are articulated in a relevant directive and instruction. We disagree with DOD’s implication that the guidance alone ensures leadership involvement and accountability, and we have concerns that the strategic planning activities initiated by the current Director could be diminished by a future leadership transition. Officials noted during our review that leadership transitions were routinely accompanied by changes in program priorities.

4. Involve stakeholders: DOD noted that stakeholder involvement was part of the RAND Corporation’s study, during which stakeholder views of FVAP responsibilities were solicited. Further, DOD stated that some of the stakeholder misconceptions about FVAP’s role were based on previous communication from FVAP. In our report, we note the involvement of FVAP’s stakeholders in the RAND Corporation study, and we continue to believe that a strategic plan could help FVAP communicate its role consistently and publicly to its stakeholders in order to address the misconceptions that the RAND study uncovered, such as the perception that FVAP stakeholder coordination was driven by individual agendas.

5. Coordinate with other federal agencies: DOD stated that it coordinated with other federal agencies, which were also included in the RAND study, in the development of its mission, goals, and strategies, and that these efforts led to the newly defined and focused strategic goals. We do not have evidence from the stakeholders we spoke with, listed in appendix III, that the RAND Corporation or FVAP consulted them specifically regarding the development of FVAP’s mission, goals, and strategies.

6. Develop and use performance measures: DOD stated that FVAP and each of its employees are evaluated by yearly performance measures tied to the work conducted on a daily basis. We disagree that annual performance evaluations for individual staff constitute performance measures that help FVAP measure progress toward achieving its strategic goals, even when these performance measures are tracked, as DOD suggests, upward from the individual to the FVAP office, the Defense Human Resources Activity, the Under Secretary of Defense for Personnel and Readiness, and overall DOD strategies and goals. In addition, DOD noted that FVAP will refine its analysis of metrics collected by service voting assistance officers and compare the numeric values collected with historical values over time. However, the metrics we reviewed were a tally of numbers associated with a type of assistance or service provided. Those metrics did not contain a baseline to indicate how such assistance efforts compared with the assistance needed or with the size of the UOCAVA voter population.
In concurring with our third recommendation, to complete the development of a strategic plan that fully exhibits the six selected leading practices of federal strategic planning, DOD stated that FVAP embraces GAO’s six selected leading practices of strategic planning and that, as noted earlier, it will publicly issue a final strategic plan in December 2016. While FVAP’s identification of a time frame for issuing its strategic plan demonstrates progress, it also indicates that FVAP will not have a plan to guide its work through another presidential election. We noted in our report that the program did not have a strategic plan during the 2012 and 2014 general election cycles. Further, in its comments, DOD took issue with our statement that the lack of a strategic plan has hindered FVAP’s ability to respond to challenges faced in the military and overseas citizen voting environment. We disagree and continue to believe that, as discussed in our report, it will be difficult for FVAP to demonstrate progress in addressing its long-standing challenges without a strategic plan. Further, we continue to believe that a publicly available strategic plan will help FVAP communicate its role to stakeholders and customers, clearly state its program goals, and identify the metrics that FVAP will use to measure progress toward its goals and to mitigate challenges.

We are sending copies of this report to the appropriate congressional committees; the Secretary of Defense; the Chairman, Joint Chiefs of Staff; the Secretaries of the military departments; and the Commandant of the U.S. Marine Corps. This report is also available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3604 or farrellb@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VIII.

We reviewed 2008-14 general election survey data from the Election Assistance Commission, which show that the turnout of voters covered under the Uniformed and Overseas Citizens Absentee Voting Act has fluctuated between the midterm and presidential elections and that states have received a higher number of ballots from voters covered by the act during presidential elections. In table 4 we show that, of the ballots that local election offices received, counted, and rejected during each election, state-provided absentee ballots were the most common in each category. Further, the usage of the Federal Write-in Absentee Ballot, FVAP’s write-in backup ballot for eligible UOCAVA voters who have not received their requested ballots at least 30 days before federal elections, increased between the 2008 and 2012 presidential elections. Specifically, local election offices received, counted, and rejected Federal Write-in Absentee Ballots at higher rates in the 2012 presidential election than in the 2008 presidential election. The Election Assistance Commission reported that local election offices rejected absentee ballots primarily because they had received those ballots after the deadline in each election. Local election offices also rejected ballots for other reasons, such as because a voter’s name or address did not match that voter’s registration or because local election offices did not have an absentee ballot registration on file.
Between 2001 and 2010, we made 12 recommendations to the Department of Defense (DOD) related to military and overseas voting and the Federal Voting Assistance Program (FVAP). All of the recommendations have been closed: eight were closed as implemented, and four were closed as not implemented. In table 5, we list the 12 recommendations and summarize the status of each recommendation at the time we closed it.

To determine the extent to which the Department of Defense (DOD) has identified challenges associated with its military and overseas absentee voting assistance efforts and developed plans to address those challenges, we interviewed program officials at the Federal Voting Assistance Program (FVAP); obtained relevant documents, studies, and data; and discussed their voting assistance efforts, including challenges and planned corrective actions. We interviewed the senior service voting representative and voting action officers from each of the military services, including the U.S. Coast Guard, to discuss their coordination with FVAP and management of the service voting assistance activities. We also contacted officials at other DOD organizations, the military services, other executive branch agencies, and nongovernmental organizations to discuss the challenges that FVAP faces in providing voting assistance to military and overseas voters. In table 6 we list the DOD entities, other federal agencies, and other organizations that we contacted for this review.

We reviewed reports that resulted from two studies on FVAP that DOD commissioned and that were issued in 2015 and 2014, respectively. The first study, conducted by the RAND Corporation, examined FVAP’s internal operations; the second study, conducted by Lake Research Partners, identified challenges faced by overseas absentee voters, their eligible family members, DOD’s voting assistance officers, overseas citizens, and local election officials. We interviewed staff from the RAND Corporation who conducted the study of FVAP’s internal operations to discuss their views of the program and its challenges. We reviewed documentation that FVAP provided from the Lake Research Partners study, including transcripts from focus groups with voters covered under the Uniformed and Overseas Citizens Absentee Voting Act (UOCAVA), including servicemembers, their family members, and overseas citizens, to identify the absentee voting challenges faced by those FVAP customers.

We also reviewed and analyzed the results of post-election surveys that FVAP had conducted with the Defense Manpower Data Center and the Election Assistance Commission between 2008 and 2014. The surveys are used to determine participation in the electoral process by UOCAVA voters; assess the impact of FVAP’s efforts to simplify and ease the process of voting absentee; evaluate progress made to facilitate absentee voting; and identify remaining obstacles to voting by individuals covered by UOCAVA. These surveys include questions for voters about the methods they used to cast a ballot and the effectiveness of the information sources they consulted. The surveys also collect data from local election officials regarding the numbers of absentee ballots processed and the reasons for rejection. We determined that FVAP’s surveys with the Defense Manpower Data Center and the Election Assistance Commission were sufficiently reliable for the purposes of our report.
We reviewed communication plans and a media engagement plan that FVAP uses to promote awareness of its resources. We reviewed our previous reports on DOD’s FVAP, including the recommendations that had resulted from those reports and the status of those recommendations. In addition, we learned about challenges faced by local election officials by attending an Election Data Summit sponsored by the Election Assistance Commission, at which election officials from 14 states spoke about the challenges they face. In addition, we interviewed local election officials in Virginia and Colorado, among the states with large populations of UOCAVA voters, to discuss their perspectives on challenges associated with overseas absentee voting.

We compared FVAP’s identification of and plans for addressing challenges with applicable internal control standards and relevant program management criteria. Specifically, we reviewed the plans and actions that FVAP had identified to address challenges in post-election survey reports and press releases. We compared FVAP’s actions with Standards for Internal Control in the Federal Government, which call for agencies to complete and document actions to remediate challenges on a timely basis. We also compared those actions with the Project Management Body of Knowledge practice of identifying time frames associated with projects to determine whether projects are meeting their goals. The PMBOK® Guide provides guidelines for managing individual projects, including developing a project management plan.

We interviewed officials from the U.S. Postal Service and the Military Postal Service Agency to discuss the process for identifying and transmitting UOCAVA election materials and ballots, and to determine how those agencies track the transit time for UOCAVA ballots. We also obtained information regarding the Military Postal Service Agency’s process for developing recommended dates by which UOCAVA voters should mail their completed ballots in order for those ballots to arrive in time to be counted by local election offices, and we compared the 2016 recommended mailing times with related statutory requirements for the transmission of ballots to voters.

To determine the extent to which DOD has implemented strategic planning practices to help ensure the long-term effectiveness of FVAP, we reviewed documentation of FVAP’s long-term planning, including information FVAP provides on its public website; annual budget estimates FVAP submits to the Defense Human Resources Activity, which include strategic goals and the performance measures that FVAP intends to use to measure progress toward those goals; FVAP’s annual reports to Congress, which include FVAP’s goals and the actions it intends to take to meet those goals; and documentation of the performance metrics FVAP uses to collect data to monitor its activities. We also reviewed a cooperative agreement between the Defense Human Resources Activity and the Council of State Governments related to FVAP; the commissioned reports from the RAND Corporation and Lake Research Partners; and the annual reports from 2008 through 2014 that the DOD Inspector General issued on FVAP and the services’ implementation of their respective voting assistance programs. We interviewed relevant DOD and military service officials, as well as key stakeholder officials from the Election Assistance Commission, the Department of State,
the U.S. Vote Foundation, and selected local election offices, among others, to discuss their coordination with FVAP and their knowledge of FVAP’s long-term planning activities. We compared these activities with leading practices for strategic planning that we have identified in prior work, informed, in part, by requirements from the Government Performance and Results Modernization Act. Specifically, in prior work, we identified six leading practices in federal strategic planning by reviewing (1) the Government Performance and Results Act (GPRA) of 1993, as updated by the GPRA Modernization Act of 2010; (2) associated Office of Management and Budget (OMB) guidance; and (3) related leading practices that we have identified in past work. We selected the six leading practices because, according to officials, FVAP’s current strategic planning efforts are in the initial planning stage, and we judged these practices to be the most relevant for evaluating FVAP’s strategic planning activities to date.

To assess whether FVAP planning activities exhibited each of the six selected leading practices in federal strategic planning, two analysts independently conducted a content analysis of documents related to DOD’s plans for its UOCAVA voter assistance program. The analysts independently rated each of the six selected leading practices as “exhibits,” “partially exhibits,” or “does not exhibit.” We determined that DOD “exhibits” a leading practice for federal strategic planning when FVAP’s activities explicitly addressed all characteristics set forth in the leading practice, and we determined that DOD “partially exhibits” a leading practice when FVAP’s activities addressed one or more, but not all, of the characteristics of the leading practice. Finally, we determined that DOD “does not exhibit” a leading practice when FVAP’s activities did not address any characteristics of the leading practice. We compared the two sets of independent observations, discussed how each analyst arrived at the assigned rating, and collectively reconciled any rating differences.

We conducted this performance audit from June 2015 to April 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

The Federal Post Card Application acts as a registration and absentee ballot request form for absent servicemembers, their families, and citizens residing outside the United States. The Federal Voting Assistance Program (FVAP) provides the Federal Post Card Application to the services for distribution to overseas voters. In addition, on its website, FVAP compiles and distributes descriptive material on state procedures related to the application and instructions for completing and sending the application.

The Federal Write-in Absentee Ballot is a write-in backup ballot that absent servicemembers, their families, and citizens residing outside the United States can complete to vote in federal general elections. The Federal Voting Assistance Program (FVAP) provides the ballot to the services for distribution to overseas voters.
In addition, FVAP compiles descriptive material on state procedures related to the ballot and provides instructions for completing and sending the ballot on its website.

Servicemembers in overseas military postal locations can use Label 11-DOD to return envelopes containing completed absentee ballots via Express Mail service—the fastest mail service offered by the U.S. Postal Service—for regularly scheduled federal general elections. The U.S. Postal Service produces and supplies the label to the Department of Defense (DOD). The Joint Military Postal Activity, which monitors all postal functions for military post offices with Army Post Office, Fleet Post Office, Armed Forces America, and Armed Forces Europe designations, supplies Label 11-DOD to all Army and Fleet Post Offices.

Tag 191 is developed and distributed by the U.S. Postal Service and is used by election officials to identify ballot mail prepared at the First-Class Mail or Standard Mail rates and addressed for domestic or international delivery. Although use of the tag is optional, it provides greater visibility to containers of ballot mail as they move through Postal Service processing and distribution operations.

In addition to the contact named above, Kimberly Mayo, Assistant Director; Sara Cradic; Alana Finley; Rebecca Gambler; Stephanie Heiken; Tom Jessor; Mae Jones; Tamiya Lunsford; Michael McKemey; Amanda Miller; Terry Richardson; Michael Shaughnessy; Amie Lesser; and Leslie Stubbs made key contributions to this report.

U.S. Postal Service: Actions Needed to Make Delivery Performance Information More Complete, Useful, and Transparent. GAO-15-756. Washington, D.C.: September 30, 2015.
Elections: Observations on Wait Times for Voters on Election Day 2012. GAO-14-850. Washington, D.C.: September 30, 2014.
U.S. Postal Service: Information on Recent Changes to Delivery Standards, Operations, and Performance. GAO-14-828R. Washington, D.C.: September 26, 2014.
Elections: Issues Related to State Voter Identification Laws. GAO-14-634. Washington, D.C.: September 19, 2014.
Elections: State Laws Addressing Voter Registration and Voting on or before Election Day. GAO-13-90R. Washington, D.C.: October 4, 2012.
Elections: DOD Can Strengthen Evaluation of Its Absentee Voting Assistance Program. GAO-10-476. Washington, D.C.: June 17, 2010.
Elections: Action Plans Needed to Fully Address Challenges in Electronic Absentee Voting Initiatives for Military and Overseas Citizens. GAO-07-774. Washington, D.C.: June 14, 2007.
Elections: All Levels of Government Are Needed to Address Electronic Voting System Challenges. GAO-07-576T. Washington, D.C.: March 7, 2007.
Elections: DOD Expands Voting Assistance to Military Absentee Voters, but Challenges Remain. GAO-06-1134T. Washington, D.C.: September 28, 2006.
Elections: The Nation’s Evolving Election System as Reflected in the November 2004 General Election. GAO-06-450. Washington, D.C.: June 6, 2006.
Elections: Absentee Voting Assistance to Military and Overseas Citizens Increased for the 2004 General Election, but Challenges Remain. GAO-06-521. Washington, D.C.: April 7, 2006.
Election Reform: Nine States’ Experiences Implementing Federal Requirements for Computerized Statewide Voter Registration Lists. GAO-06-247. Washington, D.C.: February 7, 2006.
Elections: Views of Selected Local Election Officials on Managing Voter Registration and Ensuring Eligible Citizens Can Vote. GAO-05-997. Washington, D.C.: September 27, 2005.
Elections: Federal Efforts to Improve Security and Reliability of Electronic Voting Systems Are Under Way, but Key Activities Need to Be Completed. GAO-05-956. Washington, D.C.: September 21, 2005.
Elections: Additional Data Could Help State and Local Elections Officials Maintain Accurate Voter Registration Lists. GAO-05-478. Washington, D.C.: June 10, 2005.
Department of Justice’s Activities to Address Past Election-Related Voting Irregularities. GAO-04-1041R. Washington, D.C.: September 14, 2004.
Elections: Electronic Voting Offers Opportunities and Presents Challenges. GAO-04-975T. Washington, D.C.: July 20, 2004.
Elections: Voting Assistance to Military and Overseas Citizens Should Be Improved. GAO-01-1026. Washington, D.C.: September 28, 2001.
Elections: The Scope of Congressional Authority in Election Administration. GAO-01-470. Washington, D.C.: March 13, 2001.
The Uniformed and Overseas Citizens Absentee Voting Act generally protects the rights of military personnel and overseas citizens to register and vote absentee in federal elections. In 2014, the most recently completed federal election, the Election Assistance Commission estimated that around 6 percent (8,500 of 146,000) of the ballots submitted by voters covered under the act were rejected. DOD’s Federal Voting Assistance Program (FVAP) implements many of the act’s provisions and provides absentee voting support.

GAO was asked to review matters related to FVAP. This report assesses the extent to which DOD has (1) identified challenges with its military and overseas voting assistance efforts and developed plans to address those challenges, and (2) implemented strategic planning practices to help ensure the long-term effectiveness of FVAP. GAO reviewed 2010-14 post-election surveys and 2014-15 DOD-commissioned studies, compared documentation of FVAP plans with leading federal strategic planning practices, and interviewed FVAP officials and program stakeholders.

The Department of Defense (DOD), through its Federal Voting Assistance Program (FVAP), has taken steps to identify challenges and needed improvements to its military and overseas absentee voting assistance efforts. However, two long-standing issues—limited awareness of resources for voters and the unpredictable postal delivery of absentee ballots—continue to pose challenges. DOD-commissioned studies and post-election survey results indicate that there is limited awareness of FVAP’s resources among military and overseas voters. A 2015 study found, for example, that the online availability of blank ballots led to one of the most significant improvements in military and overseas absentee voting. At the same time, the full benefits of this improvement had not been realized because voters remained unaware that ballots could be requested online. Regarding the unpredictable postal delivery of absentee ballots, the timeliness of a voter’s receipt or return of an absentee ballot depends on a number of variables, such as the mode and speed of transportation used to transmit mail. DOD has identified actions that it will take to address these and other issues. However, these challenges persist, in part, because DOD has not established time frames for completing the actions it has identified.

DOD’s implementation of FVAP partially exhibits six selected leading practices of federal strategic planning; the program exhibits some, but not all, of the characteristics that make up each practice. According to officials, as of February 2016, DOD did not have a long-term, comprehensive strategy, such as a strategic plan, for its voting assistance program to institutionalize existing practices and establish accountability for efforts that need further development, such as those related to the partially exhibited leading practices identified. Without a comprehensive strategic plan that institutionalizes a long-term vision, it will be difficult for FVAP to respond to the dynamic nature of the voting environment and frequent turnover in program leadership, and to demonstrate progress in addressing its long-standing challenges.

GAO recommends that DOD establish time frames for actions FVAP identified to address challenges, fully implement the selected leading practices of federal strategic planning into its day-to-day operations, and develop a strategic plan that fully exhibits the six selected leading practices of federal strategic planning.
DOD generally concurred with GAO's recommendations.
Federal programs for bridge construction, reconstruction, and repair are authorized in surface transportation acts. In 2012, the Moving Ahead for Progress in the 21st Century Act (MAP-21) consolidated a number of existing highway formula programs, including the Highway Bridge Program (HBP). Bridge projects are now generally funded through the National Highway Performance Program (NHPP) and the Surface Transportation Block Grant Program (STBGP). MAP-21 included a number of statutory requirements related to transforming the surface transportation system to a performance-based approach. For instance, MAP-21 directed DOT to establish performance measures related to highway safety, asset condition, and highway system performance, among other things. In some cases, MAP-21 required DOT to use the rulemaking process to implement performance-based requirements. In 2015, the Fixing America’s Surface Transportation Act (FAST Act), which reauthorized surface transportation programs, largely maintained existing program structures, including MAP-21’s overall performance-management approach. The FAST Act also expanded the eligibility of NHPP funds to be used for reconstruction, resurfacing, restoration, rehabilitation, or preservation of a non-National Highway System (non-NHS) bridge if the bridge is on a Federal-aid highway.

FHWA is the agency charged with oversight of the condition of the nation’s bridges. FHWA administers the federal-aid highway program, which provides about $40 billion each year to states to design, construct, and preserve the nation’s roadway and bridge infrastructure. These funds are distributed through annual apportionments established by statutory formulas. FHWA oversees the federal-aid highway program primarily through its 52 Division Offices, located in each state, D.C., and Puerto Rico. FHWA Division Offices have 10 to 61 staff each, depending on the size of the state’s highway program and other factors. As of June 2016, FHWA had approximately 2,800 staff—about two-thirds in the field and the remaining third at FHWA headquarters.

FHWA distributes and tracks federal funds for highway and bridge projects. Specifically, FHWA tracks federal-aid highway program obligations in its Fiscal Management Information System (FMIS) for individual project segments or contracts. This allows FHWA to collect and report information on the types of activities (such as obligations for the construction of new bridges) funded with Highway Trust Fund monies.

Although federal funding is provided to states to help improve highway infrastructure, state and local agencies own and maintain most of the nation’s bridges. State and local agencies typically provide matching funds on bridge projects that receive federal funding and may contribute funds beyond their match amount. State-level DOTs are responsible for ensuring that bridge inspections are completed and for inventorying bridges within their states according to federal standards (except for tribally or federally owned bridges). State DOTs and local planning organizations have discretion in determining how to allocate available federal funds among various projects and are responsible for selecting highway projects, including bridge projects. FHWA collects some data to estimate annual spending by state and local governments on highway and bridge projects.
Specifically, FHWA requests that state DOTs submit several forms to the Office of Transportation Policy Studies on a regular basis, such as the following:

Form 532, State Highway Expenditures—submitted annually, it requests the total spent on all highways by the state, including bridges; bridges are not separately reported.

Form 536, Local Highway Finance Report—submitted biennially, it requests the total spent on all highways by all units of the state’s local governments; bridges are not separately reported. Acknowledging difficulties in obtaining data from local agencies, FHWA recommends that states use sampling and estimation to prepare this form, such as collecting data from a selection of local governments and then expanding the sample to generate statewide totals.

Form 534, State Highway Capital Outlay and Maintenance Expenditures—submitted annually, it requests bridge-specific and other highway outlays. This form is designed to complement the data in Form 532 by classifying the highway expenditures of states into improvement types, such as new construction and rehabilitation, among other things.

As part of its oversight role, FHWA collects information from states on bridge conditions and maintains these data in its NBI database. Bridges that receive low inspection ratings on specific bridge elements are classified as deficient. Bridges may be classified as deficient for one of two reasons:

A structurally deficient bridge has one or more structural components, such as the deck that directly carries vehicles, in poor condition. Structurally deficient bridges often require maintenance and repair to remain in service.

A functionally obsolete bridge has a configuration or design that may no longer be adequate for the traffic it serves, such as being too narrow or having inadequate overhead clearance. Functionally obsolete bridges do not necessarily require repair to remain in service.

A bridge that is both structurally deficient and functionally obsolete is listed as structurally deficient in the NBI.

In this report, we assess the conditions of bridges classified as structurally deficient by both total deck area and number of bridges. Analyzing conditions based on the total number of bridges, without considering the size of those bridges, can create an incomplete picture. A state may have a large number of deficient bridges, but if those deficient bridges are small, the total deck area in need could still be relatively low. In comparison, another state could have very few deficient bridges, but if those deficient bridges are large, the total deck area in need could be much higher. Bridges vary significantly in size, and, generally, the needs of larger bridges are more costly than those of smaller bridges. Measuring total deck area, which accounts for the size of a bridge, therefore provides more information than counting the number of bridges (see fig. 1).

We found that bridge conditions, as indicated by data in the NBI, have improved nationwide over the past 10 years, as measured by both the total deck area and the number of bridges that are structurally deficient. The percentages of structurally deficient deck area and bridges declined along the same trajectory from 2006 to 2015. Specifically, the deck area on bridges classified as structurally deficient decreased from 9 percent to 7 percent, and over the same time period, structurally deficient bridges, by number of bridges, decreased from 13 percent to 10 percent (see fig. 2).
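The difference between the two measures can be made concrete with a small calculation. Below is a minimal sketch in Python, using a handful of invented, NBI-style records rather than actual inventory data; it computes the share of structurally deficient bridges by count and by total deck area, deriving deck area from structure length and deck width as described in the scope and methodology discussion later in this report.

```python
# Minimal sketch: percent structurally deficient by bridge count vs. by deck area.
# The records below are illustrative placeholders, not actual NBI data.

bridges = [
    # (structure length in feet, deck width in feet, structurally deficient?)
    (5200.0, 60.0, True),   # one large deficient bridge
    (120.0, 30.0, False),
    (90.0, 24.0, False),
    (150.0, 28.0, False),
    (80.0, 26.0, True),     # one small deficient bridge
]

def deck_area(length_ft, width_ft):
    """Deck area as structure length times deck width (square feet)."""
    return length_ft * width_ft

total_area = sum(deck_area(l, w) for l, w, _ in bridges)
deficient_area = sum(deck_area(l, w) for l, w, sd in bridges if sd)

pct_by_count = 100.0 * sum(1 for *_, sd in bridges if sd) / len(bridges)
pct_by_area = 100.0 * deficient_area / total_area

print(f"Structurally deficient by count:     {pct_by_count:.1f}%")
print(f"Structurally deficient by deck area: {pct_by_area:.1f}%")
```

In this invented example, 40 percent of the bridges but roughly 97 percent of the deck area is deficient, because one of the two deficient bridges is very large; the state comparisons that follow show the same dynamic in real data.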
Bridge owners have broad discretion in determining how to address bridge needs, but statutory requirements enacted in 2012 directed states to allocate some federal funds to bridges if the states do not meet specified standards. FHWA does not issue guidance on which bridges to target with federal funds, such as specifically targeting structurally deficient bridges. However, MAP-21 contained a penalty provision: any state whose percentage of total deck area of bridges on the NHS classified as structurally deficient exceeds 10 percent for 3 years in a row must devote funds (equal to 50 percent of the state’s fiscal year 2009 HBP apportionment) to eligible projects on bridges on the NHS until its percentage no longer exceeds this threshold. FHWA officials told us they plan to use bridge condition data from 2014 through 2016 to determine whether a penalty is to be applied to any states, and to begin imposing this penalty in 2017, if needed.

Despite overall improvements, bridge conditions vary among states. Specifically, our review of 2015 NBI data shows that some states have substantially higher percentages of deck area on bridges classified as structurally deficient than others (see fig. 3). For example, 21 percent of the total deck area in Rhode Island, affecting 23 percent of the 766 bridges in the state, is structurally deficient. In Texas, by contrast, less than 2 percent of the total deck area, affecting less than 2 percent of the state’s 53,209 bridges, is structurally deficient.

Most, but not all, states have made some improvements in reducing their percentage of deck area on bridges classified as structurally deficient over the past 10 years. Forty-one states, D.C., and Puerto Rico reduced the percentage from 2006 to 2015. Rhode Island had the greatest reduction, going from over 40 percent to just over 20 percent of total deck area on bridges that are structurally deficient. However, in 9 states the percentage increased from 2006 to 2015. Delaware had the largest increase in the percentage of deck area on bridges classified as structurally deficient, going from 2 percent to almost 6 percent. GAO has reported that reducing structurally deficient bridges may not always be a state’s highest priority. For example, a state may have other priorities for bridge projects, such as seismic retrofitting. According to AASHTO representatives, states use their judgment in deciding how to prioritize their funding for bridge projects. See appendix II for more information about the percentages of bridges and total deck area that are structurally deficient in each state.

The number of bridges and the amount of total deck area increased dramatically from the 1950s through the 1970s. The average age of bridges nationwide is 45 years, based on our analysis of NBI data. According to FHWA, the design life of the majority of existing bridges is 50 years, though bridges’ life spans depend on factors such as materials, environment, level of use, and level of maintenance. Also according to FHWA, new design guidelines and construction materials may raise the expected service life of new bridges to 75 years or longer. However, states and other bridge owners face significant challenges in addressing the needs of existing bridges. From the 1950s, the beginning of the Interstate era, through the 1970s, the number of bridges constructed in the United States, as well as the total square footage of bridge deck constructed, increased greatly (see fig. 4).
Analysis of NBI data indicates that the large number of bridges built during that time has led to an increase in the need to address bridges that are now structurally deficient. Specifically, as shown in figure 5, the level of structurally deficient total deck area is greatest for bridges built from 1960 through 1974, the years during which the total deck area of bridges built in the United States peaked. The increased total deck area of bridges built after the 1950s suggests that an increase in structurally deficient bridges can be expected, which in turn would increase the need for bridge maintenance, replacement, or rehabilitation.

Federal funds obligated for bridges have remained relatively stable over the last 10 years, between $6 billion and $7 billion annually in most years (see table 1). However, total federal obligations for bridges were notably higher in 2 years (2009 and 2010) because of an influx of funds from the American Recovery and Reinvestment Act of 2009 (Recovery Act). Prior to 2013, the majority of obligations for bridges came from the HBP. Since 2013, such obligations have mostly come from the NHPP and the Surface Transportation Program (STP), the STBGP’s predecessor.

In the last 10 years, federal obligations have shifted somewhat from building new bridges to projects that preserve existing bridges. Based on our analysis comparing 2006 and 2015 obligations, the types of improvements made to bridges have somewhat changed (see fig. 6). For example, fewer federal obligations were directed to bridge replacements in 2015 than in 2006 (decreasing from 57 percent of obligations in 2006 to 48 percent in 2015). Also, fewer obligations went toward new bridges in 2015 than in 2006 (from 15 percent to 13 percent). Additionally, more obligations went toward bridge rehabilitation work—major work required to restore the structural integrity of a bridge or to correct major safety defects—in 2015 than in 2006 (increasing from 23 percent of obligations in 2006 to 28 percent in 2015). Finally, the percentage of obligations used for preventative maintenance increased from 2006 to 2015 (from 6 percent to 11 percent). This is partly because more preventative maintenance activities, such as bridge cleaning, painting steel bridges, sealing concrete, and repairing or replacing deck joints, became eligible for federal bridge program funding in 2006.

Based on data collected from state and local governments, FHWA reported that total estimated spending on bridges increased in recent years, from about $11.5 billion in 2006 to about $17.5 billion in 2012 (see table 2). Analysis of these FHWA data suggests that state and local funding for bridges has increased.

FHWA tracks both the condition of bridges and the funding targeted to them, as described below. As part of its oversight role, FHWA seeks to ensure that states comply with the National Bridge Inspection Standards (NBIS), which detail the process for and frequency of bridge inspections. FHWA also collects bridge condition data from states and maintains the NBI, the primary source of information on the nation’s bridges. The NBI contains information on each bridge, such as its location, size, age, condition, and inspection dates. FHWA (1) maintains data on total federal obligations dedicated to bridges each year; (2) periodically estimates the contributions from state and local agencies through data collection efforts; and (3) periodically reports to Congress its estimates of total funds dedicated to bridges (including state and local funds) in its Conditions & Performance Report, issued roughly every 2 years.
The report also estimates future spending needs to maintain or improve current conditions and performance. However, FHWA currently lacks a mechanism for tracking the relationship between the invested funds and the corresponding outcomes—maintained and improved bridge conditions. Given that FHWA already estimates total funds dedicated to bridges and collects data on bridge conditions nationwide, it has the information needed to create performance measures that would demonstrate the link between federal funding and the outcomes for bridges.

According to leading practices for government management identified by OMB and GAO, agencies should not only have and report performance measures but also use them to link outcomes with the resources invested. Specifically, the Government Performance and Results Act (GPRA) of 1993 and the GPRA Modernization Act of 2010 establish the framework for performance management in the federal government. Under this framework, federal agencies are required to, among other things, assess whether relevant programs and activities are contributing as planned to established goals. Further, MAP-21 included a declaration on the importance of accountability and of linking performance outcomes to investment decisions. We have reported that linking performance outcomes with information on resources invested (i.e., data on the resources used to produce an outcome, including costs) can help agencies to more clearly understand how changes in invested resources may result in changes to performance.

We have also reported that an effective way to show the relationship between resources invested and outcomes is for agencies to use efficiency measures. These measures are typically defined as the ratio of two elements: a program’s inputs (such as its costs or hours worked by staff) to its outputs or outcomes (see fig. 7). OMB has issued guidance with examples of meaningful performance measures, including some efficiency measures:

for the Forest Service, cost per acre of environmentally important forest protected (provides costs per acre, including actual program obligations and other dedicated funds);

for the Patent and Trademark Office, cost per patent processed (provides costs per patent, including staff expenses and overhead costs); and

for the Office of Child Support Enforcement, total child support dollars collected per dollar of program expenditures (provides outcomes—dollars in child support collected—per total administrative expenditures, including staff expenses).

However, determining the inputs—or invested resources—for efficiency measures can be challenging when non-federal entities also contribute resources. Despite the usefulness of efficiency measures, we have acknowledged that many of the outcomes for which federal programs are responsible are part of broader efforts involving federal, state, local, and other partners, and thus it can be difficult to isolate a particular federal program’s contribution to the broader outcomes. This is the case for highway programs, since funds from federal, state, and local sources all contribute to maintained or improved asset conditions. However, federal guidance exists that may help. To assist agencies in implementing the GPRA framework, OMB issued guidance about how federal agencies might address the challenge of developing performance measures for programs that co-mingle funds from different sources (i.e., federal, state, and local funds) in support of a broad goal.
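To make the ratio concrete, the following is a hypothetical sketch of one possible efficiency measure for bridges: dollars spent per square foot of structurally deficient deck area removed. All figures are invented for illustration and are not FHWA data. The sketch computes the measure two ways, using federal obligations alone and using combined federal, state, and local spending, reflecting the co-mingled funding issue just described.

```python
# Minimal sketch of an efficiency measure: dollars spent per square foot of
# structurally deficient deck area removed. All inputs are hypothetical.

federal_obligations = 6.5e9     # federal bridge obligations for the period ($)
state_local_spending = 11.0e9   # estimated state and local bridge spending ($)

deficient_area_start = 280.0e6  # structurally deficient deck area, start (sq ft)
deficient_area_end = 255.0e6    # structurally deficient deck area, end (sq ft)

# Outcome: square feet of structurally deficient deck area removed.
improvement = deficient_area_start - deficient_area_end

# One measure per federal dollar and one per combined dollar, mirroring the
# idea of pairing a federal-only measure with a combined-funding measure.
cost_per_sqft_federal = federal_obligations / improvement
cost_per_sqft_combined = (federal_obligations + state_local_spending) / improvement

print(f"Federal dollars per sq ft of deficiency removed:  ${cost_per_sqft_federal:,.0f}")
print(f"Combined dollars per sq ft of deficiency removed: ${cost_per_sqft_combined:,.0f}")
```

Either variant expresses the input-to-outcome ratio described above; how to define the inputs when funding is co-mingled is the question the OMB guidance discussed next addresses.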
The guidance acknowledged that it can be difficult to assess the marginal impact of the federal investment for programs where combined, co-mingled funding contributes to the same broad performance outcome, but recommended that agencies nonetheless seek to assess the marginal impact of the federal investment on overall outcomes. OMB guidance noted that in such cases, the resource inputs from non-federal partners may be relevant in assessing the effectiveness of programs matched by federal assistance. OMB suggested that in such cases agencies should consider crafting two performance measures of efficiency: one measure reporting unit costs in terms of output per federal dollar spent and another measure reporting unit costs in terms of output per combined dollars spent.

FHWA officials told us that they have not developed measures linking resources to outcomes, mostly because of limitations of the previous version of FHWA’s financial tracking system, FMIS. Specifically, officials explained that prior to the most recent version of FMIS (Version 5), which was launched in October 2015, data were collected at a project-segment level that may have included multiple bridges. Thus, it was not possible to directly compare federal obligations on bridge projects with outcomes in the form of the bridge conditions recorded in the NBI. However, when asked, officials said that such a comparison could be possible with the newest version of FMIS, by creating a connection between FMIS and the NBI and showing what happens to bridge conditions when federal obligations change over time. Using such performance measures would help FHWA to demonstrate the link between federal funding and outcomes for bridges.

As FHWA has reported in recent budget requests, states face increasing challenges in finding sufficient funding for their infrastructure needs. In addition, as GAO has previously reported, bridge infrastructure—like most of the nation’s physical infrastructure—is under strain. Steady increases in road usage and congestion and the aging of the nation’s bridges will likely continue to present challenges in the future.

Most of the state government officials we interviewed reported that, consistent with FHWA data, bridge funding has been stable since the federal bridge program was consolidated into other programs in 2012. We interviewed officials from 24 states and D.C., and officials from 21 states and D.C. told us there had been no change in funding for their bridge programs in the last 4 years. Officials from 3 states reported an overall increase in bridge funding since that time, although officials from 2 of those states indicated that the increase was not necessarily a result of federal changes. The general stability in bridge funding may be a result of the long time frame for programming bridge projects, which could create a lag between policy changes and their effect on funding levels. AASHTO representatives told us it is difficult to judge the impact of federal statutory changes on bridges because of the long-term nature of infrastructure projects. Ten states and D.C. provided us with examples of bridge-programming cycles of 5 years or greater. For example, Ohio DOT officials told us that they program their bridge projects 6 years into the future. Through this process, state officials determine their project needs and request a planned allocation for the 6th year of the funding cycle.
With this type of long-term planning and budgeting process, it may take several years for a change in federal policy to have a noticeable effect on the funding of bridge projects.

Officials from some selected states reported increased flexibility in their ability to use federal funds for bridges. In addition to allowing states the flexibility to determine whether to spend federal highway funds on bridges or on other highway needs, changes provided by MAP-21 gave states flexibility to use federal funds for a greater range of bridge projects. Prior to MAP-21, only bridges that met certain criteria—such as being rated below a certain threshold or not having received federal funds in the previous 10 years—could receive federal funds. Officials from 10 of the 24 states and D.C. mentioned the increased flexibility in using highway funds for bridge projects since MAP-21. See table 3 for examples of how states used the increased flexibility.

Officials from most selected states told us there have been no changes in how they prioritize bridges relative to other transportation assets. Specifically, officials from 18 states and D.C. reported that they give bridges the same priority as they did prior to MAP-21. Officials from several of these states said that bridges have remained a high priority because of safety concerns. For example, an official from the New York DOT said that there is a keen awareness of what happens when bridges are not maintained, citing major bridge failures in the 1980s (the Mianus River Bridge in 1983 and the Schoharie Creek Bridge in 1987), and that bridges have thus remained a priority over time in New York.

Though most states have reportedly not changed the way they prioritize bridges, officials from 2 states told us that bridges’ relative priority may change after the states implement performance management principles. For example, California officials told us they are transitioning toward a performance-based management approach in which the needs of different transportation assets, including bridges and pavement, will be weighed against each other in order to meet performance targets within budgetary constraints. According to officials, a possible outcome is that local agencies in California may need more funding to repair their pavement or other assets to meet performance targets, which have yet to be determined through the FHWA rulemaking process on performance measures that is under way. Further, officials stated that these changes could affect future bridge funding and bridges’ relative priority. Likewise, in Iowa, officials said that the state is moving toward using asset management principles in future decision making, which will involve more comparisons across different types of projects.

Officials from a majority of the states and local agencies we interviewed cited inadequate funding as a challenge for their bridge programs. Of the officials we interviewed from 24 states and D.C., officials from 14 described inadequate funding as a challenge. See table 4 for examples of the challenges cited related to inadequate bridge funding. Local agency officials also discussed inadequate funding as a challenge for their bridge programs. Officials from 6 of the 10 local agencies we interviewed mentioned that inadequate funding for bridges was a challenge. For example, officials at the Oklahoma City Department of Public Works reported that many needed bridge projects are delayed because they lack sufficient funds. Further, they are able to address only the most critical needs because of limited funding.
Transportation officials in Seattle, Washington, told us that the state DOT distributes a total of about $35 million per year in federal funds to local agencies, which compete for a portion of those funds; however, the city’s highest-priority bridge has a replacement cost of about $350 million, which far surpasses what the city may receive. Given the gap in funding for large projects, officials said they will be forced to close large bridges that are deemed unsafe if they are unable to raise the funds needed to repair them.

Some state and local officials reported that many bridges are reaching the end of their intended service life. According to officials from several states and local agencies, most bridges were designed to last 50 years. Officials from 13 of the states we interviewed reported aging bridges as a challenge. For many of these states, the challenge of aging bridges is intertwined with the challenge of inadequate funds. State DOT officials stated that aging bridges require more costly maintenance and repairs and that many need to be replaced. See table 5 for examples of challenges cited related to aging bridge inventories. Other challenges were also cited by state DOT officials; see table 6 for examples of challenges that were cited less frequently.

Bridge conditions have generally improved nationwide over the past decade. However, the increase in the number and size of bridges that are approaching the limits of their design life will likely place a greater demand on bridge owners in the near future, making it more difficult to mitigate issues in a cost-effective manner. While FHWA collects information on bridge conditions annually and maintains data on federal obligations dedicated to bridges, it lacks performance measures demonstrating the link between bridge funding and changes in bridge conditions. This is due in part to a limitation in the prior financial tracking system, which did not allow the direct comparison of federal obligations with bridge projects’ outcomes. However, with recent improvements to FMIS, FHWA has the information needed to create an efficiency measure or measures to demonstrate the link between federal funding and the outcomes for bridges. This information can support Congress in making informed choices about how best to invest the limited available resources in maintaining or improving the condition of the nation’s bridges.

We recommend that the Secretary of Transportation direct the FHWA Administrator to develop an efficiency measure or measures that demonstrate the linkage between the federal funding of bridges and the desired performance outcomes, such as maintained or improved bridge conditions, and report the resulting information to Congress.

We provided a draft of this report to DOT for its review and comment. In written comments, reproduced in appendix III, DOT concurred with our recommendation. In addition, DOT provided technical comments that we incorporated as appropriate.

We are sending copies of this report to the appropriate congressional committees and the Secretary of Transportation. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or Goldsteinm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. The names of GAO staff who made key contributions to this report are listed in appendix IV.
This report addresses the funding and management of bridges and examines (1) trends, over the past 10 years, in the condition of the nation’s bridges; (2) trends, over the same period, in federal funding of the nation’s bridges and how FHWA monitors the linkage between this funding and outcomes; and (3) changes since MAP-21 in how selected states fund and manage their bridge programs, including any challenges they face.

To determine trends in the condition of the nation’s bridges, we reviewed and analyzed FHWA’s National Bridge Inventory (NBI) data from calendar years 2006 through 2015. We limited our review of NBI data to bridges that are located on public roads and that are at least 20 feet in length. We obtained NBI data for bridges during the selected calendar years for an aggregate of all records and by state, including all 50 states, the District of Columbia (D.C.), and Puerto Rico. Specifically, we reviewed data by number of bridges and by total deck area, looking at deficiency status and year of bridge construction, among other data. We calculated total deck area from NBI data by multiplying structure length by deck width—or, in the case of culverts (structures with fill over them), by approach roadway width.

To determine trends in funding the nation’s bridges, we reviewed and analyzed federal obligations data on bridge projects in FHWA’s Fiscal Management Information System (FMIS) from fiscal years 2006 through 2015. Specifically, we obtained federal obligations data for bridge new construction, bridge replacement, bridge rehabilitation, bridge preventative maintenance, bridge protection, and bridge inspection and related training. We analyzed the data by improvement codes and by federal highway programs. In addition, we analyzed FHWA’s available data on state and local governments’ spending for bridge projects by reviewing data from the 2013 FHWA Conditions and Performance Report, reviewing FHWA’s Highway Statistics Series of reports, and interviewing FHWA officials. We assessed the reliability of the data that we used by reviewing documentation and interviewing officials about data verification, and we found the data to be reliable for our purposes. We also reviewed Office of Management and Budget (OMB) guidance and leading practices we have previously identified related to tracking, through performance measures, the linkage between funding and outcomes, and we compared current activities with this guidance and these leading practices.

To determine how states fund and manage their bridge programs, including any challenges they face, we interviewed representatives from the American Association of State Highway and Transportation Officials and the National Association of County Engineers. We also interviewed state officials from 24 states and D.C. We selected this non-generalizable sample of states because the states have large bridge inventories and relatively high levels of federal surface transportation funding, and to achieve geographic dispersion. The selected states were California, Connecticut, Florida, Hawaii, Illinois, Iowa, Kansas, Louisiana, Massachusetts, Michigan, Minnesota, Mississippi, Missouri, Nebraska, New York, Ohio, Oklahoma, Oregon, Pennsylvania, Rhode Island, South Dakota, Texas, Vermont, and Washington. From the selected states, we further selected 5 states for site visits, based on selection criteria similar to those stated above, in order to obtain additional information from each state. We selected California, Oklahoma, Rhode Island, Texas, and Washington for site visits.
In these states, we met with state transportation officials, FHWA Division Office officials, and officials from two local-government transportation agencies in each state. We selected this non-generalizable sample of local agencies based on state officials’ recommendations of nearby local agencies that could accommodate our site visit schedule. The selected local agencies were Los Angeles County (California); Placer County (California); the City of Oklahoma City; the Oklahoma Cooperative Circuit Engineering District #7; the City of Providence (Rhode Island); the Rhode Island Turnpike and Bridge Authority; the City of Austin (Texas); Williamson County (Texas); King County (Washington); and the City of Seattle (Washington).

In addition to the individual named above, Heather MacLeod, Assistant Director; Jessica Bryant-Bertail; Brian Chung; Danielle Ellingston; Dave Hooper; Ying Long; SaraAnn Moessbauer; Josh Ormond; and Amy Rosewarne made key contributions to this report.
The nation's 612,000 bridges are critical elements of the surface transportation system, but the entire system is under growing strain, and funding it is on GAO's High Risk List. While state and local governments own and maintain most of the nation's bridges, the federal government provides some funding for them, administered by FHWA. In 2012, legislative changes consolidated the bridge-funding program into other highway programs, giving states more flexibility in how to allocate funds. GAO was asked to review the funding and management of bridges. This report examines trends, over the past 10 years, in (1) the condition and (2) the funding of the nation's bridges, as well as (3) how states fund and manage their bridge programs, given the 2012 legislative changes. GAO analyzed FHWA's bridge conditions and funding data; reviewed applicable laws, relevant FHWA program guidance, and federal guidance on performance measures; and interviewed federal officials and transportation officials from 24 states and D.C., selected to include those with large bridge inventories, among other factors. Bridge conditions generally improved nationwide from 2006 to 2015, based on GAO analysis of federal bridge data. For example, the percentage of structurally deficient bridge deck area (the surface area that carries vehicles) decreased from 9 percent to 7 percent nationwide during this period. The percentage of structurally deficient bridges also decreased, from 13 percent to 10 percent nationwide. However, some states have substantially higher percentages of structurally deficient deck area than others. Bridge conditions may become more challenging to address as bridges age, because the number of bridges and the amount of total deck area increased dramatically from the 1950s through the 1970s, generally with a 50-year design life. Analysis of federal bridge data shows that the amount of structurally deficient deck area is greatest for bridges built from 1960 through 1974, indicating an expected need for additional maintenance, replacement, or rehabilitation. Federal funds obligated for bridge projects remained relatively stable from 2006 to 2015, between $6 billion and $7 billion annually in most years. During this period, the use of federal funds on bridges shifted somewhat from building new bridges to projects that preserve existing bridges, such as bridge rehabilitation or preventative maintenance. While FHWA estimates total funds dedicated to bridges and collects data on bridge conditions nationwide, it does not track the linkage between federal funds and changes in bridge conditions. GAO has previously reported that linking performance outcomes with resources invested can help agencies more clearly determine how changes in invested resources may result in changes to performance. Using such performance measures would help FHWA demonstrate the link between federal funding and outcomes for bridges. Officials from the selected 24 states and D.C. reported little change in the way they have funded and managed bridges since 2012. Officials from 21 states and D.C. reported that bridge funding has been stable since the federal bridge program was consolidated in 2012. Officials from 3 states reported an increase in bridge funding since that time.
The general stability in bridge funding may be a result of the long time frame for planning bridge projects; for example, bridge funding cycles can be 5 years or longer, a time span that means any changes would not be apparent for several years. Officials from 10 states mentioned increased flexibility in their ability to use federal funds for bridge projects. Changes from MAP-21 provided states flexibility to determine whether to spend federal highway funds on bridges or on other highway needs. Further, officials from 18 states and D.C. reported that they have not changed how they prioritize bridge projects relative to other transportation projects. With respect to challenges, officials from 14 states described inadequate funding as a challenge, and officials from 13 states reported aging bridges as a challenge. For many of these states, the challenge of maintaining aging bridges is intertwined with the challenge of inadequate funds. GAO recommends that the Department of Transportation (DOT) direct FHWA to develop measures on the linkage between the federal funding of bridges and the desired outcomes—maintained or improved bridge conditions—and report the results to Congress. DOT concurred with our recommendation. DOT also provided technical comments, which we incorporated as appropriate.
The Forest Service’s reforestation and timber stand improvement program shapes our national forests as well as their associated plant and animal communities through treatments that establish, develop, and care for trees over their lifetime. Under the National Forest Management Act (NFMA), each national forest is required to have a forest management plan describing the agency’s objectives for the forest, including those related to reforestation and timber stand improvement. To achieve these management objectives after a timber harvest or a natural event that damages forests, Forest Service staff identify areas needing reforestation and visit forest locations to plan a specific sequence of treatments needed, known as a prescription. The prescription directs how many young trees must be reestablished and the proper mix of vegetation necessary to achieve specific objectives in the forest plan, such as maintaining wildlife habitat. Reforestation prescriptions may call for planting or natural regeneration, as outlined in table 1. To plant a site, Forest Service staff order seedlings from a nursery up to 3 years in advance of planting to allow enough time for them to grow, then plant the seedlings when conditions are favorable. For natural regeneration, agency staff allow seeds from trees left on the site or nearby trees to germinate and grow, which sometimes requires removing unwanted vegetation and surface debris to improve the likelihood that the trees will survive or to accelerate their growth. As with reforestation, Forest Service staff identify areas of a forest needing timber stand improvement and prepare prescriptions. Timber stand improvement prescriptions are intended to improve growing conditions for trees in a stand and typically call for treatments such as release or thinning, as outlined in table 1. To conduct a release treatment, Forest Service staff remove competing vegetation to allow seedlings to grow; to thin a stand, agency staff remove some trees to accelerate the growth of the remaining trees or to improve forest health. Reforestation and timber stand improvement treatments are funded by various sources, principally congressional appropriations and trust funds. Congressional appropriations that fund this work include moneys allocated from the National Forest System appropriation to the reforestation and timber stand improvement program as well as to other Forest Service programs whose primary purposes include improving forest health, decreasing hazardous fuels, and rehabilitating burned areas. In addition to these moneys, the Knutson-Vandenberg Trust Fund, which collects receipts generated from timber sales, helps pay for reforestation and timber stand improvement in areas harvested for timber. While Knutson-Vandenberg funds are a dedicated source of funding for reforesting harvested lands, work in areas destroyed by natural causes, such as wildland fire, is generally funded through the National Forest System appropriation and a portion of the Reforestation Trust Fund. Reforestation Trust Fund receipts are generated by tariffs on imported wood products, and, by law, moneys transferred into this fund for the Forest Service’s use are limited to $30 million each fiscal year. Other sources of funds, such as gifts, bequests, and partnerships, also fund reforestation and timber stand improvement treatments. The Forest Service’s implementation, management, and oversight of the reforestation and timber stand improvement program are decentralized.
Forest Service headquarters and nine regional offices establish policy and provide technical direction to 155 national forest offices on various aspects of the program. These national forest offices, in turn, provide general oversight to more than 600 district offices, several of which are located in each national forest. The district offices plan, fund, and manage reforestation and timber stand improvement projects, and the managers of these offices have considerable discretion in interpreting and applying the agency’s policies and in selecting projects to fund. District office staff are responsible for assessing reforestation and timber stand improvement needs, developing prescriptions to address these needs, and accomplishing the work. Figure 1 shows a map of the Forest Service regions and highlights the regions we visited. The Forest Service’s four organizational levels—its headquarters, regional, national forest, and district offices—share responsibility for reporting reforestation and timber stand improvement needs to the Congress. Although the Director of Forest Management at headquarters is responsible for the agency-wide reporting of reforestation and timber stand improvement needs, much of the responsibility for establishing standards and procedures for collecting and reporting these data has been delegated to the regional, national forest, and district offices. Forest and district offices use automated systems to record their reforestation and timber stand improvement needs and accomplishments, and each region collects the data in one of nine regional databases and transmits its total reforestation and timber stand improvement needs to a centralized data repository. Nationally, the Forest Service consolidates the regional data to produce agency-wide reports of reforestation and timber stand improvement needs and accomplishments by national forest. These reports are submitted annually to the Congress. From fiscal years 1995 through 2004, the Forest Service reported to the Congress that the acreage of its lands needing reforestation initially declined and then increased during the last 5 years of the period, with much of this increase occurring in regions in western states. During the 10-year period, the agency also reported that the acreage of its land needing timber stand improvement generally increased, though some regions reported slight decreases in these needs. These Forest Service data, when combined with other information, are sufficiently reliable to identify a general trend of increasing needs. Nonetheless, we have concerns about the usefulness of these data in quantifying the acreage of agency land needing reforestation and timber stand improvement. These concerns arise, in part, because the Forest Service’s regions and forests define their needs differently, and they do not always systematically update the data to reflect current forest conditions or review the accuracy of the data. Agency officials acknowledge these problems but said the agency focuses its efforts on undertaking reforestation and timber stand improvement treatments and is less concerned about accurately collecting and reporting data on lands needing these treatments. Although the Forest Service is developing a new national data system, the agency does not anticipate making significant changes to improve the quality of the data. The Forest Service reports that the acreage of its lands needing reforestation declined steadily between fiscal years 1995 and 1999 but then increased from 2000 through 2004, as shown in figure 2.
During this 10-year period, the primary source of the Forest Service’s reforestation needs changed. Specifically, the agency reports that its reforestation needs attributable to timber harvests decreased steadily, while needs associated with wildland fires and other natural disturbances were relatively stable until 2000, when such needs rose dramatically with the increase in wildland fires, particularly in western states. Reforestation needs reported by the Forest Service’s Northern Region—covering all of Montana and North Dakota and portions of some adjacent states—followed the national pattern most closely. In addition to the Northern Region, the other regions we visited—the Pacific Northwest and Pacific Southwest Regions—which span western states such as Washington, Oregon, and California, also reported large reforestation needs. These regions expressed concern about the increasing level of their reforestation needs relative to their future ability to meet these needs. With respect to timber stand improvement needs, the Forest Service reports that the acreage of its lands needing such treatments increased in most years after 1995, except for 1999, 2003, and 2004, when the reported needs declined slightly (as shown in fig. 3). The agency partially attributes the decline in needs during these years to an emphasis on thinning treatments and to additional work associated with the National Fire Plan during 2003 and 2004. Officials at two of the four regions we visited, the Northern and Pacific Northwest Regions, told us they were concerned about the overall increasing level of their timber stand improvement needs. Timber stand improvement needs reported by the Forest Service’s Pacific Northwest Region—covering all of Washington and Oregon—were the highest of any region during 4 of the last 5 years. According to officials in the Pacific Northwest Region, timber stand improvement needs have accumulated, in part, because such treatments receive a lower priority than reforestation and because many stands in which high-density tree planting practices were used to replace harvested trees during the early 1990s are now in need of thinning. While nationwide timber stand improvement needs generally have been increasing over time, some regions have reported stable or decreasing trends. For example, in the Southern Region, reported timber stand improvement needs have been relatively stable over the last 10 years, while the Pacific Southwest Region has reported slightly decreasing needs since 1995. According to officials in the Pacific Southwest Region, they have less need for timber stand improvement projects because they plant fewer trees as a result of reduced timber harvests. They have increased their ability to meet these needs by emphasizing projects that are eligible for funding under the National Fire Plan because they contribute to hazardous fuels reduction goals. The Forest Service data, when combined with other information from Forest Service officials and nongovernmental experts—as well as data on recent increases in natural disturbances such as wildland fire—are sufficiently reliable for identifying relative trend information. However, we have concerns about the use of these data in quantifying the acreage of Forest Service lands needing reforestation and timber stand improvement treatments because the reported data are inconsistent and insufficiently reliable for this purpose.
These data are not sufficiently reliable because Forest Service regions define needs differently, influencing the volume of needs reported, and because regions vary in their ability to link needs to forest locations, making it difficult to detect obsolete needs and update the data to reflect current on-the-ground conditions. Additionally, the data are a mixture of actual needs and estimates and may not be routinely reviewed for accuracy. As a result, the needs reported at the regional level cannot be meaningfully aggregated at the national level. Many of these data problems are long-standing and may not be adequately addressed when the Forest Service implements a new data system. Without better data, Forest Service officials said, it is difficult to provide the Congress with estimates of the funding needed to prevent a backlog of reforestation and timber stand improvement needs. Additionally, agency officials said that, given constrained resources and competing priorities, they focus more on performing the treatments than on accurately identifying and reporting reforestation and timber stand improvement needs. The Forest Service’s nine regions have independently developed their own data collection systems and do not all use the same definitions of need, influencing the volume of needs reported. As shown by the following examples from three of the four regions we visited, we found inconsistent criteria for assessing the need for reforestation or timber stand improvement between regions, among forests within regions, and over time. The Pacific Southwest Region reports a reforestation need in areas where it anticipates a timber harvest, even though the forest is still fully stocked with trees, while other regions we visited do not report a need until after timber is harvested and the last log has been removed from the sale area. In the Northern Region, forests share common definitions of need and do not report acres of burned land as needing reforestation if they plan to allow these areas to regenerate naturally without any site preparation. In the Pacific Northwest Region, however, because definitions of need vary from forest to forest, some forests report this condition as a need and some do not. Some forests in the Pacific Northwest Region define timber stand improvement needs as those projects they currently need, while other forests in this region include projects that will not be needed until a future time. Prior to 1996, the Northern Region reported, as timber stand improvement needs, only those projects that would be needed within 5 years. After 1996, however, the region expanded its definition to include all projects identified within the past 20 years. At the same time, the region redefined the methods for justifying a timber stand improvement need. According to Northern Region Forest Service officials, these changes largely were responsible for more than doubling the timber stand improvement needs reported by this region from 1995 to 1996. Forest Service regions and national forests within regions vary in the quality of the source data they collect and report. Specifically, some regions are able to link reported needs to distinct forest locations, while others cannot. In the Northern Region, for example, all forests use a common reporting system that links reforestation and timber stand improvement needs to particular stands of trees by their mapped locations.
Officials in the Pacific Northwest Region, however, indicated they had difficulty linking reported needs to specific geographic locations because national forests within their region use different, independently developed reporting systems. Like officials in the Pacific Southwest and Southern Regions, these officials indicated that they do not always include information describing the locations of reported needs. In the Pacific Southwest Region, for example, a regional official told us that some districts link needs to “dummy stands,” or records that do not include information about where a need for treatment is geographically located. He noted that this practice speeds data entry but impairs data quality. Officials we interviewed throughout the Forest Service also acknowledged that the data include some obsolete needs and exclude some actual needs, in part because not knowing the location of all reported needs prevents the detection and removal of obsolete or erroneous needs. Differences in Forest Service data among locations are compounded because the reported reforestation and timber stand improvement needs are a mixture of actual needs diagnosed through site visits and estimates, due in part to agency guidance and variations in regional reporting practices. Although agency guidance generally requires that needs be diagnosed for a specific site and linked to a prescription for treatment, it also directs staff to estimate reforestation needs following a wildland fire or other natural disturbance and to revise these estimates within the year. We found in our visits to four regions that they vary in the extent to which they report needs based on a site-specific diagnosis or an estimate, and consequently they may understate or overstate needs. Forest Service guidance sets different standards for reporting reforestation needs that arise from timber harvests than for those created by fires or other natural disturbances, in part to promote timely reporting. For example, after a clear-cut harvest, the guidance directs regions to determine reforestation needs using a site-specific diagnosis and prescription for regenerating the acreage. In contrast, after fires or other natural disturbances, this guidance encourages staff to immediately estimate the acres in need of reforestation before they have visited forest locations to develop a site-specific prescription, and to refine their estimates while performing restoration activities. Forest Service officials commented that at times it is difficult to balance the timely reporting of needs created by natural disturbances with data accuracy. Regions we visited varied in the extent to which they used site-specific prescriptions or estimates as a basis for reporting needs. For example, although a Forest Service official in the Southern Region told us that over 100,000 acres of land there may need reforestation, in part due to insect damage, he said none of this acreage will be reported as needing reforestation until staff diagnose the needs through site visits and prescribe treatments. In contrast, forests in wildland fire-prone regions, such as those in the Pacific Southwest Region, report needs based on gross estimates after natural disturbances. In cases where reforestation or timber stand improvement needs are based on gross estimates, the reported needs may not always be adjusted after the actual needs are known, according to Forest Service officials.
For example, an official from the Pacific Southwest Region indicated that the moist climate in some areas of the region causes vegetation to grow quickly; as a result, when an area initially needs to be reforested, staff generously estimate all possible treatments needed to remove unwanted vegetation and are unlikely to update these reforestation needs, even if subsequent treatments are deemed unnecessary. On the other hand, this official indicated that staff are likely to understate the need to thin trees in some areas because they do not expect sufficient funding to address all of the timber stand improvement needs. They therefore concentrate their efforts on meeting the needs rather than on diagnosing and precisely reporting them. Officials in other regions also noted that they emphasize addressing needs rather than accurately identifying and reporting them, in part because incentives are focused on accomplishments and on meeting treatment goals established by headquarters. The Forest Service cannot attest that the reported data on needs reflect actual forest conditions nationwide because the data are not reviewed for accuracy and, when errors are found, they are not always corrected. Forest Service officials at headquarters and in the regions we visited told us that data may be overstated or understated because, with the exception of the Northern Region, they have not conducted comprehensive reviews of data accuracy in recent years and because controls over data are decentralized. Some regions do not consistently update or review their data for substantive errors before reporting them. Although Forest Service headquarters staff conduct high-level checks to ensure that some data are reported consistently, they have not conducted reviews in the last decade to ensure that the data reflect on-the-ground conditions. Consequently, an official in the Pacific Southwest Region speculated that there is an error rate of approximately 20 percent in the reforestation and timber stand improvement needs reported within the region. Even when errors are detected, there is no assurance that the data will be corrected. For example, according to an official in the Pacific Northwest Region, an error of 10,000 acres dating from 2002 remains uncorrected. We also found during our visit to this region that another error in reporting reforestation needs in 2002, compounded by an attempt to correct the error, resulted in the erroneous reporting of more than 6,000 acres of reforestation needs in one district. The problems we identified with the Forest Service’s data on reported needs are not new. In 1985, a congressional study of the Forest Service’s reforestation and timber stand improvement program found that the numbers used to report both the reforestation and timber stand improvement backlogs were unreliable because backlogged needs were not linked to specific forest locations and because data at different organizational levels could not be reconciled. This study attributed these shortcomings to a lack of centralized program management to standardize definitions of need and establish consistent reporting criteria. Subsequent reviews of the program, including a GAO review in 1991, found similar problems and recommended additional standardization. The Forest Service recognizes these problems and has acknowledged that it has not provided the Congress estimates of the funding needed to prevent a backlog, in part, because the needs data are a mixture of actual needs, estimates, and obsolete needs.
Instead, the Forest Service provides the Congress with a proposed program of work, outlining the amount of reforestation and timber stand improvement needs it will address within certain budget limits. In an attempt to improve its data and integrate its reporting between regions and headquarters, the Forest Service is introducing a new agency-wide system for collecting and reporting data on reforestation and timber stand improvement needs. The Forest Service intends to implement the new system by the end of fiscal year 2005. When the new system replaces individual district, forest, and regional systems for reporting needs with a single, agency-wide database, it will standardize how reforestation and timber stand improvement activities are tracked as well as modernize data entry, system maintenance, and security activities. However, the agency acknowledges that these changes will not, in and of themselves, address the data reliability issues that we have identified, since the Forest Service intends to transfer regional data from the current systems to the new system without altering how reforestation and timber stand improvement needs are defined, interpreted, and reported from the initial needs assessment onward. Because this system does not introduce any new procedures to standardize how needs are defined or to check for and correct errors, the consistency and accuracy of the data will still be determined at the local level. Forest Service officials told us they do not anticipate making significant changes to current agency policies and practices that make regions individually responsible for developing data collection and reporting standards and for ensuring that data are accurate. Therefore, present data deficiencies are likely to persist in the new system if existing data are incorporated into it without additional efforts to improve the data. Officials acknowledge that improving the data will require a significant investment of resources and that, unless the work is done, data reliability issues will persist. Natural disturbances, such as wildland fires or insect infestations, and management decisions are the major factors contributing to the recent increase in reforestation and timber stand improvement needs, according to Forest Service officials. The officials said that reforestation needs are accumulating primarily because a recent increase in natural disturbances has created more needs, and funding to pay for such needs is limited. Other factors, such as reforestation failures, also have contributed to increasing reforestation needs, according to agency officials. Timber stand improvement needs have accumulated, in part, because some regions do not emphasize these projects and, consequently, treatments have not kept pace with growing needs. At the same time, agency officials have been identifying more timber stand improvement needs as they have expanded the scope of work included in the program. In addition, timber stand improvement needs have been increasing because, in the 1980s and 1990s, the Forest Service used reforestation techniques that favored planting trees densely, creating stands that now need thinning. Forest Service officials told us that reforestation needs have been rising largely because such needs have increasingly been generated by causes other than timber harvests, and funding to address these needs has not kept pace.
During the early 1990s, the agency shifted its management emphasis from timber production to enhancing forest ecosystem health and, as a result, began harvesting less timber. With the reduction in harvests, revenue from timber sales decreased. As shown in figure 4, nearly 4 billion board feet of timber were harvested from Forest Service lands in 1995, whereas about 2 billion board feet were harvested in 2004. Similarly, according to the Forest Service, the timber harvested on its lands in 1995 was worth about $616 million, whereas the timber harvested in 2004 was worth about $217 million. As timber harvests and revenue have decreased, related reforestation needs also have decreased, and so the Forest Service has generally been able to meet these needs by using timber sale revenue to help pay for reforestation. Forest Service officials also noted that the value of the wood they are now selling is typically much lower than it was a decade ago, a point consistent with the per-unit arithmetic sketched at the end of this passage. According to Forest Service reports, as timber harvests and related reforestation needs were decreasing, the acreage burned in wildland fires and damaged by insects and diseases annually began to increase significantly around 2000, leaving thousands of acres needing reforestation. Nationally, wildland fires burned over 8 million acres in 2000, compared with less than 6 million acres in 1999 and about 2.3 million acres in 1998. In 2002, Colorado, Arizona, and Oregon recorded their largest fires in the last century. Similarly, figure 5 shows that the amount of land damaged by insects and diseases has increased significantly, with over 12 million acres of forest affected in 2003, compared with less than 2 million acres in 1999. As the acreage affected by these natural disturbances increased, so did reforestation needs. However, funding allocated to pay for reforestation did not increase at the same rate, so needs began to accumulate. While reported reforestation needs have been rising, funding allocated for reforestation and timber stand improvement has been relatively constant (as shown in fig. 6). In addition, pressure on limited funding was magnified in fiscal year 2001, when the Forest Service combined multiple programs under one budget, including reforestation and timber stand improvement as well as range, watershed improvement, and noxious weed management, among others. Once these programs were combined, agency officials had to balance reforestation and timber stand improvement needs against priorities in the other programs. On a broader scale, a Forest Service official said the agency must balance reforestation needs against other competing priorities when requesting a budget from the Congress, and for this reason it did not request more funding to help pay for reforestation needs during the last decade. The agency did, however, request additional funding for fiscal year 2006, according to this official. In addition to natural causes, several other factors have contributed to the reported increase in reforestation needs, according to Forest Service officials. In some areas, reforestation attempts have failed, creating needs where agency officials will try again to reforest the same lands. Reforestation efforts can fail for a variety of reasons, such as insufficient moisture, improper planting techniques, or animal damage to young seedlings. Ongoing drought conditions in the West, as well as the retirement of experienced foresters, may have played a role in recent reforestation failures, according to Forest Service officials.
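The harvest and revenue figures cited above imply a decline in the average value of each unit of timber sold, consistent with officials' observation. The back-of-the-envelope Python sketch below uses the approximate totals from the text; the conversion to value per thousand board feet is our own illustrative choice of unit, not the Forest Service's reporting convention.

# Approximate totals cited in the text (see figure 4 and accompanying discussion).
harvests = {
    1995: {"board_feet": 4_000_000_000, "value_usd": 616_000_000},
    2004: {"board_feet": 2_000_000_000, "value_usd": 217_000_000},
}

for year, h in harvests.items():
    # Average value per thousand board feet (an illustrative unit).
    per_thousand_bf = h["value_usd"] / (h["board_feet"] / 1_000)
    print(f"{year}: about ${per_thousand_bf:,.0f} per thousand board feet")

# Prints roughly $154 per thousand board feet for 1995 versus about $108 for
# 2004, supporting the statement that the wood now sold is worth much less.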
Another factor that has contributed to the reported increase in reforestation needs is that some national forests have recently acquired lands, through purchase or exchange, that need reforestation. For example, the Ozark-St. Francis National Forest in Arkansas acquired about 11,000 acres of land in 1993 and 1994 that had been harvested, and much of it needed reforestation. About 4,000 acres of the land have yet to be reforested. Nationally, timber stand improvement needs have generally been increasing over the 10-year period we reviewed because (1) some Forest Service regions emphasize reforestation over timber stand improvement; (2) agency officials have identified more needs as they have expanded the scope of timber stand improvement to include work needed to meet a wider range of objectives; and (3) past forestry practices called for dense planting, leaving a legacy of thinning needs to be addressed in the timber stand improvement program, particularly in forests that had large reforestation programs within the past 2 decades. While these circumstances have contributed to nationwide increases in timber stand improvement needs, they have not always led to increases in individual regions. According to Forest Service officials, one reason nationwide timber stand improvement needs are accumulating is that some regions prioritize funding for reforestation treatments over timber stand improvement treatments. These regions do so in part because they are required to complete reforestation treatments within 5 years of harvesting, whereas for timber stand improvement there is no such requirement. In addition, agency officials said that, generally, lands needing reforestation change more quickly than lands needing timber stand improvement, so the cost of deferring reforestation treatments is higher than that of deferring timber stand improvement projects. For example, an official in the Pacific Southwest Region estimated that if staff did not reforest an area immediately after a fire, brush would likely become established within a few years, and removing the brush could add as much as $400 per acre to the cost of reforestation. In contrast, deferring a thinning treatment for 1 or 2 years has little effect on forest conditions and treatment requirements, agency officials said, although deferring these projects for longer periods can create problems, as discussed later. Another reason national timber stand improvement needs are increasing is that the Forest Service has expanded the scope of the program, now identifying lands where timber stand improvement work is needed to meet objectives beyond maximizing timber yield, such as improving wildlife habitats or thinning hazardous fuels to reduce fire danger. As the objectives of timber stand improvement have expanded, needs have expanded accordingly. For example, the Southwestern Region has identified fuels reduction as a regional priority and consequently dedicates most of its reforestation and timber stand improvement program funding to timber stand improvement, using only moneys from the Reforestation Trust Fund—about 4 percent of the region’s 2003 program funds—to pay for reforestation projects. However, the region’s increased emphasis on fuels reduction has added to timber stand improvement needs rather than reducing them, because as the scope of timber stand improvement expands to include lands that need fuels reduction, officials are identifying many more needs than they can meet each year.
In addition, nationwide timber stand improvement needs are increasing because reforestation techniques favored in the 1980s and 1990s called for planting trees much more densely than is currently recommended. Consequently, many stands that were planted 15 to 20 years ago now need thinning, according to agency officials. For example, during the 1970s, 1980s, and early 1990s, the Idaho-Panhandle National Forest had an active timber production program, clear-cutting and harvesting thousands of acres each year and replanting densely. During that period, officials deliberately planted seedlings densely so that, as the trees grew, they could keep the largest and healthiest of them for cultivation and thin out the others. Although the Forest Service has now reduced its emphasis on timber production, thinning is still needed in these areas to maintain forest health, according to agency officials. The circumstances causing the nationwide trend of increasing timber stand improvement needs have not always led to increases in individual regions. For example, the Pacific Southwest Region has reported decreasing needs since 1994. According to agency officials, the decrease is largely a result of the decline in timber harvests and associated planting. In some parts of the country, such as Idaho, timber stand improvement projects may not be needed until 20 or 30 years after planting. However, the moist climate in some areas of the Pacific Southwest Region causes vegetation to grow quickly, so timber stand improvement projects are typically needed much sooner—between 2 and 10 years after planting. Consequently, many of the region’s harvest-related timber stand improvement needs have already been addressed, and total needs have been decreasing. In addition, like the Southwestern Region, the Pacific Southwest Region has begun to give priority to timber stand improvement projects that contribute to fuels reduction goals. According to agency officials in the region, this emphasis has helped finance timber stand improvement work and reduce needs. In the Southern Region, agency officials reported that timber stand improvement needs have been relatively stable during the period we reviewed, in part because the timber program in that region is still active and timber revenues can help pay for timber stand improvement needs. If reforestation and timber stand improvement needs continue to accumulate, the Forest Service will likely have to postpone some projects. According to agency officials, the agency’s ability to achieve forest management objectives may consequently be impaired; treatment costs could increase; and forests could become more susceptible to fire, disease, and insect damage. While Forest Service officials expressed concern about the potential harmful effects of delaying projects, the agency has not clarified priorities for the reforestation and timber stand improvement program that reflect this concern and the current context in which the program operates. Instead, regions and forests rely mainly on decision-making practices initiated when the agency’s primary focus was timber production and timber revenues allowed them to fund reforestation and timber stand improvement needs with fewer constraints. Forest Service headquarters officials acknowledged this circumstance and noted that field staff could benefit from clarified, updated national policy.
The Forest Service’s ability to meet the management objectives defined in its forest plans—such as maintaining a variety of tree species in a forest or appropriate habitat for certain wildlife—could be impaired if reforestation or timber stand improvement treatments are delayed. For example, at the Bitterroot National Forest in Montana and Idaho, agency officials have identified a management objective of establishing or maintaining ponderosa pine forests, which populated the area historically and are well adapted to high-frequency, low-intensity wildland fires. Currently, the Bitterroot National Forest has thousands of acres that need reforestation because of wildland fires in 2000. If these needs are left unattended, Douglas fir forests will likely become established instead of ponderosa pine; and, according to agency officials, Douglas fir tends to grow into crowded stands that officials believe will perpetuate the cycle of dense forests fueling severe fires. In addition, agency officials prefer ponderosa pine forests because they provide habitat for certain wildlife species, such as pileated woodpeckers. In other cases, an area previously dominated by forests could become dominated by shrubfields, compromising wildlife habitat, recreation, and timber value. In the Shasta-Trinity National Forest, an area cleared by logging and wildland fires at the turn of the century remained a brushfield for over 60 years and became forested only when the Forest Service actively planted the area. Similarly, about 750 acres in the Tahoe National Forest were cleared by a 1924 wildland fire and replaced by shrubs (shown in fig. 7) that remained until agency officials replanted the area in 1964—40 years later. One Forest Service official expressed particular concern about leaving reforestation needs unattended because, as these needs are increasingly created by natural causes such as wildland fires that burn vast areas, adverse effects have the potential to occur on a large scale. Furthermore, an agency official said that if the Forest Service cannot meet the management objectives defined in its forest management plans, it will be difficult to fulfill its mission “to sustain the health, diversity, and productivity of the nation’s forests.” Similarly, if timber stand improvement needs are not addressed, it also will be difficult to meet forest management objectives. For example, if competing vegetation is not removed, the success of recently completed reforestation treatments can be jeopardized, hindering agency efforts to meet objectives such as maintaining an area in a forested condition or reintroducing certain species of trees. If thinning needs are left unattended, forest management objectives can be thwarted as well. For example, some forests have identified areas where timber production is an objective, and thinning treatments are used to increase timber productivity by removing trees with the least potential for growth and leaving those with the greatest potential. When these treatments are delayed, trees grow more slowly and may not reach the desired size, slowing progress toward timber production objectives. If reforestation and timber stand improvement needs are not addressed in a timely manner, treatment costs also could increase because removing vegetation, which is required for most reforestation and timber stand improvement projects, becomes more costly as the vegetation grows. For example, at the Ozark-St.
Francis National Forest in Arkansas, insects have destroyed thousands of acres of red oak forests since 1999, leaving large areas that need to be reforested. Because the Forest Service has left these areas unattended, brush that must be removed before new seedlings are planted is becoming established, and removing it will be more costly as time passes. When the brush was young and small, it could have been removed with inexpensive methods, such as hand-spraying herbicides; now it will require a more expensive method, such as cutting the brush with a chainsaw, according to agency officials. If these areas are left indefinitely, trees may become established, but a different mix of species will probably replace the red oak forests, which are desirable both for their commercial value and for the habitat they provide for wildlife, such as large game. In addition, some Forest Service officials said that because there has been recent controversy over salvage timber sales—the selling of dead or dying trees—the sales have been delayed, adding costs to reforestation projects that follow salvage sales. The Forest Service could not, however, quantify such costs. Although salvage sales do not always precede reforestation, any salvage harvesting that is done is generally completed before reforesting begins because logging activities and equipment can damage young seedlings. Consequently, when salvage sales are delayed, reforestation projects are delayed as well, causing reforestation costs to increase as vegetation that must be removed before reforesting continues to grow. Also, when salvage sales are delayed, revenue declines because the value of the salvage timber decreases over time as the wood decays. According to agency officials, revenue from salvage sales was once enough to cover the administrative costs of the sale and, in some cases, to help pay for reforestation, but now it is typically not enough to pay for any reforestation. However, data are not readily available to show how common it is for salvage sales to delay reforestation projects or the extent to which revenues from salvage timber have declined, and why. If reforestation and timber stand improvement needs are not addressed, forests will be more susceptible to severe wildland fires and to damage from insects and disease, according to agency officials. When reforestation needs are left unattended, brush can grow in place of forests, providing dense, continuous fuel for wildland fires. Alternatively, exotic plant species may become established, some of which are more susceptible to wildland fires than native species. Once such invasive species become established, it is difficult to eradicate them. In addition, wildland fires may weaken some trees without killing them, leaving them susceptible to insect attack and diseases; and if reforestation needs are left unattended, an insect infestation can grow to epidemic proportions. In contrast, when the Forest Service reforests such an area, agency officials typically first remove infested trees, which can serve as carriers for insects and disease, and then plant healthy seedlings that are more resistant. Leaving timber stand improvement needs unattended also can increase forest susceptibility to wildland fire, insects, and disease. Forests that are densely populated and need thinning tend to be stressed because the trees compete with one another for sunlight, water, and nutrients.
Experts believe that when wildland fires start in such forests, the tightly spaced trees fuel the fires, causing them to spread rapidly and increasing the likelihood of unusually large fires and widespread destruction. Similarly, when insects or diseases infect such forests—especially when the trees are of a uniform species and age rather than a variety of species and ages—they can spread rapidly because the trees are stressed and close together. Although Forest Service officials expressed concern about the potential effects of leaving reforestation and timber stand improvement needs unattended, the agency has not made sufficient adjustments to address these concerns and adapt to changes in the context in which the program operates. The Forest Service has shifted its management emphasis from timber production to ecosystem management, sources of reforestation needs have shifted from timber harvests to natural causes, and budgets have become increasingly constrained. However, the agency has not adjusted the program’s direction, policies, practices, and priorities in keeping with these changes, although agency officials acknowledged the need to do so. Until it does, it will be difficult to ensure that reforestation and timber stand improvement funds are targeted toward activities that will have the greatest impact in mitigating potential adverse effects. While the Forest Service formally shifted its management emphasis from timber production to ecosystem management in the early 1990s, there remains a general lack of clarity about the agency’s mission and goals and, more specifically, about the direction and goals for the reforestation and timber stand improvement program, according to agency officials. When timber production was the emphasis, the direction for the reforestation and timber stand improvement program was clearly focused on maximizing timber production; in the current environment, the direction is less clear. Reforestation and timber stand improvement projects now are done for multiple purposes—such as improving wildlife habitat, protecting streams and water quality, and reducing susceptibility to wildland fires—but it is unclear which of these purposes, if any, are more important and how to allocate limited funds to support such diverse purposes. The lack of clarity is apparent in forest management plans, where management objectives are expressed in language that may be vague or contradictory, according to agency officials. For example, one objective in a Montana forest’s management plan calls for providing “a pleasing and healthy environment, including clean air, clean water, and diverse ecosystems.” The forest management plans are intended to help guide management decisions, such as deciding which reforestation and timber stand improvement techniques to use, but agency officials said it can be difficult to interpret the plans when making such decisions because of the vague language, conflicting management objectives, or a combination of these factors. A 2004 study in the Pacific Southwest Region found that many agency officials believe forest management plans are too generic and lack clear priorities. In the absence of program direction that is consistent with the current management emphasis, reforestation and timber stand improvement policies remain in place that reflect outdated direction and management emphasis.
For example, some reforestation policies written in the 1980s call for tight spacing between trees, consistent with the agency’s timber focus at the time. Dense planting can increase timber production and decrease competing vegetation, but it is more expensive than sparser planting and can add costs later because dense stands need to be thinned. Agency officials acknowledged that in many cases these standards are outdated and reflect neither the current emphasis on ecosystem management nor the current environment of constrained budgets. Nevertheless, officials explained that they have not changed the standards because they are not required to comply with them. Rather, they have the discretion to determine the appropriate spacing for trees on a site-specific basis and to write a prescription that deviates from the standards by relying on their professional judgment. While reliance on professional judgment may result in actions that are more closely aligned with the current management emphasis, there is no assurance that it will have such results without clear direction and policies consistent with that direction. In some places, a regional culture that reflects a former management emphasis and budgetary situation influences current practices. For example, when reforesting an area, officials in the Pacific Southwest Region almost always rely on planting—a more expensive method than natural regeneration—because they have always done so and, according to agency officials, this practice has been reinforced by the regional culture. When the agency-wide management emphasis was timber production, reforestation standards called for prompt reforestation and tightly spaced trees to maximize timber volume, so officials rarely relied on natural regeneration, which does not necessarily ensure rapid reforestation or result in tightly spaced trees. In addition, when timber revenues were higher and reforestation efforts centered on harvested areas, the region could always afford to plant. Now, as the agency’s management emphasis has shifted to ecosystem and forest health, and as budgets have become increasingly strained, officials in the Pacific Southwest Region said they are beginning to encourage greater reliance on natural regeneration, but it remains to be seen whether forests and districts will adjust their practices accordingly. Priorities for the reforestation and timber stand improvement program also reflect a lack of clarity about program direction in the context of the current management emphasis and a continued reliance on former program direction. For example, among the agency officials we talked with, there was disagreement about how funding should be allocated between reforestation and timber stand improvement work and about whether one ought to be a higher priority than the other. In the Pacific Northwest Region, agency officials wrote a 2001 report recommending that the region divert some of its reforestation funds to pay for additional timber stand improvement. The report stated that doing so is justified because (1) many of the current timber stand improvement needs resulted from reforestation projects several decades ago that favored high-density planting and (2) without thinning to help reduce the impacts of wildland fire, reforestation will continue to be needed after wildland fires. Nevertheless, the regional officials we talked with did not all agree with the recommendation, and the region has not implemented it.
Instead, the region has continued to prioritize reforestation over timber stand improvement, as it has done since the inception of the timber program. According to one regional official, the Forest Service’s history of timber production permeates current thinking, and many procedures do not reflect the current management emphasis on ecosystem health. Without clear program direction, it is difficult to determine priorities not only between reforestation and timber stand improvement but also for work within each. For the most part, the regions and forests we visited have not established clear criteria for prioritizing funding decisions, and officials do not always agree with one another about such decisions. For example, at a forest in the Pacific Southwest Region, after district officials replanted most of an area burned by a 1996 wildland fire, regional officials thought replanting the remaining burned area was a low priority because of the high per-acre cost. District and forest-level staff, however, believed it was a high priority because the area was harvested in a salvage sale after the fire, and the Forest Service is required to reforest all harvested lands within 5 years. The forest has continued to fund projects to replant the remaining area. Without clear program direction that reflects the current management emphasis and budget environment, it is difficult to identify the highest-priority investments to minimize the potential adverse effects of accumulating reforestation and timber stand improvement needs. The Forest Service needs a more accurate assessment of its reforestation and timber stand improvement needs to reflect the condition of our national forests. Although emphasizing data accuracy may divert resources from carrying out reforestation and timber stand improvement treatments in the short term, this investment is a critical foundation for providing a credible picture of these needs to Forest Service managers and the Congress. If the agency does not have accurate data, it cannot clearly define the extent or severity of its reforestation and timber stand improvement needs or effectively channel efforts and resources to meet the most important needs. Currently, the Forest Service has difficulty estimating how much it would cost to meet all of its reforestation and timber stand improvement needs because its data are inconsistent across regions and are not sufficiently reliable to accurately quantify needs. With the advent of a new agency-wide data collection system, the Forest Service has the opportunity to improve the accuracy of its data. However, the new system will only be as good as the data that are entered into it. The Forest Service should take this opportunity to address the data reliability problems by standardizing procedures, developing a common definition of need, and validating the data—verifying that reported needs accurately reflect conditions on the ground—so that it can build a well-founded budget case for funding reforestation and timber stand improvement needs. To seize this opportunity and minimize the potential adverse effects of unmet needs, it is important for the Forest Service to act soon. While it may not be possible for the agency to make all the necessary changes in time for its fiscal year 2006 appropriations request, it should aim to do so in time to support its fiscal year 2007 request.
The Forest Service also must recognize, however, that in the current, fiscally constrained environment, even well-supported budget needs may not always be funded. The shift in management emphasis from timber production to ecosystem management, combined with constrained budgets and changing sources of reforestation needs, has changed the context in which the reforestation and timber stand improvement program operates. However, the Forest Service has not updated its goals and policies for the program to reflect this change. Until the agency does so, it will be difficult to establish criteria for prioritizing the use of reforestation and timber stand improvement funds. In the current budget environment, such criteria are crucial for identifying the best investments to minimize possible adverse effects so that the Forest Service can fulfill its stewardship responsibility and ensure the lasting health and productivity of our national forests. To enhance the ability of the Forest Service to identify its reforestation and timber stand improvement needs and to ensure funding for its most critical projects, we recommend that the Secretary of Agriculture direct the Chief of the Forest Service to take the following actions: standardize collection, reporting, and review procedures for data on reforestation and timber stand improvement needs by clarifying agency-wide guidance and developing a standard definition of need; require all regions to validate their reforestation and timber stand improvement data in time for congressional deliberation of the Forest Service’s fiscal year 2007 appropriations request; clarify the direction and policies for the reforestation and timber stand improvement program to be consistent with the agency’s current emphasis on ecosystem management and appropriate for the current constrained budget environment; and require regions and forests to establish criteria for prioritizing the use of their reforestation and timber stand improvement funds in the current budget environment. We received written comments on a draft of this report from the Forest Service, on behalf of the Department of Agriculture, and from the Department of the Interior. The Forest Service concurred with our findings and recommendations. Interior also concurred with our findings related to the Bureau of Land Management’s reforestation and growth enhancement program discussed in appendix I and provided a technical suggestion that we have incorporated into the report. The Forest Service’s and Interior’s letters are included in appendixes III and IV, respectively. As arranged with your office, unless you publicly announce the contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. At that time, we will send copies of this report to other interested congressional committees. We also will send copies to the Secretaries of Agriculture and the Interior and to the Chief of the Forest Service. We will make copies available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have questions about this report, please contact me at (202) 512-3841. Key contributors to this report are listed in appendix V.
The Bureau of Land Management (BLM) manages about 261 million acres of land nationwide, including about 55 million acres of forest and woodlands, which are administered under two management programs—one for about 2.4 million acres in western Oregon, and another for the remaining 53 million acres of public domain lands, located mostly in the West. BLM’s western Oregon lands include both lands managed primarily for timber and reserve forests, which are managed primarily to meet wildlife habitat and other objectives. The public domain lands consist mainly of woodlands, with some commercial forests. We confined our review of BLM to its western Oregon lands because the majority of BLM’s reforestation and related efforts are focused there and because BLM records for its public domain lands are not in a centralized, automated database. (For more information on the scope and methodology of our review, see app. II.) Regarding trends, BLM reports that it had backlogs of acres needing reforestation and growth enhancement treatments in western Oregon in 1993, but that such needs decreased until 2002 when the backlogs were eliminated. Since then, BLM reports that it has kept pace with these needs. According to BLM officials, the backlogs—defined by BLM as needs delayed 5 years or more—developed mainly because BLM was harvesting large volumes of timber, which created reforestation needs. The backlogs were eliminated through a combination of factors, including reduced harvest levels, increased funding, and management actions taken by BLM. Agency officials believe that because they are keeping pace with their current reforestation and growth enhancement needs, they are minimizing any potential adverse effects that could result from carrying a backlog of unattended needs. BLM is required to administer its western Oregon lands in accordance with the Oregon and California Grant Lands Act of 1937. The act called for permanent forest production and protection of watersheds, among other things, on BLM’s western Oregon lands. It also established an initial upper limit of 500 million board feet of timber that could be sold annually from these lands and directed BLM to adjust the limit, based on the capacity of the land. Accordingly, BLM has adjusted the limit several times—to 1,185 million board feet per year in 1983, 211 million board feet per year in 1995 with the advent of the Northwest Forest Plan, and 203 million board feet per year in 1999, where it remains today. To fund reforestation and growth enhancement work, BLM relies mainly on funds it has allocated for its reforestation and growth enhancement program—about $25 million in 2004. In addition, a small portion of such work is funded through other sources, such as appropriations allocated for wildland fire rehabilitation and the forest ecosystem health recovery fund. For the 10-year period between 1995 and 2004, BLM reports that its annual reforestation and growth enhancement needs on its western Oregon lands generally decreased until 2002, after which annual treatments kept pace with such needs, as shown in figure 8. A 1994 Interior Inspector General report found that at the end of fiscal year 1993, BLM had a backlog of over 50,000 acres of reforestation needs and over 220,000 acres of growth enhancement needs. According to a BLM official, after the backlogs were identified, needs generally decreased (for reasons noted in the following section) until both backlogs were eliminated in 2002.
Since 2002, BLM has kept pace with its reforestation and growth enhancement needs on its western Oregon lands, agency officials said. BLM’s past backlogs developed primarily because timber harvests on its western Oregon lands had risen sharply, causing related reforestation and growth enhancement needs to increase, while funding allocated to address the needs decreased rather than increased in step with the needs. Timber harvests on BLM’s western Oregon lands were at their peak in the late 1980s with over 1 billion board feet of timber sold annually, causing a spike in reforestation and related needs. However, unlike the Forest Service, BLM does not have the authority to use timber revenues from standard timber sales for reforestation and growth enhancement treatments. Instead, BLM relies on annual appropriations from the Congress to fund such treatments. According to the Inspector General’s report, BLM had backlogs in its reforestation and growth enhancement program because it did not request or receive sufficient funding through the budget process to eliminate these backlogs and because it used about $5.4 million of its forest program funds for overhead costs not related to forestry. In addition, large wildland fires in the late 1980s and early 1990s added to BLM’s growing reforestation needs, according to agency officials. Declining timber harvests, increased funding, and actions taken by BLM combined to help eliminate the reforestation and growth enhancement backlogs, according to agency officials. In the late 1980s and early 1990s, the volume of timber sold annually on BLM’s western Oregon lands decreased considerably—from a peak of 1,583 million board feet in 1986 to a low of 14 million board feet in 1994—and associated reforestation needs decreased in parallel. According to BLM officials, the declining timber harvests were largely a result of growing controversy surrounding timber harvests and the protection of endangered species on public land. Related litigation and judicial decisions limited BLM’s ability to harvest timber on its lands. The controversy was addressed in the Northwest Forest Plan, adopted in 1994, which reduced the portion of BLM’s western Oregon lands to be managed primarily for timber. After adoption of the plan, BLM reduced the upper limit on annual timber sales from these lands to 211 million board feet. At the same time, BLM modified its harvesting methods to rely less on clear-cutting and more on thinning. Unlike clear-cut forests, the thinned forests did not need to be reforested and required fewer growth enhancement treatments, resulting in a further reduction of needs. While reforestation needs were decreasing, BLM increased the funding it allocated for reforestation and growth enhancement from about $23 million in 1995 to about $26.5 million in 1996—an increase of about 15 percent. According to agency officials, increased funding in 1996 and subsequent years enabled BLM to treat more acres annually than it had done previously, thereby reducing the backlogs. In addition to declining timber harvests and increased funding, BLM took several actions to help reduce its reforestation and growth enhancement backlogs in response to the 1994 Inspector General’s report. First, officials in the reforestation and growth enhancement program instituted measures to improve their data collection and tracking so that they could accurately quantify the size of the backlogs, locate the source of the backlogs, and track progress in eliminating them.
Second, BLM shifted its priorities, funding, and resources to target the areas where the need was greatest. BLM officials from all of the districts in western Oregon, as well as the state office, came together to agree on a list of priorities for the program, then targeted available funding and resources to the highest-priority needs. For example, they decided to place a higher priority on maintaining existing timber stands than on planting new stands, because maintenance needs made up the greatest portion of the backlog. Adhering to the prioritization scheme helped address the backlog, according to an agency official, but required staff to have fluid roles. Finally, BLM officials analyzed treatment costs per acre in each district and identified best practices to optimize their investments of scarce resources. For example, one district identified cost-saving forestry techniques for thinning, while another identified lower-cost contracting procedures. BLM then standardized these practices across all western Oregon districts. Because BLM has been keeping pace with its reforestation and growth enhancement needs on its western Oregon lands since 2002, it is preventing any adverse effects that could result from a backlog of needs, according to agency officials. To examine the trends in federal lands needing reforestation and timber stand improvement, we reviewed the Forest Service and BLM programs because most of the nation’s reforestation and timber stand improvement activities are managed by these two agencies. We focused our work primarily on the Forest Service’s program because it is larger than BLM’s and its forests cover a broader cross-section of the country. During 2004, we visited the following four Forest Service regions and one national forest in each region: Northern, Pacific Northwest, Pacific Southwest, and Southern. These regions were selected because they had the highest reported reforestation or timber stand improvement needs for fiscal years 2000 to 2003. We obtained and analyzed 10 years of national data, from fiscal years 1995 through 2004, on the Forest Service’s reforestation and timber stand improvement needs and treatments from the agency’s Timber Activity Control System for Silvicultural Activities (TRACS-SILVA). We assessed the reliability of the data by examining the TRACS-SILVA system as well as the regional data systems of the four regions we visited, which provide the source data for the national TRACS-SILVA system. To understand what standards, procedures, and internal controls are in place for collecting, reporting, and verifying needs—and to assess the accuracy and completeness of the TRACS-SILVA data—we conducted structured interviews with headquarters, regional, and forest-level officials who enter data into the data systems, maintain the systems, and prepare reports using data from the systems. We performed basic electronic testing on some of the data and reviewed manuals and other documents describing the systems, such as flowcharts and data dictionaries. To obtain information about the new agency-wide data system, known as the Forest Service Activity Tracking System (FACTS), we interviewed agency officials involved in its implementation and reviewed information on the system’s data management functions, procedures, and applications. To corroborate the TRACS-SILVA data, we obtained information about trends in the Forest Service’s reforestation and timber stand improvement needs from additional sources.
Specifically, we interviewed agency program officials and data experts in headquarters as well as in each regional and forest office that we visited to discuss the trends in reforestation and timber stand improvement needs, and we visited sites where reforestation and timber stand improvement treatments were needed. In addition, we reviewed agency reports and testimony written by foresters, budget officials, and researchers. We also reviewed nongovernmental studies and contacted outside experts to discuss these trends. Based on our review, we determined that the Forest Service data—when combined with other information we examined—are sufficiently reliable to identify general trend information, but we have concerns about whether these data accurately quantify the acreage of land needing reforestation and timber stand improvement. To identify the factors that have contributed to reforestation and timber stand improvement trends, we interviewed Forest Service officials in headquarters and the regional and national forest offices we visited. We also contacted an agency official in the Southwestern Region. We reviewed headquarters and regional reports on factors contributing to reforestation and timber stand improvement trends as well as reports from the Forest Service’s research station in the Rocky Mountain region and supplemented this information by interviewing researchers there. We obtained Forest Service data on timber harvests, wildland fires, and insect infestations during the last decade and conducted limited reliability assessments on these data. We also interviewed experts from nongovernmental organizations and reviewed publications from the organizations. To determine the potential effects of the Forest Service’s reforestation and timber stand improvement trends identified by the agency’s land managers, we interviewed agency officials (including ecologists and silviculturists) in headquarters, regional, and national forest offices. We visited the sites of ongoing and completed reforestation and timber stand improvement projects in four national forests and discussed the potential effects of delaying treatments with local Forest Service officials. We interviewed Forest Service research program officials as well as scientific and technical experts at Forest Service research stations in Arizona and Montana and at nongovernmental organizations. We also reviewed select governmental and nongovernmental publications, including scientific studies that discuss potential effects of delaying reforestation and timber stand improvement treatments and interviewed some of the authors. We limited our review of BLM to its western Oregon lands because they are central to the agency’s forest development program and because BLM does not systematically track reforestation data for its other lands. We obtained and analyzed 10 years of data, from 1995 through 2004, on BLM’s reforestation and growth enhancement needs in western Oregon. We performed a limited reliability assessment of these data and BLM’s reporting system through discussions with BLM headquarters officials and a structured interview with officials at BLM’s state office in Portland, Oregon, which oversees BLM’s western Oregon lands. We supplemented these efforts by gathering other relevant documents and reports issued by the Department of the Interior’s Inspector General. We determined that the BLM data were sufficiently reliable to use them descriptively in appendix I of this report.
To determine the factors contributing to BLM’s reforestation and forest development trends and to identify potential effects of the trends identified by the agency’s land managers, we interviewed BLM officials in Oregon and reviewed relevant BLM and Inspector General reports. We conducted our work from June 2004 through March 2005 in accordance with generally accepted government auditing standards. Other individuals making key contributions to this report were Bill Bates, Christy Colburn, Sandy Davis, Sandra Edwards, Omari Norman, Cynthia Norris, and Jay Smale.
In 2004, the Forest Service reported to the Congress that it had a backlog of nearly 900,000 acres of land needing reforestation—the planting and natural regeneration of trees. Reforestation and subsequent timber stand improvement treatments, such as thinning trees and removing competing vegetation, are critical to restoring and improving the health of our national forests after timber harvests or natural disturbances such as wildland fires. GAO was asked to (1) examine the reported trends in federal lands needing reforestation and timber stand improvement, (2) identify the factors that have contributed to these trends, and (3) describe any potential effects of these trends that federal land managers have identified. The acreage of Forest Service lands needing reforestation and timber stand improvement generally has been increasing since 2000, according to Forest Service officials and data reported to the Congress, as well as other studies. While the Forest Service data are sufficiently reliable to identify this relative trend, they are not sufficiently reliable to accurately quantify the agency's specific needs, establish priorities among treatments, or estimate a budget. The data's reliability is limited in part because some Forest Service regions and forests define their needs differently, and some do not systematically update the data to reflect current forest conditions or review the accuracy of the data. Forest Service officials acknowledge these problems, and the agency is implementing a new data system to better track its needs. While helpful, this action alone will not be sufficient to address the data problems GAO has identified. According to Forest Service officials, reforestation needs have been increasing in spite of declining timber harvests because of the growing acreage of lands affected by natural disturbances such as wildland fires, insect infestation, and diseases. In the past, reforestation needs resulted primarily from timber harvests, whose sales produced sufficient revenue to fund most reforestation needs. Now needs are resulting mainly from natural causes, and funding sources for such needs have remained relatively constant rather than rising in step with increasing needs. For timber stand improvement, the acreage needing attention is growing in part because high-density planting practices, used in the past to replace harvested trees, are creating needs for thinning treatments today and because treatments have not kept pace with the growing needs. Forest Service officials believe the agency's ability to achieve its forest management objectives may be impaired if future reforestation and timber stand improvement needs continue to outpace the agency's ability to meet these needs. For example, maintaining wildlife habitat—one forest management objective—could be hindered if brush grows to dominate an area formerly forested with tree species that provided forage, nesting, or other benefits to wildlife. Also, if treatments are delayed, costs could increase because competing vegetation—which must be removed to allow newly reforested stands to survive—grows larger over time and becomes more costly to remove. Further, without needed thinning treatments, agency officials said forests become dense, fueling wildland fires and creating competition among trees, leaving them stressed and vulnerable to insect attack.
While agency officials expressed concern about these potential effects, the agency has not adjusted its policies and priorities for the reforestation and timber stand improvement program so that adverse effects can be minimized. Forest Service officials did, however, acknowledge the need to make such changes.
We developed an initial set of results-oriented agency budget practices by reviewing literature on performance management and budgeting and speaking to budget experts. Although some of the literature we reviewed focused on executive branch departments and agencies, much of it focused on the congressional budget process or on the budget processes of state and local governments and other countries. Where applicable, we adapted the information to describe how performance information could be used for budget formulation and implementation in federal agencies. We also drew on GAO reviews and other studies of agency budgeting. To supplement the literature, we spoke to budget experts inside and outside of GAO about results-oriented agency budget practices. To test our initial findings from the literature and budget experts, we conducted a case study of budget formulation and implementation practices at the Small Business Administration. The case study helped us to refine the practices and begin to understand how the practices fit together as a framework. Finally, to obtain an operational perspective on the framework, we invited input from a panel of senior agency budget officials who commented on the importance of the practices for achieving agency goals and the challenges to implementing those practices. The panel consisted of seven senior budget officials from a judgmentally selected sample of federal agencies. The panelists represented one commission, two independent agencies, and four agencies from three departments. For the panel discussion, we provided the panelists with a questionnaire and asked them to rate each practice in terms of how important it is to an agency in achieving the agency’s goals and to note whether they thought the practice was difficult to carry out. We focused the panel discussion on areas where there was greater disagreement about the importance of a practice for helping to achieve agency goals or where the panelists believed a practice was difficult to carry out. We revised the framework to reflect the panelists’ operational perspective and described the practices in greater detail. The framework is organized into four themes that emphasize different dimensions of results-oriented agency budget practices. The first theme focuses on the budget process and asserts that performance should inform agency decisions during budget formulation and implementation. Themes 2 and 3 focus on an agency’s capacity to produce reliable budget estimates and to relate performance, budget, spending, and workforce information in a credible and useful manner. Theme 4 focuses on agency effectiveness and efficiency by emphasizing that an agency should continuously improve its programs and operations and seek approaches to maximize limited resources. Figure 1 depicts the framework for results-oriented agency budget practices. For each theme, we lay out a series of agency practices—consisting of activities, processes, and capacities—that are intended to describe how an agency could better inform its budget decision-making and find ways to make better use of available resources to accomplish agency goals. We view the practices as desirable dimensions of budgeting that could be implemented in many different ways by agencies. The characteristics and circumstances that make organizations different from one another must be recognized when considering the applicability of the practices. In appendix I, we provide a more detailed description of the practices.
Where relevant, we have also noted specific laws, regulations, or other guidance that relate to the practice and apply to federal agencies generally. The framework does not reflect every aspect of the budget process. For example, there are other aspects of budget law and guidance, such as those related to fund control and accountability, that we treat as givens. Similarly, we assume that agencies will comply with appropriations and other laws and guidance and respond to OMB and department directions in formulating and implementing their budgets. The practices do not include those aspects of the budget process that are the primary responsibility of the department, such as coordinating the preparation of budget requests within the department. The framework is oriented toward agency rather than department budget practices because there is a closer connection between performance and the day-to-day management of resources at the agency level. However, since agency budgets are the building blocks of departmental budgets, some aspects of the framework may also apply at the department level. Finally, the framework is an attempt to describe the contribution that the budget function can make to an agency’s capacity to manage for results. It is not intended to be a comprehensive treatment of all the management functions that contribute to agency results. Clearly, the budget function should work in concert with program management and other management functions, such as human capital management, accounting, procurement, and information technology management, to achieve agency goals. Overall, the panel reacted positively to our efforts to develop a framework and generally agreed that the practices were important for achieving agency goals. In this section, we list and briefly describe the practices for each of the themes. The panel also identified practices that were more difficult to implement and discussed the various challenges to implementation, which we summarize at the end of this section. The challenges are significant. Our subsequent work will focus on the progress agencies are making toward overcoming these challenges to better manage for results. The first theme focuses on the budget process and asserts that performance should inform agency resource decisions during budget formulation and implementation. Infusing performance information into budgetary deliberations may improve the agency’s ability to manage for results by increasing the likelihood that resource allocation decisions will reflect performance concerns. For example, performance information should be used to support claims for resources, to evaluate those claims, and to make decisions on tradeoffs between competing needs. For both budget formulation and implementation, Theme 1 practices emphasize communication and feedback between agency management and its program and other offices about the resources needed to achieve agency performance goals and objectives. During budget formulation, agency management should provide context in the form of general guidance to program managers on proposed agency goals, existing performance issues, and resource constraints.
Theme 1: Performance Informs Budget Formulation and Implementation
For budget formulation, agency management:
- provides general guidance to program officials on agency goals, performance issues, and resource constraints;
- requests input from program officials on the relative priority of new and existing programs and proposed changes to funding levels based on a review of changes in costs, performance issues, and other relevant factors;
- uses the input on relative priorities, changes in costs, performance issues, and other factors to weigh competing needs and decide funding levels for existing and new programs; communicates management’s decisions to program officials; and provides an opportunity to appeal the decisions;
- coordinates with other entities to achieve common goals and avoid duplication;
- justifies its budget request both within the agency and externally (e.g., with the department, the Congress, OMB) in terms of how requested funds will contribute to the accomplishment of agency goals; and
- informs its staff of departmental, OMB, and congressional actions on the budget request and obtains feedback from program officials on the implications of those actions for agency goals.
For budget implementation, agency management:
- provides guidance to program officials on changes in agency goals, performance issues, and resource constraints;
- requests updated information from program officials on the relative priority of new and existing programs and proposed changes to funding levels based on a review of changes in costs, performance issues, and other relevant factors;
- uses the input on relative priorities, changes in costs, performance issues, and other factors to weigh competing needs and decide existing and new program funding levels; communicates its decisions about funding allocations; and provides an opportunity to appeal the decisions;
- allocates funding in a timely manner;
- routinely monitors performance, spending, and budgetary resources and adjusts allocations as necessary to maximize performance against goals;
- uses input from program officials on how changes in funding allocations will affect performance; and
- coordinates program requests for postappropriations budget changes, requests input from program officials on the implications of those changes for agency goals, and communicates the results.
Because program managers are in a position to understand the performance implications of different funding levels, management should obtain their input on desired funding levels based, in part, on how the funding addresses current and potential performance gaps and the relative priorities of their activities. Agency management should then use this information to evaluate competing needs and to determine funding levels to request and provide program managers an opportunity to appeal its decisions. In formulating the budget request, agency management should also seek input from outside the agency on issues affecting the agency’s performance. For example, the agency should coordinate with other entities with similar or complementary goals. The agency should justify its budget request in terms of how requested funding levels contribute to achieving agency goals and should inform staff of budgetary actions so that agency management can elicit feedback on performance issues. Similarly, when requesting postappropriations budget changes, the agency should communicate the results to program managers and determine the implications, if any, for achieving agency goals.
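To make this weighing of competing needs concrete, the following sketch shows one simple way an agency could rank requests by a management-assigned priority and fit them under a funding target. It is an illustrative sketch only; the framework does not prescribe any particular method, and all program names, priorities, and dollar amounts below are invented.

```python
# Illustrative sketch only: ranks hypothetical funding requests by a
# management-assigned priority and fits them under a total funding target,
# one simple way to "weigh competing needs and decide funding levels."
# All program names, priorities, and dollar amounts are invented.

requests = [
    # (program, requested amount, priority: 1 = highest)
    ("Inspection modernization", 4_000_000, 1),
    ("Field office staffing",    6_500_000, 2),
    ("Data system upgrade",      2_000_000, 3),
    ("Outreach pilot",           1_500_000, 4),
]

def allocate(requests, target):
    """Fund requests in priority order until the target is exhausted."""
    remaining = target
    decisions = []
    for program, amount, _priority in sorted(requests, key=lambda r: r[2]):
        funded = min(amount, remaining)
        remaining -= funded
        decisions.append((program, amount, funded))
    return decisions

for program, asked, funded in allocate(requests, target=12_000_000):
    note = "fully funded" if funded == asked else f"reduced to ${funded:,}"
    print(f"{program}: requested ${asked:,}, {note}")
```

In practice, as the narrative above notes, such decisions are iterative and subject to appeal; a real agency would weigh performance implications and program input, not priority rank alone.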
During budget implementation, an agency needs to reconcile appropriated funds and congressional priorities with its earlier budget request and current operating needs. For example, agency management should update its guidance on program goals, performance issues, or resource constraints to reflect significant changes since the formulation of the budget. An agency may need to revisit the priorities established during budget formulation to make informed decisions about how to allocate funding that may be more or less than requested. Similar to formulation, agency management should then use this information to evaluate competing needs and determine final program funding levels. To help managers meet their goals, after appropriations are enacted, the agency should allocate in a timely manner funds needed for program operations. To keep performance on track, the agency should routinely monitor performance, spending, and available resources and make adjustments as needed after obtaining input from program managers on the performance implications. In contrast to Theme 1, which is keyed to the annual budget cycle, the next two themes address an agency’s capacity to produce quality information for decisionmakers during the budget process. An agency’s costs and budgetary resources will change from year to year based on a variety of factors. Often, agencies must grapple with the challenge of achieving performance goals with flat or declining budgetary resources and increasing costs. Theme 2 practices focus on providing decisionmakers with reliable estimates of program costs and budgetary resources to build credible requests for the resources an agency needs to achieve its goals.
Theme 2: Produces Reliable Estimates of Costs and Resources. The agency:
- bases its budget estimates on reasonable assumptions about factors affecting program costs or budgetary resources;
- looks back to assess the accuracy of previous estimates and, if necessary, makes appropriate adjustments to its estimating methods;
- considers how its policy, program, and funding decisions may affect spending or budgetary resources for other programs within the agency; and
- considers the short- and long-term funding implications of its program or policy decisions.
The practices in this theme are premised on the notion that agencies that base their budget estimates on the most up-to-date and reasonable assumptions will be better equipped to make tradeoffs between covering cost increases and meeting other programmatic needs. Those that ignore persistent differences between estimated and actual costs or budgetary resources will face greater uncertainty and have less time to plan for potential funding imbalances. Furthermore, agencies that make an effort to identify how funding decisions that affect one area of spending or budgetary resources might also affect other areas will have more information with which to address unanticipated funding or performance issues that may arise. In addition, decisionmakers need good cost estimates to assess the affordability and desirability of policy and program options that may have long-term cost implications. Theme 3 practices address an agency’s capacity to relate performance, budget, spending, and workforce information.
This capacity can facilitate the implementation of Theme 1 practices that involve incorporating performance information into budget decisions, such as requesting program manager input on program performance and funding needs or monitoring program performance and spending and making adjustments to address performance gaps. Results-oriented budgeting implies that an agency has the capacity to relate its budget to its goals. At a minimum, GPRA requires an agency’s performance plan to cover each program activity in the President’s budget request for that agency. To meet this requirement and to make progress toward the goal of integrating agency performance plans and budget requests, OMB guidance states that agencies should display, by GPRA program activity, the funding being applied to achieve the performance goals and indicators for that activity. OMB may also request agencies to provide a crosswalk between performance goals and the specific budget accounts funding those goals.
Theme 3: Can Relate Performance, Budget, Spending, and Workforce Information. The agency:
- can relate its budget structure to its goals;
- can relate budget, workforce, accounting, and performance information; and
- can account for both the direct and indirect costs of its programs and associated goals.
OMB encourages agencies to consider changes to their budget account structure that would lead to more thematic or functional presentations of both budget and performance information. An alternative to altering the budget structure is to use cost accounting concepts to capture how appropriated funds are spent according to agency goals. For example, an agency could define its goals as cost objects and distribute the agency’s direct and indirect administrative and program costs against those cost objects through such methods as direct time charging or other valid cost allocation methods. An extension of an agency’s capacity to relate its budget structure to its goals is the capacity to relate and use budget, accounting, workforce, and performance information to formulate and implement the budget. The ability to relate accounting to budget information is fundamental to maintaining control of and accountability for appropriated funds. The capacity to relate performance to budget and accounting information entails establishing a predictable and verifiable relationship between programs, goals, performance indicators, budgets, and spending and being able to report this information in an integrated manner for use by management. Furthermore, information on the agency’s workforce, such as the number of new hires and separations and salary and benefit levels, is critical to estimating and managing the cost of the workforce. Theme 4 practices suggest that agency management should not assume the status quo in the approach it takes to achieving the agency’s goals from one budget cycle to the next. The budget process can provide an opportunity for the agency to review evaluations of its programs and operating methods to help improve results. One method agency management should use to identify opportunities to improve performance is to analyze the full costs of its programs, defined in context, including unit costs where appropriate. For example, when combined with effectiveness measures, unit cost measures can help managers see tradeoffs between competing needs by highlighting the relative costs and benefits produced by different operating units. Agency management should also identify potential alternative sources of funding, if appropriate.
For instance, agencies that provide direct services either to segments of the public or to other agencies could consider proposing legislation that would give them authority to charge fees to pay for those services. Finally, agency management should use information about program effectiveness and efficiency, such as program evaluations or benchmarking studies, to challenge existing operating procedures and methods of program delivery and to identify alternatives that may accomplish agency goals more efficiently and effectively.
Theme 4: Continuously Seeks Improvement. The agency:
- uses information on program effectiveness, such as program evaluations, to determine if programs are producing desired results with resources provided and identifies alternative approaches that could accomplish agency goals more effectively and efficiently;
- analyzes the direct, indirect, and, if possible, unit costs of activities to identify opportunities to improve effectiveness and efficiency; and
- considers the performance and implications of alternative budgetary resources.
A key challenge cited by the panel of senior agency budget officials was the difficulty of incorporating agency goals into budgetary decisions given the tight time constraints of the annual budget cycle. The panelists also cited challenges to using performance information for budget decision-making. For example, performance information may not be timely or may not be relevant to new initiatives or goals being proposed. In addition, the panelists spoke of the difficulty of relating performance, budget, spending, and workforce information because goal and performance information does not mesh well with agency budget and accounting information and information systems that could help relate this information are not always available. A detailed list of challenges cited by the panelists appears in appendix II. The next phase of our work will look at how agencies have found innovative ways to address these challenges and implement results-oriented agency budget practices. By sharing examples from these agencies, other agencies may adapt and apply elements of those practices that, ultimately, may improve their ability to manage for results.
Theme 1: Performance Informs Budget Formulation and Implementation. For budget formulation, agency management:
1.a. Provides general guidance to program officials on agency goals, performance issues, and resource constraints. The agency issues to program managers written guidance on budget formulation (sometimes called a “spring planning call” or “budget call”) that sets the reporting requirements and funding targets for program-level budget formulation activities. The guidance contains the major factors program managers need to consider as they prepare their requests for resources. Major factors should include the agency’s goals for the formulation year, performance issues, and funding targets that will constrain program proposals for increased spending.
1.b. Requests input from program officials on the relative priority of new and existing programs and proposed changes to funding levels based on a review of changed costs, performance issues, and other relevant factors. The input should provide information on requested funding levels for each activity. It should also indicate the relative priority of the activities for accomplishing agency goals so that lower-priority activities can be weighed against other ongoing or new funding proposals.
Estimates should reflect:
- Annualization of personnel costs: The annual cost of existing staff, including the annualized cost of staff hired during the current fiscal year.
- Annualization of other recurring costs: Of funding provided for the current year—the annual cost of recurring items, such as rent or ongoing contractual services, in the budget formulation year.
- Reductions for one-time costs: Of funding provided for the current year—the amount of reductions for items that were one time or time limited in nature, such as new office equipment, higher than normal travel costs, or terminated or completed contracts.
- Reasonable assumptions: See practice 2.a.
- Performance issues: How actual performance has compared to goals. The input should describe the reasons for performance that exceeded or fell short of goals and whether and how additional budgetary resources might influence performance against proposed goals.
- Statutory or other relevant changes: Estimates of costs to implement new legislation or guidance contained in appropriations committee reports.
Related guidance: OMB Circular A-11, Sec. 30. (A worked example of these baseline adjustments, using hypothetical figures, appears after practice 1.d below.)
1.c. Uses the input on relative priorities, changing costs, performance, and other factors to weigh competing needs and decide funding levels for existing and new programs; communicates management’s decisions to program officials; and provides an opportunity to appeal the decisions. The agency collects program managers’ input on priorities and proposed funding levels and uses the input to make judgments about the funding levels to be requested in submissions to the department (if applicable), OMB, and the Congress. Ideally, an agency might rank the competing needs based on their relative contributions to achieving goals. Note that formulation of an agency’s budget request is an iterative process in which requested resources are subject to external scrutiny and change as the agency’s request is first weighed against other department programs and priorities and the department’s request is weighed against other executive branch priorities. Therefore, input obtained from program officials is considered a first step, and many other levels of review and decision-making occur before final decisions are made. After collecting and considering program input and making decisions based on the input, the agency communicates in writing its decisions about the funding levels being requested for each program. The agency then allows program officials to provide feedback about the impact of funding increases or reductions on the performance of their programs and to appeal management’s decisions.
1.d. Coordinates with other entities to achieve common goals and avoid duplication. As part of its planning processes, an agency should consider the environment in which it operates, identify other key players that contribute to accomplishing the agency’s mission and goals, and satisfy itself that the agency is not duplicating the efforts of others or missing opportunities to improve performance through cooperation. In formulating its budget request, the agency should incorporate the results of this analysis by allocating resources to areas where performance can be improved through cooperation with other entities and away from activities that duplicate the efforts of others.
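As noted under practice 1.b, the baseline adjustments in the estimates list are simple arithmetic: annualize partial-year costs and back out one-time items. The following worked example uses entirely hypothetical figures to show how a maintenance-level estimate might be built from a current-year budget.

```python
# Worked example of the practice 1.b baseline adjustments
# (all figures hypothetical).

current_year_budget = 10_000_000

# Annualization of personnel costs: staff hired mid-year cost $300,000
# this year but would cost $600,000 for a full year.
personnel_annualization = 600_000 - 300_000

# Annualization of other recurring costs: a contract that began three
# months into the year needs one more quarter of funding for a full year.
recurring_annualization = 100_000

# Reductions for one-time costs: equipment purchases and a completed
# study drop out of the base.
one_time_costs = 450_000

maintenance_level = (current_year_budget
                     + personnel_annualization
                     + recurring_annualization
                     - one_time_costs)

print(f"Maintenance-level estimate: ${maintenance_level:,}")
# Prints: Maintenance-level estimate: $9,950,000
```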
1.e. Justifies its budget request both within the agency and externally (e.g., with the department, the Congress, OMB) in terms of how requested funds will contribute to the accomplishment of agency goals. The agency prepares budget justification documents, both for internal and external review, that demonstrate how the agency’s funding requests relate to the accomplishment of its goals. The justification documents should demonstrate how the agency’s funding request would help the agency accomplish the goals in its annual performance plan. The goals in the annual performance plan and the agency’s budget justification should be consistent. The agency should also be prepared to discuss the performance implications of funding levels that differ from the request. GPRA requires an agency’s performance plan to cover each program activity in the President’s budget request for that agency. To meet this requirement, an agency’s performance plan should demonstrate how all of its budgetary resources by program activity are associated with the goals in its annual performance plan. However, an agency’s budget account and program activity structure does not always neatly crosswalk to the goals in its annual performance plan. Therefore, GPRA gives agencies the flexibility to consolidate, aggregate, or disaggregate program activities, so long as no major function or operation of the agency is omitted or minimized. Related guidance: OMB Circular A-11, Secs. 51, 220.
1.f. Informs its staff of departmental, OMB, and congressional actions on the budget request and obtains feedback from program officials on the implications of those actions for agency goals. The agency should continuously monitor departmental, OMB, and congressional actions on the budget request and communicate those actions to staff. For example, agencies can provide timely information by e-mail or through an internal Web site. The agency should also seek input from program officials on the implications of those actions for accomplishing the goals in the agency’s performance plan. For example, to begin contingency planning as soon as possible, an agency might wish to seek input from program officials on actions on the budget request that have significant resource implications, such as those that will require the implementation of a new program or significant staff reductions.
1.g. Provides guidance to program officials on changes in agency goals, performance issues, and resource constraints. Between the time an agency formulates its budget request and the time it implements its budget, many operating assumptions may have changed. For example, there may be legislative changes to programs, new performance issues, or changes in cost assumptions, such as those for rent or health insurance. As the agency prepares to implement its budget, it should issue written guidance to program officials on known or anticipated changes in the agency’s goals, performance issues, and resource constraints since formulation. For example, if anticipated resources are less than requested to achieve the goals in the annual performance plan, the agency should highlight the potential performance gap and begin to address the issue as part of the performance management and budgeting process. Similarly, updated performance information could provide information on where performance is leading or lagging and be useful in planning resource allocation.
1.h. Requests updated information from program officials on the relative priority of new and existing programs and proposed changes to funding levels based on a review of changed costs, performance issues, and other relevant factors.
The agency issues written guidance to program managers requesting their input on their funding needs. The agency could set funding targets that impose a reasonable limit on what programs can request. By seeking input from program managers, the agency does not assume that all programs will automatically be funded at a maintenance level. The input should provide information on requested funding levels for each activity. It should also indicate the relative priority of the activities for accomplishing agency goals so that lower-priority activities can be weighed against other ongoing or new funding proposals. Although we list virtually the same factors here as for formulation (see practice 1.b), the emphasis should be on significant changes in the factors that may affect priority for funding. Estimates should reflect:
- Annualization of personnel costs: The annual cost of existing staff, including the annualized cost of staff hired during the current fiscal year.
- Annualization of other recurring costs: Of funding provided for the current year—the annual cost of recurring items, such as rent or ongoing contractual services, in the budget formulation year.
- Reductions for one-time costs: Of funding provided for the current year—the amount of reductions for items that were one time or time limited in nature, such as new office equipment, higher than normal travel costs, or terminated or completed contracts.
- Reasonable assumptions: See practice 2.a.
- Changes in performance issues: How program performance compared to goals for the most recent year available. The input should describe the reasons for performance that exceeded or fell short of goals, and whether and how additional budgetary resources might influence performance against proposed goals.
- Statutory or other relevant changes: Estimates of costs to implement new legislation or guidance contained in appropriations committee reports.
1.i. Uses the input on performance, goals, and other factors to weigh competing needs and decide existing and new program funding levels; communicates its decisions about funding allocations; and provides an opportunity to appeal the decisions. Because many changes in operating conditions and resource constraints can occur between budget formulation and implementation, an agency will generally need to rethink its priorities and reweigh competing needs to determine the level of funding to be allocated to each program area. Therefore, the agency should consider program managers’ input on proposed funding levels needed to maintain current services and address new program needs and should use the input to make judgments about the funding levels to be allocated. Prior to making final allocations, the agency communicates in writing its decisions about the funding levels being allocated to each program. The agency’s budget process allows program officials to provide feedback about the impact of funding increases or reductions and to appeal management’s decisions.
1.j. Allocates funding in a timely manner. To maximize performance, after appropriations are signed into law, the agency should allocate in a timely manner funds needed for program operations. Advance planning, by enabling an agency to make final funding decisions quickly once funds have been appropriated, is the key to success in this area.
- Preparing preliminary operating plans based on appropriations actions: As far in advance of the new fiscal year as practical, the agency should ask program officials to prepare preliminary operating plans based on preliminary decisions about funding allocations for the upcoming fiscal year. Because final funding outcomes are uncertain at this point, the agency should base its plans on the most likely budget outcome and reserve a portion of the funding to make final adjustments.
- Adjusting the plans when final appropriations actions take place: After funds have been appropriated, warranted, and apportioned, the agency should quickly determine final funding allocations based on information from the preliminary operating plans.
- Allocating appropriated funds as soon as possible thereafter: The agency should be prepared to quickly inform program officials of final funding decisions and enter the funding allocations into the agency’s financial management system.
- Finalizing operating plans to be used for monitoring purposes: Program officials submit final operating plans based on the final allocations.
1.k. Routinely monitors performance, spending, and budgetary resources and adjusts allocations as necessary to maximize performance against goals. The agency has processes in place to collect, analyze, reconcile, and report periodically during the fiscal year information on performance, spending, and budgetary resources against plans so that management has credible, up-to-date information for monitoring and decision-making. Such monitoring should form the basis for decisions that address performance gaps by looking for root causes and, if necessary, adjusting funding allocations to rectify performance problems. In addition, the agency should maximize available resources by tracking the availability of unobligated balances and monitoring the status of obligations so funds can be deobligated when they are no longer needed for a given transaction. There should also be some indication that program managers are reconciling accounting transactions on at least a monthly basis. (A minimal sketch of this kind of comparison appears at the end of this theme.) Related guidance: OMB Circular A-34, Secs. 30, 80.
1.l. Uses input from program officials on how changes in funding allocations will affect performance. The agency makes decisions about changes in funding allocations based in part on input from program officials on how the changes will affect performance. For example, the agency should evaluate requests for midyear increases in funding in terms of their contribution to the agency’s performance. Similarly, decisions to reduce a funding allocation midyear to address other funding priorities should use information on how the reduction will affect program performance and, if appropriate, revise performance targets to reflect reduced funding.
1.m. Coordinates program requests for postappropriations budget changes, requests input from program officials on the implications of those changes for agency goals, and communicates the results. During the fiscal year, an agency may seek supplemental appropriations. If the agency decides or is required to go forward with a request, a number of steps need to be taken. OMB’s Circular A-11 provides guidance on the materials that must be submitted. The agency obtains input from program officials about the effect of proposed budget changes on achieving agency goals and communicates this information in its request for funding changes. As decisions are made, the agency communicates the information to program officials. Related guidance: OMB Circular A-11, Sec. 110.
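The monitoring described in practice 1.k ultimately rests on comparing actual obligations with planned obligations. The sketch below illustrates that comparison with invented programs and figures; it is not a prescribed method, and the 10 percent tolerance is an arbitrary assumption for illustration.

```python
# Minimal sketch of the plan-versus-actual comparison behind practice 1.k
# (hypothetical programs and figures; the 10 percent tolerance is an
# arbitrary assumption).

TOLERANCE = 0.10

programs = {
    # program: (planned obligations to date, actual obligations to date)
    "Program A": (2_500_000, 2_450_000),
    "Program B": (1_200_000,   900_000),  # lagging: balances to review
    "Program C": (3_000_000, 3_400_000),  # outpacing: potential shortfall
}

for name, (planned, actual) in programs.items():
    deviation = (actual - planned) / planned
    if abs(deviation) > TOLERANCE:
        direction = "ahead of" if deviation > 0 else "behind"
        print(f"{name}: {abs(deviation):.0%} {direction} plan; review the "
              f"allocation and its performance implications")
    else:
        print(f"{name}: on track")
```

A lagging program is not automatically a candidate for deobligation, and an outpacing one is not automatically underfunded; as the practice notes, management should look for root causes before adjusting allocations.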
For all practices, we assume that agencies will comply with appropriations and other laws and guidance and respond to OMB and department directions in formulating and implementing their budgets.
Theme 2: Produces Reliable Estimates of Costs and Resources
2.a. Bases its budget estimates on reasonable assumptions about factors affecting program costs or budgetary resources. OMB Circular A-11 provides agencies with guidance on certain basic assumptions about costs to be used in preparing budget requests. For example, while agencies may consider the effects of inflation on their costs, budget requests must stay within the budget planning guidance levels provided by OMB. Regardless of these requirements, however, an agency’s costs and budgetary resources change from year to year based on a variety of factors, and agencies must grapple with the challenge of achieving performance goals while finding funding for programs with increasing costs or declining resources. Agencies that base their budget estimates on the most up-to-date and appropriate assumptions will be better equipped to make tradeoffs between covering these cost increases and other programmatic needs. An agency should thoroughly explore the factors that are most likely to affect program costs and budgetary resources, such as inflation, personnel costs, and program demand. An agency that provides direct services should be concerned about estimating the demand for that service and should use appropriate assumptions about demographic and economic changes. Related guidance: OMB Circular A-11, Secs. 30, 32.
2.b. Looks back to assess the accuracy of previous estimates and, if necessary, makes appropriate adjustments to its estimating methods. Agencies employ a variety of models and other estimating techniques to forecast costs and budgetary resources for budget formulation and implementation. Agencies should be concerned about the accuracy of these models and techniques because inaccurate forecasts can result in higher-than-planned program costs or funding shortages that can affect the agency’s ability to achieve performance goals. To improve the accuracy of its cost or resource forecast, the agency should periodically examine its estimating methods and, if necessary, make changes. For example, persistent variations between planned and actual spending or budgetary resources should be assessed. The agency should also review information from audited financial statements not covered in traditional budget presentations. For example, an agency should consider factoring in the cost of addressing significant accrued liabilities, such as the cost of accrued, unfunded annual leave for eligible retirees. Related guidance: OMB Circular A-34, Sec. 30.
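The look-back in practice 2.b can be as simple as comparing several years of estimated and actual costs and checking for persistent bias. The following sketch uses hypothetical figures; the 2 percent threshold is an assumption for illustration, not guidance.

```python
# Illustrative look-back for practice 2.b (hypothetical figures): compares
# prior-year cost estimates with actuals and flags persistent bias that
# suggests the estimating method needs adjustment. The 2 percent threshold
# is an assumption for illustration.

history = [
    # (fiscal year, estimated cost, actual cost)
    (2000,  9_500_000,  9_900_000),
    (2001,  9_800_000, 10_300_000),
    (2002, 10_100_000, 10_700_000),
]

errors = [(actual - estimate) / estimate for _, estimate, actual in history]
average_error = sum(errors) / len(errors)
one_sided = all(e > 0 for e in errors) or all(e < 0 for e in errors)

print(f"Average estimating error: {average_error:.1%}")
if one_sided and abs(average_error) > 0.02:
    print("Estimates are persistently biased; revisit assumptions such as "
          "inflation or program demand before the next cycle.")
```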
2.c. Considers how its policy, program, and funding decisions may affect spending or budgetary resources for other programs within the agency. The agency should not view individual funding, program, or policy decisions in isolation because they can have ramifications for estimates of the performance, costs, or budgetary resources of other agency programs. For example, technology investments may create savings in a variety of programs.
2.d. Considers the short- and long-term funding implications of its program and policy decisions. When assessing the affordability and desirability of policy and program options, the agency should consider the long-term cost implications of these options while determining how much funding to request in the short term. According to OMB Circular A-11, agency budget requests for acquisition of capital assets must propose full funding to cover the full costs of the project or a useful segment of the project. Failure to provide decisionmakers with adequate information about long-term cost implications may lead to decisions that are based upon incomplete or misleading information, potentially increasing costs or creating inefficiencies. Related guidance: OMB Circular A-11, Sec. 31.4.
Theme 3: Can Relate Performance, Budget, Spending, and Workforce Information
3.a. Can relate its budget structure to its goals. GPRA requires an agency’s performance plan to cover each program activity in the President’s budget request for that agency. To meet this requirement, an agency should have a credible method of relating obligations to goals. However, an agency’s budget account and program activity structure do not always neatly crosswalk to the goals in its annual performance plan. To demonstrate the relationship between budget program activities and goals, an agency may have to consolidate, aggregate, or disaggregate program activities. There are several approaches available that provide credible methods for relating obligations to goals. For example, an agency could define its goals as cost objects and accumulate obligations against those cost objects through such methods as direct time charging or other valid cost allocation methods. Related guidance: OMB Circular A-11, Secs. 71, 220; OMB Circular A-123.
3.b. Can relate budget, workforce, accounting, and performance information. For budget formulation and implementation, the agency can relate budget, workforce, accounting, and performance information to support decision-making. For example:
- Budget reports may show information on program obligations or outlays and their associated goals or performance measures.
- Accounting information can be rolled up to support budget information. The agency’s accounting system data on spending ties directly to actual budget spending and can be used during budget formulation or implementation. Data from the agency’s standard general ledger can be crosswalked to the agency’s SF-133 Report on Budget Execution and Budgetary Resources and the actual year column of the Program and Financing Schedule in the President’s Budget.
- Financial and performance systems use uniform terminology and coding and avoid duplicating data entry and the use of supplementary systems.
- The agency’s budget information systems are linked to performance information so that reports do not require multiple data entry and agency management can readily view information on obligations, outlays, and budgetary resources related to performance. For example, staffing reports link fiscal and performance data.
Related guidance: OMB Circulars A-11, Sec. 220; A-34, Sec. 50; and A-127, Sec. 7. Treasury Financial Manual Standard General Ledger Supplement.
3.c. Can account for both the direct and indirect costs of its programs and associated goals. The agency has an information system that breaks out spending information into both direct (e.g., program staff, benefits, rent, contracts, or grants) and indirect costs (e.g., overhead services such as accounting or human resources staff or agencywide information technology systems) for the agency’s programs and associated goals. Related guidance: Statement of Federal Financial Accounting Standards No. 4, “Managerial Cost Accounting Concepts and Standards for the Federal Government,” July 31, 1995.
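One way to implement the cost-object approach described in practices 3.a and 3.c is to charge direct costs to each goal and distribute indirect costs using an allocation base. The sketch below uses direct cost as the base; the goals and amounts are entirely hypothetical, and direct time charging or another valid allocation method could substitute.

```python
# Hypothetical sketch of the cost-object approach in practices 3.a and 3.c:
# direct costs are charged to each goal, and indirect (overhead) costs are
# distributed using direct cost as the allocation base. Goals and figures
# are invented.

direct_costs = {
    "Goal 1": 5_000_000,
    "Goal 2": 3_000_000,
    "Goal 3": 2_000_000,
}
indirect_costs = 1_500_000  # e.g., accounting, human resources, agencywide IT

total_direct = sum(direct_costs.values())
for goal, direct in direct_costs.items():
    allocated = indirect_costs * direct / total_direct
    print(f"{goal}: direct ${direct:,.0f}, indirect ${allocated:,.0f}, "
          f"full cost ${direct + allocated:,.0f}")
```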
Theme 4: Continuously Seeks Improvement

4.a. Uses information on program effectiveness, such as program evaluations, to determine if programs are producing desired results with resources provided and identifies alternative approaches that could accomplish agency goals more effectively and efficiently.

GPRA calls for agencies to describe the program evaluations used to establish or revise general goals and objectives and to provide a schedule of future program evaluations. In addition to agency-sponsored evaluations, external assessments by auditors, academics, industry, clients, public interest groups, and others can provide information on the relative effectiveness and efficiency of agency programs. Program evaluations provide agency management an important tool for informing decisions about the tradeoffs between new and existing programs when formulating or implementing the agency's budget. Furthermore, agency management should seek to improve agency performance and reduce costs by exploring alternative approaches to accomplishing agency goals. To determine whether alternative approaches to the agency's work provide greater value, the agency could, for example, network with other agencies or their budget offices, benchmark state-of-the-art practices in organizations with similar missions or service delivery mechanisms, and track reforms that could bring about efficiencies if implemented.

Related guidance: OMB Circular A-11, Sec. 210.

4.b. Analyzes the direct, indirect, and, if possible, unit costs of activities to identify opportunities to improve effectiveness and efficiency.

The agency tracks the direct, indirect, and, if possible, unit costs of its activities and uses this information to compare the cost of its activities to appropriate benchmarks and to bring about improvements in efficiency over time. Agency management uses this analysis to inform decision-making during budget formulation and implementation.

Related guidance: OMB Circular A-11, Sec. 30.

4.c. Considers the performance and implications of alternative budgetary resources.

The agency explores alternative budgetary resources to accomplish agency goals more effectively and efficiently. For example, agencies that provide direct services either to segments of the public or to other agencies could consider proposing legislation that would give them authority to charge fees (offsetting collections) to pay for the services.

We asked the panel of senior agency budget officials to identify practices that would be difficult to implement and to discuss some of the challenges to implementation. The panelists cited general challenges to linking the planning and budgeting processes as well as specific challenges to using performance information in the budget process; reallocating funds to address performance issues; relating budget, performance, and other information; and examining and improving current operations. The following describes the challenges discussed by the panel.

Budgeting and planning have different time horizons: The budget process focuses on obtaining funding for the upcoming fiscal year. In contrast, strategic planning has a long-term horizon and establishes goals that can take multiple years to accomplish.

Time pressures can drive budgeting: Without top management commitment to results, the budget process, given its tight time constraints, may proceed on its own track and may not result in budget decisions aligned with strategic goals.
Budget environment is not always flexible: Agencies operate in an environment where allocations may be restricted by amount or activity, limiting flexibility in shifting resources to achieve results.

Budgets are not usually structured around goals: Performance information does not mesh well with most agency budget and accounting structures because budgets usually are structured around organizations, functions, or programs instead of goals and objectives.

Changing budget structures is costly and may inhibit tracking costs from year to year: Adopting a budget structure that is keyed to agency goals implies that the budget structure would need to change over time to reflect changing goals. However, the structure of an agency's budget needs to remain relatively stable to track costs consistently from one year to the next and to avoid ad hoc agency reporting or costly changes to financial systems.

Input from program officials can be inhibited: In an agency where management is accustomed to making decisions in a top-down manner, decisions may not reflect input from program officials or other staff offices and may instead reflect management priorities unrelated to program performance goals.

Expectation gaps may be created: Obtaining input from program managers on their funding priorities related to performance may create expectation gaps because an agency must weigh the input and make tradeoffs that reflect agencywide, rather than program-level, priorities.

Performance information may not be relevant to new initiatives: Changes in top management's priorities or agency goals can reduce the relevance of prior performance information for budget decisions.

Performance information may not be timely: For example, an agency formulating its budget for fiscal year 2003 must submit its request to OMB by September 2001. As of that date, the last full year of performance information is fiscal year 2000—three years behind.

Lack of outcome information: The lack of good information on the relationship between funding and outcomes makes it difficult to assess whether funds are allocated or reallocated effectively—in turn making it difficult to determine whether changes in funding allocations will make a difference in performance.

Reprogramming restrictions: Some agencies have reprogramming restrictions that may inhibit aligning resources to goals.

Cultural resistance to reallocations: There may be a cultural resistance to reallocating program funds to address performance issues elsewhere in the agency.

Ad hoc approaches often used: The agencies represented at the panel generally have not chosen to integrate performance information with budget and spending data, and when they have, they have used ad hoc approaches.

Crosswalks of limited use: Some found that building crosswalks between budget accounts and agency goals was of limited use to the agency and appropriators because funding decisions were keyed to the functions or organizations in the budget instead of agency goals.

Planning and budget functions not integrated: Agencies that have not integrated their planning and budget functions may have difficulty aligning budget and planning information and providing integrated guidance to program managers or other staff offices on performance and budgeting issues.
Information systems not always available: For example, budget officials are accustomed to producing timely and useful information on spending against plans, but performance information reports are more sporadic and not easily linked to spending information.

Cost of information systems may be prohibitive: The expense of developing and implementing new information systems might be prohibitive because funding for information technology initiatives is difficult to obtain.

Indirect costs are difficult to attribute to goals: It can be difficult to attribute indirect costs, such as information technology or rent, to agency goals.

Quality of agency estimates not always basis for decision-making: It may not be useful to try to perfect spending estimates, particularly for budget formulation, because the department or OMB can reduce funding regardless of the quality of the estimates.

Agencies may lack capacity for program evaluations: Agencies may lack the capacity and resources to perform their own program evaluations because such evaluations can be costly and time consuming and staff may not be available to do them.

Alternative revenue sources may be unavailable: The availability of alternative revenue sources may be limited for some agencies because they are restricted by statute from charging fees and have difficulty persuading the Congress to adopt user fees.

Difficult to evaluate programs implemented by third parties: Agencies, such as regulatory agencies, that rely in part on third parties to accomplish their goals may have more difficulty evaluating the effectiveness of funds spent because the agency has limited control over the actions of the third parties.
GAO analyzed federal budget practices to produce a framework that can guide an agency toward incorporating performance information into the budget process. GAO also reviewed challenges to implementing results-oriented budget practices that were identified by a panel of agency budget officials.
The Federal National Mortgage Association (Fannie Mae) and the Federal Home Loan Mortgage Corporation (Freddie Mac) (referred to in this report jointly as the enterprises) are government-sponsored enterprises that play important roles in federal support of home ownership and America's housing finance system. The primary role of the enterprises is to ensure that mortgage funds are available to home buyers in all regions of the country at all times. Congress has asked us to study the desirability and feasibility of repealing the federal charters of the enterprises, eliminating any federal sponsorship of the enterprises, and allowing the enterprises to continue to operate as fully private entities.

The enterprises help ensure that mortgage funds are available to home buyers by buying mortgages from mortgage originators, such as savings and loans (thrifts), commercial banks, and mortgage bankers. The enterprises hold some of these mortgages in portfolio as direct investments on their own books and issue debt and equity securities to finance these holdings. Most mortgages that the enterprises buy from mortgage originators are "securitized"—that is, the enterprises package them into mortgage pools to support mortgage-backed securities (MBS). These mortgage pools receive the interest and principal payments from the mortgages in the pools and pass them on to the investors who purchased the MBS. The enterprises guarantee the timely payment of principal and interest from the mortgages in the pools to the investors and administer the payments. In September 1995, Fannie Mae and Freddie Mac either owned in portfolio or guaranteed about $1.3 trillion of the $3.9 trillion of outstanding residential mortgages in the United States.

The enterprises are government-sponsored in that they operate under federal charters that convey certain benefits, impose certain restrictions, and permit the enterprises to earn a profit while serving public policy purposes, such as providing liquidity to mortgage markets. In 1992, Congress expanded the enterprises' public purpose by requiring annual goals, set, monitored, and enforced by the Department of Housing and Urban Development, for the purchase of mortgages on housing for very low-, low-, and moderate-income households and other households that are underserved by the residential mortgage market. The enterprises' charters exempt them from certain fees and taxes paid by other private sector firms. At the same time, the charters restrict the enterprises to buying mortgages that do not exceed a set dollar amount, known as the conforming loan limit.

A major factor that enhances the enterprises' profitability is the financial market's perception that there exists an implied federal guarantee of their debt and other obligations (i.e., a perception that the federal government would act to ensure that the enterprises will always be able to meet their financial obligations on their debt and MBS guarantees). Investors perceive that this implied guarantee decreases the risk that the enterprises will ever fail to meet their financial responsibilities. Consequently, this perception lowers the enterprises' borrowing costs because investors are willing to accept lower expected returns on enterprise debt than they would for private firms without government ties. Likewise, funding costs on MBS are also lowered by this perception. Their lower funding costs allow the enterprises to increase their purchases and give them a cost advantage over competitors.
This perception of a federal guarantee remains even though the laws chartering the enterprises contain explicit language stating that there is no such guarantee. The perception of the implied guarantee is based on special federal ties to the enterprises, including government-sponsored status, each enterprise's $2.25 billion conditional line of credit with the Treasury Department, and a belief that the federal government would consider such large institutions too big to fail. The federal charter also provides several explicit provisions that lower operating costs for the enterprises. For example, certain fees paid by other corporations to the Securities and Exchange Commission (SEC) are not levied against the enterprises since the enterprises do not need to register their issuances with the SEC. They are also exempt from state and local income taxes. In addition, they can use the Federal Reserve's electronic payments system for transactions. These privileges, plus the conditional line of credit with the Treasury, reinforce the market's perception that the government will not let the enterprises fail. Given the lower funding costs created by this perception and the lower operating costs created by certain privileges and exemptions, the enterprises have cost advantages over any potential direct competitor.

The mortgage market is made up of primary and secondary parts, and many institutions serve several roles within the overall market, as shown in tables 1.1 and 1.2. Consequently, institutions sell to, buy from, and compete with each other. As shown in table 1.2, the enterprises function as conduits and guaranteeing agencies in the secondary mortgage market. In the primary market, the home buyer applies to an originator for a mortgage. The originator can be a depository, such as a bank or thrift, or a mortgage banker. Traditionally, depositories originated mortgages and held them as direct investments in portfolio on their books. Their profits from holding mortgages were the difference between interest earned from the mortgages and their costs of funds, primarily interest paid to depositors, after adjusting for other expenses. Mortgage bankers originate mortgages for immediate resale in the secondary market. They earn profits primarily from two sources. The first source is fees charged to originate mortgages and profits from the sale of mortgages (losses can also result from such sales). The second source is fees investors pay to mortgage bankers for "servicing" mortgages—collecting and processing mortgage payments. In recent years many depositories have also acted like mortgage bankers in that they originate mortgages and sell them to investors rather than hold them on their books. Mortgage insurers improve the liquidity of the market by compensating investors for losses caused by mortgage defaults—losses created when the net sales price of the house after foreclosure does not cover the outstanding balance on the mortgage. This compensation reduces risks and makes the market more liquid. FHA and VA are the primary federal government insurers. The private mortgage insurance companies provide insurance for conventional mortgages—that is, mortgages not backed by the federal government.

The secondary mortgage market channels mortgages from originators to investors. The Government National Mortgage Association (Ginnie Mae), the enterprises, and other private companies, acting as conduits, create mortgage pools and MBS that are sold to investors. From a pool of mortgages, the MBS investors receive their proportional shares of interest and principal flows.
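The pro rata pass-through of a pool's cash flows can be sketched in a few lines. The figures below are hypothetical, and the treatment of the guarantee fee is simplified; the sketch is meant only to show the proportional-share mechanics, not any particular enterprise program.

```python
# Illustrative pass-through MBS cash flow for one month (hypothetical figures).
pool_balance = 100_000_000        # outstanding principal of the mortgage pool
interest_collected = 650_000      # interest received from homeowners this month
principal_collected = 180_000     # scheduled principal plus any prepayments
guarantee_fee = pool_balance * 0.0020 / 12   # assumed 20 basis points per year

# Each investor receives a share proportional to the MBS holding.
investor_share = 0.01             # an investor holding 1 percent of the pool
cash_flow = investor_share * (interest_collected + principal_collected - guarantee_fee)
print(f"Cash passed through to a 1% holder: ${cash_flow:,.2f}")
```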
Private-label MBS are created by fully private (nongovernment-sponsored) conduits. As of September 1995, private-label MBS totaled about 13 percent of outstanding MBS. The mortgages that these private conduits securitize either exceed the enterprises' conforming loan limit—$207,000 on one-unit, single-family properties—or do not meet the enterprises' underwriting standards. A loan that does not meet the underwriting standards of either enterprise or that exceeds the conforming loan limit is called a nonconforming loan. A loan that exceeds the conforming loan limit is called a jumbo loan.

Guarantees on MBS enhance the liquidity of the secondary market. Ginnie Mae guarantees timely payment of principal and interest for mortgage pools of FHA- and VA-insured mortgages for a fee. The enterprises and private-label conduits guarantee timely payment of principal and interest on conventional mortgages in pools backing their MBS. The guarantees are an enhancement that reduces the risk that any given mortgage will not be paid on a timely basis. Private-label conduits generally use risk-based guarantee fees, which are based on the expected incremental cost of guaranteeing a particular level of credit risk exposure for the investor. For example, the conduits charge lower fees on mortgages with large down payments (i.e., mortgages with low loan-to-value ratios) than on loans with small down payments. The enterprises said that their mortgage commitment policies move them partially, but not fully, toward a risk-based fee structure. Private-label conduits may enhance the liquidity of their MBS with other credit enhancements. Private mortgage insurance is a common form of credit enhancement to reduce risk. Another common private-label credit enhancement is over-collateralization. This means that the principal amount of the mortgages backing the MBS exceeds the dollar amount of the MBS shares sold to investors. Other forms of credit enhancement for the private-label MBS include bank letters of credit, corporate guarantees, and private insurance of mortgage pools.

The cash flows to investors generated by interest and principal payments from any pool may vary over time. As interest rates fall, households tend to prepay principal more quickly; the resulting uncertainty for investors is called prepayment risk. As interest rates increase, prepayments tend to slow down and cash flows to investors decline, since homeowners are refinancing or selling their houses more slowly. This tendency is called extension risk. To address the need for more predictable cash flows, the enterprises and private-label conduits issue multiclass mortgage securities called collateralized mortgage obligations (CMOs) and Real Estate Mortgage Investment Conduits (REMICs). These multiclass securities can help investors better manage prepayment and extension risks by creating from the same mortgage pool several securities that receive different parts of the pool's interest and principal payments. Investors more concerned about variations in cash flows over time can buy the classes that pay off more quickly or have a fixed payment period. Investors more willing to undertake prepayment and extension risks can buy classes with payments that vary with interest rates. The expected return on classes with prepayment and extension risks exceeds the expected return on classes without such risk.
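A sequential-pay structure, the simplest multiclass design, shows how the same pool can support classes with different prepayment and extension exposure. The class sizes and payment below are hypothetical, and real CMOs and REMICs add interest allocation and many more classes; this is only a sketch of the principal-routing idea.

```python
# Illustrative sequential-pay CMO: pool principal retires class A before class B,
# so class A pays off faster and bears less extension risk (hypothetical figures).
classes = [["A", 40_000_000], ["B", 60_000_000]]  # [class name, remaining balance]

def distribute_principal(payment):
    """Route one month's pool principal (scheduled plus prepaid) sequentially."""
    for cls in classes:
        if payment == 0:
            break
        applied = min(payment, cls[1])
        cls[1] -= applied
        payment -= applied
        print(f"Class {cls[0]}: received ${applied:,.0f}, balance ${cls[1]:,.0f}")

distribute_principal(1_500_000)   # a heavy-prepayment month pays down class A first
```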
Multiclass securities can also redistribute credit risk so that one class can be designed to absorb all or much of the credit risk in return for a higher expected return. Because multiclass securities bring new investors who wish to avoid unpredictable cash flows into the market, they improve the market's liquidity and help ensure continuing funding for home mortgages. The new investors that multiclass securities have attracted include banks, thrifts, pension funds, insurance companies, and other financial institutions as well as individuals who originate, buy, hold, or sell whole mortgages.

Tables 1.3 and 1.4 show the different sectors of the housing finance system. The total of all residential mortgage debt in September 1995 was $3.9 trillion. About 6 percent was held by the enterprises in portfolio, 33 percent was held in portfolio by wholly private financial institutions, 45 percent was securitized and held by various types of investors in MBS, 1 percent was held by the federal government or related agencies, and the rest (about 15 percent) was held by individuals and other investors. Commercial banks and thrifts were significant holders of whole mortgages and MBS. In September 1995, banks held about 33 percent of whole mortgages and savings and loans about 26 percent. At year-end 1995, commercial banks held 20 percent of all MBS and thrifts held about 10 percent of MBS. Other major investors included life insurance companies and mutual funds.

The enterprises have evolved since Congress created Fannie Mae in 1938 to remedy the housing market effects of the Great Depression and chartered Freddie Mac in 1970. Modifications in their charters have occurred as the result of changing economic conditions and government policies. In the wake of the Great Depression of the 1930s, the federal government took steps to revive the economy, stabilize financial markets, and ensure mortgage markets were liquid. The government's response concentrated on the savings and loan industry, which was then the backbone of the housing finance system. Congress created a thrift regulator to ensure the safety and soundness of the thrift industry; to bolster consumer confidence and keep deposits flowing into the thrifts, it created a deposit insurance system. In addition, Congress created the Federal Home Loan Banks, which borrowed in capital markets and made loans to thrifts so that they could continue to fund and originate mortgages. To further support housing, Congress created FHA, which insured mortgages originated by private financial institutions and reduced credit risk for investors. Congress also authorized the establishment of private mortgage associations to create a secondary market for mortgages. Because private mortgage associations did not develop, Congress chartered Fannie Mae in 1938 as a government-held association to buy and hold mortgages insured by FHA. Later it was authorized to purchase VA-insured mortgages. In its early years, Fannie Mae was part of the Reconstruction Finance Corporation and subject to the regulation of the Federal Housing Administration. Modifications in Fannie Mae's structure occurred during the postwar period without changing its fundamental mission. In the early post-World War II period, Congress articulated Fannie Mae's purposes as providing liquidity and special assistance for selected housing types, supporting the mortgage market, and stabilizing the economy. Fannie Mae's mortgage purchases increased substantially during most of the 1950s.
During the late 1950s through the mid-1960s, Fannie Mae sold mortgages when other sources of credit were readily available or purchased mortgages when credit was tight. After 1968, Fannie Mae's, and later Freddie Mac's, portfolios grew. In the Housing and Urban Development Act of 1968, Congress split Fannie Mae into two components. One component, Ginnie Mae, remained in HUD to provide support to FHA, VA, and special assistance programs. The other component was the government-sponsored, privately owned, for-profit Federal National Mortgage Association, which was to be concerned exclusively with attracting funding into residential mortgages. Thus, the newly private, yet government-sponsored, Fannie Mae continued to provide a secondary residential mortgage market and was governed by a board of directors dominated by its private sector owners, with a minority of its members (5 of 18) appointed by the President. Fannie Mae was regulated by the Department of Housing and Urban Development in terms of capital requirements and approval of new mortgage acquisition programs.

Ginnie Mae and Fannie Mae operated differently. Ginnie Mae did not purchase mortgages. Instead, it "guaranteed the timely payment of principal and interest" from pools of FHA- and VA-insured mortgages originated by mortgage bankers and other financial institutions. In contrast, Fannie Mae operated as a large portfolio investor. It bought mortgages from originators and financed these investments by selling debt and equity in the financial markets. Congress permitted Fannie Mae to develop a secondary market for conventional loans to counter periodic scarcities of mortgage credit in different regions of the country during different parts of the business cycle. Consequently, Fannie Mae helped counter a scarcity of mortgage credit during the late 1960s and early 1970s, when interest rates paid by thrifts and other depository institutions were capped—sometimes below market levels. In response to these below-market rates, depositors withdrew funds and looked for higher returns elsewhere. As funds were withdrawn, thrifts were unable to originate or fund mortgages. At the same time, other originators such as mortgage bankers were able to originate mortgages at market rates and sell them to Fannie Mae. Since Fannie Mae did not have an interest rate cap, it could raise funds at market rates and thus continue to purchase mortgages at current market rates from all originators.

Congress chartered Freddie Mac in 1970 in reaction to the loss of deposits in the savings and loan industry that was curtailing that industry's ability to fund and originate home mortgages. Its creation ensured that the savings and loan industry had continued access to funds for financing mortgages. Freddie Mac was first owned by the Federal Home Loan Bank Board, which regulated savings and loans, helped fund their operations through the Federal Home Loan Banks, provided deposit insurance to the thrifts through the Federal Savings and Loan Insurance Corporation, and liquidated insolvent thrifts. Freddie Mac mostly securitized the mortgages that it purchased and guaranteed timely interest and principal payments from the resulting mortgage pools. Originally, the enterprises and FHA had identical conforming loan limits for mortgages they could purchase or guarantee. In 1974, Congress raised the conforming loan limit for both enterprises above FHA's limit.
Consequently, Fannie Mae and Freddie Mac could buy an increasing share of mortgages that were not provided by, or guaranteed by, the federal government. In 1981, Congress created a formula for adjusting the conforming loan limit to account for the effects of inflation on house values. A three-tiered secondary mortgage market evolved in the late 1980s. Ginnie Mae primarily served a tier of lower value FHA and VA mortgages. The enterprises primarily served a middle tier of larger mortgages. The private-label conduits served a tier of jumbo loans—loans with principal amounts that exceeded the conforming limit—and other conventional, nonconforming mortgages.

In the early 1980s, Fannie Mae and Freddie Mac experienced different financial results as short-term interest rates increased. Fannie Mae held mortgages in portfolio and funded them with short-term debt. As rates increased, Fannie Mae had to issue new short-term debt at higher rates to replace existing short-term debt that came due. Because interest earned on the old mortgages in portfolio was less than interest expenses on the newly issued debt, Fannie Mae experienced total losses of about $277 million between 1981 and 1984. In response to Fannie Mae's financial problems, the federal government provided limited tax relief and regulatory forbearance in the form of relaxed capital requirements. Unlike Fannie Mae, Freddie Mac held few mortgages in portfolio and issued little debt to fund mortgage holdings. Rather, it created MBS and sold them to investors. Consequently, the investors, and not Freddie Mac, bore the risks of changing interest rates. To avoid future losses from interest rate changes, Fannie Mae partially adopted Freddie Mac's strategy of issuing MBS and passing interest rate risk to investors.

The unexpected increase in interest rates in 1979 through 1981 that created problems for Fannie Mae also contributed to the failure of many thrifts in the 1980s. As interest rates rose, many thrifts became unprofitable, and some thrifts hoping to regain profitability undertook risky investments as their losses grew. In many of these cases, such actions accelerated and increased losses to the thrift deposit insurance fund, the Federal Savings and Loan Insurance Corporation (FSLIC). At the same time, FSLIC did not have the resources to close all insolvent thrifts. As the weakened thrifts deteriorated further, closure costs continued to increase. In 1989, Congress abolished the Federal Home Loan Bank Board and dispersed its functions to other agencies. The Office of Thrift Supervision became the regulator of federally chartered savings and loans. Freddie Mac became a government-sponsored enterprise owned by private investors. Deposit insurance for thrifts went to the Savings Association Insurance Fund under the Federal Deposit Insurance Corporation (FDIC), and the Resolution Trust Corporation (RTC) was created to close and liquidate insolvent thrifts that were still open when RTC was created. As thrifts failed and the thrift industry's originations and holdings of mortgages decreased, mortgage bankers originated more mortgages and mortgage conduits increased their issuance of MBS. As shown in figure 1.1, the importance of mortgage bankers as originators increased as that of thrifts decreased. In 1982, thrifts originated 35.9 percent of all mortgages on 1-4 family units (commonly called single-family units); however, their share had dropped to 15.9 percent by 1994.
In 1982, mortgage bankers originated 28.9 percent of all mortgages for single-family units, and by 1994, their share had increased to 52.8 percent. Mortgage originations were no longer strongly tied to the thrift industry. Not only did thrifts become less prominent as originators, they also held less mortgage debt directly in portfolio. As shown in figure 1.2, the conduits, and especially the enterprises, became an increasingly important mechanism for channeling residential mortgage funds. In 1982, thrifts held in portfolio 36.6 percent of all outstanding mortgages on single-family units (their holdings of MBS were not reported). By 1994, thrifts held directly in portfolio only 14.3 percent of outstanding mortgages on single-family units. Much of this shrinkage of direct mortgage holdings was accounted for by the growth of the enterprises' activities. By 1994, the enterprises held in portfolio 6.7 percent of all mortgages on single-family units, and their MBS represented 29.5 percent of the outstanding single-family unit mortgages. However, the thrifts continued to hold mortgages indirectly since they held MBS created by the enterprises and other conduits.

The effect of the 1992 Act, in combination with the GSE-related provisions in the Financial Institutions Reform, Recovery, and Enforcement Act of 1989 (FIRREA), was to make the charters of the enterprises substantially the same. Provisions of the enterprises' charters, which remain in force today, include the following broad public policy purposes:

provide stability in the secondary market for residential mortgages;

respond appropriately to private capital markets;

provide ongoing assistance to the secondary market for residential mortgages (including activities relating to mortgages for very low-, low-, and moderate-income households involving a reasonable economic return that may be less than the return earned on other activities) by increasing the liquidity of mortgage investments and improving the distribution of investment capital available for mortgage financing; and

promote access to mortgage markets throughout the nation (including central cities, rural areas, and underserved areas) by increasing the liquidity of mortgage investments and improving the distribution of investment capital available for residential mortgage financing.

In the 1992 Act, Congress created the Office of Federal Housing Enterprise Oversight (OFHEO). OFHEO was to regulate the enterprises for safety and soundness and set their capital standards, and HUD was authorized to establish, monitor, and enforce mortgage purchasing goals for the enterprises. Although the enterprises are privately owned and for profit, their charters impose restrictions and confer benefits that affect their ability to make profits. The enterprises are specifically authorized to deal in conventional residential mortgages under the conforming loan limit; other kinds of business are not so authorized. Other restrictions include goals set by the Secretary of HUD for the dollar volume of mortgages that the enterprises must purchase from very low-, low-, and moderate-income households and underserved rural and urban areas. The benefits provided to each enterprise include a $2.25 billion conditional line of credit with the U.S.
Treasury; an exemption from paying state and local corporate income taxes; an exemption from registering their securities with the Securities and Exchange Commission (SEC), which means they do not pay SEC fees; and the ability to use the Federal Reserve as a transfer agent, which enhances the enterprises' operating efficiency.

Although operating with the restrictions and benefits established by the government, the enterprises have been consistently profitable since the mid-1980s. (See table 1.5.) As of year-end 1995, Fannie Mae's assets exceeded $316 billion, and Freddie Mac's $137 billion. In addition, Fannie Mae's outstanding mortgage holdings exceeded $252 billion, and Freddie Mac's exceeded $107 billion. Although Freddie Mac historically retained relatively fewer mortgages than Fannie Mae, Freddie Mac has in recent years increased its share of mortgages held in portfolio. At year-end 1995, Fannie Mae's mortgage holdings were about 80 percent of its total assets and 23 percent of its total mortgage servicing portfolio—the sum of mortgage holdings and MBS outstanding. Freddie Mac's mortgage holdings were about 78 percent of its total assets and 19 percent of its total servicing portfolio. In 1995, Fannie Mae's return on equity was 19.53 percent and Freddie Mac's was 18.60 percent. In 1995, Fannie Mae's equity ratio (equity divided by the sum of total assets and MBS outstanding) was 1.32 percent, and Freddie Mac's was 0.98 percent.

While earning profits, Fannie Mae and Freddie Mac must deal with four major types of risk: business, interest rate, credit, and management risk. Business risk is the possibility of financial loss due to conditions within the market or markets in which a firm operates. Because the enterprises serve the secondary market for conforming mortgages, their financial health depends on the factors that create a healthy secondary market for such mortgages. If profits decline or risks increase in this limited market, the enterprises cannot avoid associated problems by exiting their current market and entering new markets.

Interest rate risk is the possibility of financial loss due to changes in market interest rates. Movements in market interest rates can affect interest expenses, interest earnings, prepayments by homeowners, and the value of assets and liabilities on the balance sheet. Rising market interest rates increase interest expenses as debt turns over and decrease the value of existing assets that are paying a below-market rate. As discussed earlier, Fannie Mae experienced this problem in the early 1980s as its interest expenses increased and interest earnings on its existing pool of mortgages were relatively constant. When market interest rates decline, homeowners tend to prepay mortgages more quickly, resulting in a decrease in the net average interest rate received by the enterprises on mortgages held in portfolio. The net rate decreases even if new lower rate mortgages are bought by the enterprises as long as the interest rate paid on outstanding debt does not change. At the same time, if prepaid mortgages are not replaced with new lower rate mortgages, the enterprises' outstanding debt balance could exceed their mortgage balances. Whether or not the enterprises replace prepaid mortgages with new lower rate mortgages, they face interest rate risk.
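The short-funding problem described above can be sketched with hypothetical balances and rates. As short-term debt reprices upward against a fixed-rate mortgage portfolio, net interest income shrinks and can turn negative; the numbers below are illustrative only, not the enterprises' actual positions.

```python
# Illustrative interest rate mismatch: fixed-rate mortgages funded with
# short-term debt that reprices as it rolls over (hypothetical figures).
mortgages = 80e9          # fixed-rate mortgage portfolio, dollars
mortgage_rate = 0.095     # average rate earned on the portfolio
debt = 78e9               # short-term debt funding the portfolio

for short_rate in (0.09, 0.11, 0.14):   # rates rising, as in 1979-1981
    net_interest_income = mortgages * mortgage_rate - debt * short_rate
    print(f"Debt reprices to {short_rate:.0%}: "
          f"net interest income ${net_interest_income/1e9:+.2f} billion")
```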
The enterprises limit interest rate risk in several ways. First, the enterprises avoid interest rate risk by passing it to investors when they create MBS. Second, both enterprises limit interest rate risk by issuing callable bonds that can be paid off early if rates fall. By calling the bonds and issuing new debt as interest rates fall, the enterprises curtail interest rate expenses. Conversely, if rates increase, the enterprises continue to pay below-market rates on their existing bonds. Callable bonds are one example of how the enterprises manage their liabilities to hedge interest rate risk associated with their asset holdings. The enterprises have also developed other methods, including certain derivative products, to control their interest expenses as the economy varies.

Credit risk is the possibility of financial loss resulting from default by homeowners on housing assets that have lost value. Credit risk on mortgages is the possibility that mortgages will go into default, and the net recoveries from selling the property and collecting private mortgage insurance will not cover outstanding balances. This risk occurs when the enterprises hold mortgages in portfolio and when they guarantee principal and interest payments to investors in their MBS. Primary determinants of credit risk are the homeowner's payment burden, the homeowner's creditworthiness, the size of the down payment, and the existence of private mortgage insurance. The first three factors affect whether the applicant can and will make timely mortgage payments. The size of the down payment and the existence of private mortgage insurance (PMI) affect the size of any loss in the event of a default. A larger down payment and PMI increase the likelihood that the house can be sold after foreclosure for an amount that is sufficient to recover the outstanding mortgage balance.

Management risk is the possibility of financial loss resulting from a management mistake that can threaten the company's viability. Careful oversight by the company's board, stockholders, financial markets, and regulators can help ensure that management risk is adequately controlled.

Section 1355 of the 1992 Act mandated us, the Congressional Budget Office (CBO), the Department of Housing and Urban Development (HUD), and the Department of the Treasury to separately study and report on "the desirability and feasibility of repealing the federal charters of the Federal National Mortgage Association and the Federal Home Loan Mortgage Corporation, eliminating any federal sponsorship of the enterprises, and allowing the institutions to continue to operate as fully private entities." This report is our response to that mandate. We and the other agencies were directed to examine the effects of privatization on the requirements imposed upon and costs to the enterprises, the cost of capital to the enterprises, housing affordability and availability and the cost of homeownership, the level of secondary mortgage market competition subsequently available in the private sector, whether increased amounts of capital would be necessary for the enterprises to continue operation, the secondary market for residential loans and the liquidity of such loans, and any other factors each of the agencies deemed appropriate. In addition to the legislative mandate, we had discussions with staff of the Subcommittee on Capital Markets, Securities, and Government Sponsored Enterprises of the House Committee on Banking and Financial Services. In these discussions staff asked us to evaluate alternative policies other than privatizing the enterprises.
To respond to the mandate and the Subcommittee staff request, we developed a list of the economic behaviors most likely to be affected by privatization, assessed how well such adjustments can be quantified, and analyzed the probable outcomes resulting from privatization. The results of our analysis are presented in this report in terms of three principal objectives. These objectives were to assess the potential effects of privatization on (1) the enterprises; (2) residential mortgage markets in general; and (3) housing finance, homeownership, and housing affordability for very low-, low-, and moderate-income families and residents of underserved areas in particular. In addition, we identified and analyzed, in response to the Subcommittee's subsequent request, four policy alternatives that Congress could consider to limit the enterprises' potential risk to taxpayers or increase their social benefits.

To determine how the enterprises and housing finance markets would react to privatization of the enterprises, we reviewed academic, professional, and business literature on the role of the enterprises in mortgage markets. This review identified several ways the enterprises could be affected by privatization and how markets may evolve and change. We interviewed market participants, such as mortgage bankers, private mortgage insurers, mortgage security underwriters, bond rating agencies, and private-label mortgage conduits, to gain their insights into how the market might perform if the enterprises were to be privatized. We interviewed additional individuals with expertise in mortgage markets, including analysts at the Federal Reserve Board, and current and former HUD staff members. We also interviewed representatives from the enterprises to obtain their perspectives on the effects of privatization.

We participated with CBO, HUD, and the Treasury in commissioning five studies on different aspects of privatization. The authors of these studies presented findings at seminars attended by representatives of the four agencies and the enterprises as well as discussants who were invited to provide comments. We had extensive interactions with the authors, both within and outside of the seminars, to evaluate their methodologies and results as needed. We did not, however, verify their data. We used the studies, the material discussed at the seminars, and comments prepared by the discussants and the enterprises as an additional source of information in preparing this report. The studies and written comments by discussants and Fannie Mae will be published by HUD in Studies on Privatizing Fannie Mae and Freddie Mac (forthcoming May 1996) and do not necessarily reflect the opinions of GAO or the other agencies. We also relied on data and information contained in annual and investor analyst reports over the past 6 years published by the enterprises, information statements and prospectuses provided by the enterprises and private-label conduits, studies and statistical tabulations provided by the enterprises, and other information provided by parties we interviewed. We obtained documentation and evaluated the data and information as needed, but we did not verify these data.

We conducted our work in Washington, D.C., from March 1994 through December 1995 in accordance with generally accepted government auditing standards. In addition, we provided copies of the draft of this report to the Chairmen of Fannie Mae and Freddie Mac.
On April 26, 1996, we met separately with senior enterprise officials, who included senior vice-presidents from each enterprise, and they provided oral comments, which are presented and discussed on pages 49-53, 70-77, and 94-97. One Freddie Mac official said that we relied on work performed by others that we did not verify, and therefore we should make clear when estimates by others were used. We have clarified how we evaluated and relied upon the five studies on different aspects of privatization as well as the data and information supplied by the enterprises and others.

Assuming that privatization eliminates the perception by investors of an implied federal guarantee of the enterprises' financial obligations as well as explicit charter benefits, the enterprises' overall annual costs would increase substantially. Based on the 1995 financial statements and operations of the enterprises, total cost increases on a pretax basis could have been in the range of $2.2 billion to $8.3 billion. The largest increase, probably in the range of $1.3 billion to $4.4 billion, would likely have been in an expense that has represented in recent years more than two-thirds of total expenses of each of the enterprises—the interest the enterprises pay on their debt securities. Without the perception of an implied federal guarantee, investors would likely require higher interest rates on the enterprises' debt securities to make up for the perceived increase in risk. For the same reason, the enterprises would also have higher funding costs on the MBS they issue. In addition, increased overhead and operating expenses would result from the elimination of the enterprises' exemption from SEC registration requirements and state and local corporate income taxes. The increased costs would likely lead the enterprises to change their operating strategies and activities so that they would probably resemble more closely the strategies of private-label conduits. In addition, they could enter into new lines of business, both within and outside of the housing finance industry. On the other hand, if the markets' perception about the implied guarantee does not change, or changes very little, the effect of privatization on the enterprises' costs would be limited largely to expenses related to SEC registration requirements and state and local corporate income taxes. The primary effect of privatization in this case may be the enterprises' increased opportunities to enter new lines of business. Therefore, the effect of privatization largely depends on the markets' perception of the riskiness of the enterprises' debt securities and MBS following privatization.

As shown in the 1995 income statements of the enterprises (see table 2.1 for summary information), interest income and expenses dominated the enterprises' finances. In 1995, the enterprises' major sources of income were interest earned on mortgages retained in portfolio and guarantee fees on MBS. Interest income provided 94.7 percent of Fannie Mae's total revenue and 88.2 percent of Freddie Mac's. Interest income was relatively higher at Fannie Mae because it retained in portfolio a larger proportion of the mortgages it had bought—33 percent, compared to Freddie Mac's retention of 18 percent (see tables 2.2 and 2.3). Expenses were dominated by interest paid on debt securities. As shown in table 2.1 (under Total interest expenses), this expense represented 81.0 percent of Fannie Mae's total revenues and 73.5 percent of Freddie Mac's.
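Because interest on debt securities dominates the enterprises' expenses, even a modest change in the yields investors require translates into large dollar amounts. The sketch below uses hypothetical round numbers, not the report's estimates, to show the arithmetic.

```python
# Illustrative mapping from a yield spread to added annual interest expense.
# Balance and spread are hypothetical round numbers.
debt_outstanding = 280e9   # combined enterprise debt securities, dollars
added_spread_bp = 50       # extra yield demanded after privatization, basis points

added_cost = debt_outstanding * added_spread_bp / 10_000
print(f"Added pretax interest cost: ${added_cost/1e9:.1f} billion per year")
```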
Debt securities are the enterprises' primary source of financing. For this reason, the enterprises' funding costs are driven primarily by the interest paid on debt securities. Due to the importance of interest income and expense in the financial condition of the enterprises, the most important advantage of the enterprises' government-sponsored status is the perception of financial market participants that the federal government is likely to act to ensure that the enterprises will meet their debt and MBS obligations. The perceived federal guarantee lowers the enterprises' funding costs in two primary ways. First, it decreases perceived risk for investors in the enterprises' debt and MBS; this lowers the funding costs that the enterprises must pay. Consequently, the enterprises pay interest rates on their debt that are above the rates that the Treasury pays and below the rates paid by highly rated financial corporations on similar debt. The second way that the perceived federal guarantee lowers the enterprises' funding costs is that it decreases the extent to which the enterprises must fund themselves with relatively more expensive equity capital—the difference between assets and liabilities. Equity serves as a financial cushion that can absorb financial losses in bad years. Investors in a corporation's debt require this cushion because it can help ensure the continued operation of the company when downturns occur. The amount of equity a firm needs to maintain a high debt rating depends on financial risk; if risk is relatively high, equity must be correspondingly high. Because the perceived federal guarantee lowers investors' perceived financial risk, the enterprises are able to hold less equity and fund more of their operations through issuing debt securities, compared to potential private competitors.

A further advantage of government sponsorship is that bond rating agencies and bank regulators consider the enterprises issuers of low-risk debt on the basis of their perceived government ties. This ensures that the enterprises' debt securities and MBS can be bought and held by a large class of investors that must invest in high-grade securities. These investors include banks, insurance companies, and other regulated institutions, which provide a ready and consistent outlet for enterprise debt and MBS. The last funding advantage is that most investors realize that the very size of the enterprises ensures a ready market for reselling enterprise debt securities and MBS. Government sponsorship does not in itself guarantee large size. However, the combination of a multibillion-dollar mortgage market, the financial cost advantage arising from the perception of government backing, and the fact that only two organizations have been granted these advantages contribute to the enterprises' size. This marketability or liquidity further lowers the enterprises' funding costs since investors know they can readily resell the securities if they need cash quickly. Consequently, investors do not require higher interest rates on enterprise debt issuances due to the risk that they cannot be resold in a liquid market. The large size of the enterprises' operations may also lower their average operating costs per MBS or per mortgage due to economies of scale.
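The second advantage, operating with a thinner equity cushion, can be illustrated with a simple blended funding cost. The required returns below are hypothetical; the point is only that because equity investors demand higher returns than debtholders, a firm permitted to hold less equity funds itself more cheaply.

```python
# Illustrative blended funding cost at different equity shares (hypothetical rates).
cost_of_debt = 0.065      # assumed yield demanded by debtholders
cost_of_equity = 0.15     # assumed return demanded by equity investors

for equity_share in (0.01, 0.04, 0.08):
    blended = equity_share * cost_of_equity + (1 - equity_share) * cost_of_debt
    print(f"Equity at {equity_share:.0%} of funding: blended cost {blended:.2%}")
```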
Other benefits that derive directly or indirectly from the federal charters and lower the enterprises' operating costs include a conditional line of credit of up to $2.25 billion available for each enterprise from the U.S. Treasury at Treasury's discretion; exemption from registering securities or paying fees to SEC; the ability to issue debt and MBS that the Federal Reserve, other bank regulators, and bond rating agencies consider high-quality, low-risk paper; and use of the Federal Reserve as a transfer agent, which enhances operating efficiency. The combined funding and operating cost advantages, along with any additional efficiencies arising from sound management practices, help ensure that the enterprises are the lowest cost participants in the secondary conforming mortgage market. In effect, the advantages flowing from government sponsorship make it difficult if not impossible for other companies to compete in the secondary market for conforming loans.

Assuming that privatization causes the market to no longer perceive an implied guarantee by the government, or to perceive it as substantially weakened, the market would in turn likely demand higher yields on the enterprises' debt and MBS. We used the 1995 financial statements and operations of the enterprises to estimate the dollar benefits of government sponsorship in funding costs on debt securities and MBS. The extent to which the savings of sponsorship flow to the enterprises, borrowers, and investors is unknown; we discuss impacts on borrowers in chapter 3. We estimated, using conservative measures of the enterprises' funding advantages resulting from government sponsorship, that the total benefit in reduced interest costs the enterprises paid in 1995 on debt securities was in the range of $893 million to $1.3 billion, with the amount depending upon how the enterprises would treat cost increases resulting from privatization on their federal tax returns. We estimate the total combined benefit in funding costs the enterprises received on MBS was in the range of $343 million to $486 million. We also estimated, using higher measures of the enterprises' funding advantages, that the total benefit on debt was in the range of $3.2 billion to $4.4 billion and was in the range of $2.4 billion to $3.4 billion on MBS.

In one of the studies done for this project, Ambrose and Warga estimated the current funding advantage of government sponsorship for enterprise fixed-rate debt. They used two approaches. In their first approach, they estimated how much lower the enterprises' average current interest rate is than the average interest rate on similar debt issued by their potential competitors. On the basis of yield data, they estimated that the enterprises paid on average about 0.37 percent less on noncallable debt and about 0.63 percent less on callable debt from 1985 to 1994. They also made estimates for the more current 1991 to 1994 time period, using different A, double-A, and triple-A rated corporations as benchmarks. The estimated funding advantage on callable debt for the 1991 to 1994 period ranged from 0.8 to 1.06 percent. The enterprises' interest rates, however, were higher than rates on U.S. Treasury debt. This difference suggests uncertainty in the market's perception that the government is likely to rescue the enterprises if they failed. In the second approach, Ambrose and Warga evaluated how differences in cash flows and returns over time between debt and equity issued by the enterprises and other borrowers may have affected cost of capital differentials.
They concluded, on the basis of this approach, that if the enterprises had to issue debt with characteristics similar to debt issued by potential A-rated competitors, their cost of funds would have increased by about 1.5 percentage points. Ambrose and Warga also compared average differences in investor yields between enterprise and private-label multiclass MBS. Enterprise MBS had average yields that were 0.27 to 0.37 percentage points lower than private-label MBS.

Because the perception of an implied federal guarantee lowers the perceived risk of the enterprises' debt securities and MBS, investors accept lower yields on all enterprise securities and permit the enterprises to operate with less equity than they would otherwise require. A good measure of equity adequacy is the ratio of equity to all assets—the sum of book assets and MBS. (Generally, the larger the ratio, the less the likelihood that operating losses will result in the failure of the entity.) In 1995, Fannie Mae's ratio of equity to all assets was 1.3 percent, and Freddie Mac's was 0.9 percent. (Freddie Mac's lower ratio reflects the fact that less equity needs to be held against the risks of MBS.) These equity ratios are generally lower than ratios maintained by other financial institutions that deal in mortgages and MBS. As of December 1995, OFHEO required the enterprises to meet two different minimum equity ratios: the minimum ratio of equity to retained assets and the minimum ratio of equity to off-balance sheet assets. The minimum ratio of equity to retained assets, which includes mortgages held in portfolio, was 2.5 percent; the minimum ratio to off-balance sheet assets, which includes MBS, was 0.45 percent. The enterprises' current ratios satisfy the minimums set by OFHEO. Analysts at the enterprises, mortgage market analysts at rating agencies, and private-label conduits told us that high bond ratings are desirable since they indicate that a firm poses lower risk, and investors permit lower risk firms to pay lower interest rates on their debt. However, to obtain such ratings as fully private firms, these analysts generally told us that the enterprises would probably have to increase their equity levels.

Privatization would eliminate the direct benefits conveyed by the enterprises' federal charters. The most significant of the direct charter-based benefits is probably the exemption from state and local corporate income taxes. If the enterprises had paid state and local corporate income taxes at an average rate of 8 percent in 1995 and if no other costs, capital levels, or operating strategies had changed, we estimated that this would have resulted in a combined increase in expenses for the enterprises in the range of $256 million to $367 million, again depending upon the enterprises' treatment of the increases in their federal tax returns. Expenses related to SEC registration fees, which the enterprises would also have to pay if privatized, would also be significant. If the enterprises had been required to register with SEC and pay fees in 1995 and if no other costs, capital levels, or operating strategies had changed, registration would likely have cost the enterprises SEC's statutory fee of 3.4 basis points on each dollar of long-term debt, MBS, and CMOs issued. The combined increase in expenses for the enterprises would have been in the range of $72 million to $102 million.
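The mechanics behind the SEC-fee estimate are straightforward. The issuance volume below is a hypothetical placeholder rather than the enterprises' actual 1995 figure, and the 35 percent federal rate is used only to show why deductibility produces a range of estimates.

```python
# Illustrative SEC registration fee arithmetic (hypothetical issuance volume).
fee_rate = 3.4 / 10_000    # SEC statutory fee: 3.4 basis points per dollar
issuance = 250e9           # long-term debt, MBS, and CMOs issued, dollars

gross_fee = issuance * fee_rate
print(f"Gross registration fees: ${gross_fee/1e6:.0f} million")

# The added expense would be deductible on federal returns, so the net cost is
# lower, which is one reason such estimates are expressed as ranges.
federal_rate = 0.35
print(f"Net of federal deduction: ${gross_fee * (1 - federal_rate)/1e6:.0f} million")
```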
The enterprises do not currently have to obtain ratings on their debt, MBS, and equity issuances from private rating firms. If they were privatized, they would need to obtain such ratings. We understand that rating fees average about 3 basis points (0.03 percent) on issuances but are subject to substantial discounts for large issuers. Our calculations, however, do not include an estimate of the amount of fees that the enterprises might have to pay if privatized. Determining the cost advantage of using the Federal Reserve as a transfer agent is difficult. However, using the Federal Reserve could make enterprise securities more liquid and convenient investments than they would be otherwise. Such convenience could also lower MBS issuance costs. Just as with rating fees, we did not estimate such costs. These cost increases resulting from privatization would also likely have an adverse impact on the enterprises’ market shares, profits, and stock values. The magnitude of the effect would depend on the magnitude of the cost increase. In addition, certain expenditures, such as those for compensation, could decline. The combination of potentially higher funding costs, increases in other expenses, and opportunities to expand into new business areas associated with privatization could alter the enterprises’ operating strategies. The enterprises have noted that removal of their benefits and restrictions would lead them to change their operating strategies. An important determinant of the extent and type of behavioral change would be the effect of privatization on the enterprises’ funding costs. For example, if debt costs increased substantially as a result of privatization but MBS funding costs and mortgage interest rates went up by lesser amounts, the enterprises would have strong incentives to change both the amount of mortgages they fund and the way they fund mortgages. They might decide to hold fewer mortgages in portfolio and fund a larger proportion of mortgages by issuing MBS. This possibility is discussed in more detail in chapter 3. The markets’ perception of increased credit risk in enterprise securities could also lead the enterprises to change the terms under which they securitize mortgages. The MBS issued by the enterprises could come to more closely resemble those issued by private-label conduits. In addition, the elimination of charter restrictions would provide the enterprises with expanded opportunities in nonconforming mortgages and nonmortgage securitization, as well as in areas related to secondary mortgage market lending. If the markets perceived a decline in the creditworthiness of the enterprises as a result of privatization, one response the enterprises could choose would be to alter their MBS to more closely resemble those now issued by private-label conduits. Under the current structure, the enterprises insure the creditworthiness of their MBS. Without the benefit of the market perception of an implied federal guarantee of creditworthiness, investors could require the enterprises to deal more directly with credit risks in their MBS. The funding mechanism of current private-label conduits in the jumbo market provides some information about how MBS might be structured with privatization. The enterprises provide credit enhancement for their MBS by requiring mortgage insurance on mortgages with loan-to-value ratios above 80 percent and fully insuring the remaining credit risk on most mortgages. The private-label conduits issue multiple-class MBS in which part of the credit risk is passed on to investors.
Providing credit enhancements that limit the credit risk to investors is important to the marketability and liquidity of the MBS. Without their current charter restrictions, the enterprises would be allowed to enter the current jumbo mortgage market. They would also be allowed to engage in business activities that complement their existing businesses—for example, the proprietary information technology developed by the enterprises could lead to nonmortgage securitization and the provision of automated financial transactions services. The residential mortgage market consists of a vertical stream of entities beginning with homebuyers and mortgage originators and continuing with mortgage underwriters, insurers, conduits, and investors. Privatization would allow the enterprises to enter different vertical segments of the housing finance system, such as origination and mortgage insurance, that their charters now prevent them from entering. The enterprises compile extensive information on housing and mortgage markets, including home sales prices, housing ownership turnover, and flows of mortgage credit. Currently, private-label conduits, their mortgage banking subsidiaries, and other large mortgage banking businesses are developing products such as real estate appraisal services. The enterprises, private-label conduits, and many mortgage banking businesses have developed expertise in hedging the interest rate risks associated with providing mortgage commitments before funding. The enterprises have also developed this expertise as it applies to funding long-term, fixed-rate mortgage products. One interesting possibility is that the enterprises, private-label conduits, large mortgage bankers, and other industry participants might vertically integrate or form networks, including firms specializing in different vertical stages of the process, to provide residential mortgage credit. If this were to occur, the resulting entities might develop large capacities for information retrieval and distribution to compete effectively in the mortgage markets, as well as expertise in financial and risk management. These capacities could create synergies in related real estate activities and in nonhousing financial markets. Under privatization, the enterprises would not face the restrictions in their current charters that now prevent them from supplying these alternative services. In our analysis of the likely effects of privatization on the enterprises, we assumed that privatization would result in the reduction or elimination of the perception of an implied federal guarantee. While it appears that eliminating the benefits, restrictions, and obligations associated with the enterprises’ federal charters would be likely to at least reduce the markets’ perception of the implied guarantee, we recognize the uncertainty inherent in any attempt to predict the behavior of financial markets. To the extent that the markets do not perceive that the ties between the enterprises and the federal government are broken, the enterprises’ funding advantage may remain. If little change in the funding advantage occurred, the primary effects of privatization would be to (1) raise some operating costs by eliminating the tax and SEC registration-related benefits that flow directly from the charter and (2) free the enterprises to do business in new areas.
In such a case, the enterprises could become even larger and generate even greater potential risk to the government should the government feel the need to rescue a failing enterprise that was “too big to fail.” The effect of privatization on the enterprises is difficult to predict. First, it is always difficult to predict with much precision how an organization will respond to changes in its environment, whether from higher tax liabilities, higher interest costs, or reduced restrictions on its actions. Second, the most important effects depend on changes in market perceptions and the subsequent effect of those perceptions on the funding costs the enterprises would face. If the markets perceive the privatized enterprises’ securities as being riskier than the government-sponsored enterprises’ securities, they are likely to demand higher returns to compensate for the greater perceived risk. This could cause the enterprises’ funding costs to rise significantly. The markets would also likely insist on greater capital to maintain a given credit rating. These increased funding costs and any resulting changes in enterprise behavior could bring about substantial change in the overall mortgage market. The enterprises could alter their behavior in a number of areas, including the amount of mortgage financing they do, the way they finance mortgages, and the way they deal with credit risk in their MBS. The potential effect under this scenario also depends on the responses of other participants in the housing finance market, as discussed in the next chapter. On the other hand, if market perceptions do not change and interest costs do not rise, the primary cost increases from privatization would come from SEC registration fees and state and local taxes. In this case, the cost increases that the enterprises would face may be minor in relation to the potential profitability from their increased business opportunities. Changes in the operating and marketing strategies of the enterprises—whatever the specific changes might be—could also affect the behavior of other industry participants. In oral comments on our draft report, Fannie Mae and Freddie Mac officials disagreed with our analysis of the financial benefits that government sponsorship provides to the enterprises and with what they perceived as an implication that the benefits are retained by the enterprises rather than passed on to homebuyers. Fannie Mae officials said that the draft report did not provide sufficient context for the estimated range of financial benefits that government sponsorship provides to the enterprises. Generally, they did not think that it is meaningful to discuss charter-based benefits without discussing the accompanying restrictions and obligations and what is passed through to borrowers. Although Fannie Mae officials said that it is possible to estimate the value of government sponsorship, they said a more appropriate analysis would require a specific identification of who benefits from government sponsorship. Because the Fannie Mae officials believe that homebuyers are the primary beneficiaries of the financial benefits, they said we should have estimated the total value of lower mortgage interest rates to the American public. In addition, Fannie Mae officials said that the enterprises do not pay MBS yields; rather, they only guarantee the timely payment of MBS principal and interest in exchange for a guarantee fee.
Therefore, the officials said, Fannie Mae does not incur funding costs on MBS, so it would not incur additional costs of 5 to 35 basis points in the event of privatization. Freddie Mac officials also said that our estimates of the benefits associated with government sponsorship are high and that any financial benefits flow to homebuyers in the form of lower mortgage interest rates. The Freddie Mac officials further stated that we should use the aftertax estimates, since a portion of any financial benefits is returned to the federal government in the form of income taxes. In addition, the Freddie Mac officials said that privatization would not eliminate the perception of the federal government’s implied guarantee to support the housing finance system. The officials said that the implied guarantee would remain because the federal government has supported a stable, low-cost housing system for 60 years, and the market would still believe that the government would take necessary steps to protect that system, including providing emergency financial support to significant participants in the mortgage finance system. Officials from both enterprises argued that the Ambrose and Warga study has fundamental flaws and that we should not have relied on it. Fannie Mae officials said that the high-end estimate of $8.9 billion in 1995 is excessive on its face because the enterprises had a combined beforetax income of only $4.6 billion that year: the estimated cost savings to the enterprises were about twice their beforetax profits. Moreover, the officials said that there were problems with Ambrose and Warga’s analysis of rate of return data. Although the Fannie Mae officials acknowledged that Ambrose and Warga’s study also found differences when using yield data, they said the report had so many flaws that we should not use any of its findings to estimate the enterprises’ funding advantage. Freddie Mac officials also said that the Ambrose and Warga study suffers from substantial limitations and questioned our using it as a basis for estimating the enterprises’ funding advantage. The following summarizes their concerns regarding our use of the Ambrose and Warga study:

- They believe the weighted average cost of capital methodology is flawed and should not be relied upon in setting the top of our range of the funding advantage on debt.

- One official said Ambrose and Warga relied on debt return data from 1991 to 1994 to estimate that enterprise funding costs would rise 100 to 200 basis points in the event of privatization. However, the officials said that a similar analysis performed for the years 1985 to 1994 would have shown no difference in returns between enterprise bonds and bonds issued by other borrowers.

- One official said that bond yields interact over time, an econometric problem called serial correlation, and that this invalidates Ambrose and Warga’s estimates.

Fannie Mae officials commented that the draft report’s discussion of the enterprises’ capital adequacy was misleading. Contrary to a statement in the draft report, they said that the ratio of equity capital to assets is not a good measure of the enterprises’ capital because the enterprises are unique institutions that face risks different from those of depository institutions. In particular, the enterprises can hold relatively less capital against MBS, since MBS present lower risks than other types of assets.
Moreover, Fannie Mae officials said the draft report failed to mention that OFHEO is developing risk-based capital standards to ensure the safety and soundness of the enterprises. These standards are intended to ensure that the enterprises will have adequate capital to protect against interest rate risks and other types of risks. We do not believe there is sufficient evidence to conclude that all of the benefits derived from government sponsorship flow through to homebuyers, an issue we address more completely in chapter 3. We have concluded, however, that if the enterprises were fully privatized and the perceived guarantee were reduced or eliminated, their funding costs would increase for both MBS and debt. Although, in a strict accounting sense, the enterprises charge guarantee fees for guaranteeing the timely payment of principal and interest, the fees the market is willing to bear depend, in part, on how much higher mortgage interest rates are than the yield investors will accept for investing in MBS. Because of the perception of an implied guarantee, the market is willing to pay higher guarantee fees or accept a lower yield on GSE MBS than on private-label MBS. Our use of beforetax and aftertax measures of the benefits derived from government sponsorship, and therefore of the potential costs of privatization, is spelled out in the report. The use of an aftertax measure is consistent with a case in which the enterprises would not be able to pass through any extra costs to homebuyers. Therefore, the use of an aftertax measure appears inconsistent with the enterprises’ view that existing benefits flow through to homebuyers and that eliminating those benefits would harm borrowers. On the issue of whether it is even feasible to eliminate the perception of the implied guarantee, we do not take a position. We assume that privatization would reduce, if not eliminate, investors’ perception of an implied guarantee, but we acknowledge the possibility that this may not occur, and we discuss the implications in chapter 3. We did not base our estimates on the Ambrose and Warga study in its entirety. Rather, we relied on selected analyses from the study, after satisfying ourselves that those analyses were methodologically sound and appropriate for our use. For example, we relied on part of the study to calculate our ranges for the funding advantage on debt and MBS. The study is technical; therefore, use of its results required some technical judgments. In their first approach to analyzing interest rate spreads on debt, Ambrose and Warga made estimates using both yield and rate of return data. In prior written comments on the study (see p. 36), Fannie Mae objected to the use of return data, largely because return data measure both the investor returns that are expected upon purchase and unanticipated changes in the value of the bond. We used the results from the yield rather than the return data because yields are a better measure of expected returns at the time an investor buys a bond. Because bond characteristics differ between bonds issued by the enterprises and other issuers, and bond yields interact with one another over time (serial correlation), disentangling these effects can be difficult. Ambrose and Warga recognized how difficult their task was and qualified their results on the basis of the statistical complexities. We relied on their results for the mean yield spread between enterprise and others’ debt based on their approach using yield data, in which they controlled for differences in bond characteristics such as maturity and age. They recognized that interactions between bond yields over time create serial correlation, the criticism cited by Freddie Mac. We acknowledge this problem, but serial correlation affects only the precision of the estimates. The estimated mean yield spreads, which we relied on, are not biased. Because such estimates lack precision, however, we used a wide range for the funding advantage on debt.
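A small simulation, with entirely hypothetical numbers, illustrates this statistical point: serially correlated observations leave the estimated mean spread unbiased, but a naive standard error overstates its precision.

```python
import numpy as np

# Illustrative simulation (our construction, not from the Ambrose and
# Warga study): serially correlated spread observations leave the sample
# mean unbiased but make the naive i.i.d. standard error too small.
rng = np.random.default_rng(0)
true_spread = 37.0            # hypothetical mean yield spread, basis points
n_obs, n_reps, rho = 120, 2000, 0.8

means, naive_ses = [], []
for _ in range(n_reps):
    e = np.empty(n_obs)
    e[0] = rng.normal(scale=10.0)
    for t in range(1, n_obs):              # AR(1) errors: serial correlation
        e[t] = rho * e[t - 1] + rng.normal(scale=10.0)
    sample = true_spread + e
    means.append(sample.mean())
    naive_ses.append(sample.std(ddof=1) / np.sqrt(n_obs))

print(f"average estimate: {np.mean(means):.1f} bp (true spread: {true_spread} bp)")
print(f"actual std. dev. of the estimates: {np.std(means):.1f} bp")
print(f"average naive (i.i.d.) standard error: {np.mean(naive_ses):.1f} bp")
```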
In their second approach, Ambrose and Warga used a weighted average cost of capital (WACC) approach to estimate cost of capital differences. Although we initially employed estimates derived using this approach, we have decided to base our estimates on the more straightforward yield-based approach. In our draft report, the upper end of our estimated range for the funding advantage on debt was 120 basis points. We revised this upper end to 106 basis points. Overall, we do not believe that the statistical estimation problems with the WACC approach or the acknowledged limits of the return-based approach provide sufficient basis to discard the authors’ results that were based on yield data. The Ambrose and Warga estimates based on yield data were higher for the 1991 to 1994 period than for the 1985 to 1990 period. In our view, this most likely reflects imprecision associated with such estimates, changes in the funding strategies of the enterprises, and/or changes in financial markets. We believe it reaffirms our position that a wide range of possible outcomes should be associated with privatization. Finally, the capital adequacy of the enterprises is a complicated and largely unanswered question. Our understanding, based on past government studies and discussions with financial market analysts and regulators, is that each enterprise would likely require greater capital for its current activities if it were privatized. OFHEO is developing risk-based capital standards to help ensure the safety and soundness of the enterprises. If these standards require the enterprises to increase their capital levels, enterprise funding costs and mortgage interest rates could be affected. The exact effects of privatization on the residential mortgage markets cannot be determined with certainty, in part because of the difficulty of knowing how the financial markets would respond to privatization. Our analysis of the effects of privatization on the residential mortgage markets is based on the assumption that privatization would eliminate or substantially reduce the perception of an implied federal guarantee of the enterprises’ financial obligations and increase the enterprises’ costs (as discussed in ch. 2). Under this assumption, privatization would likely lead to an increase in mortgage interest rates. Privatization would also likely lead to changes in behavior in the mortgage markets, particularly increased competition in the secondary mortgage market. The enterprises’ higher cost of funds would likely allow private conduits to compete with the enterprises in purchasing conforming mortgages. In purchasing mortgages, the enterprises may be unable to fully pass their increased funding and other costs on to borrowers, since mortgages with other sources of funding would be available to borrowers. The enterprises would also be likely to charge fees that are more fully risk-based than their current fees; this would cause increases in mortgage interest rates to be greater for borrowers making smaller down payments.
In addition, mortgage interest rates could fluctuate more with the demand for mortgage credit than they have in the past. Due to the size and sophistication of the mortgage finance market, significant regional variations in interest rates seem an unlikely result of privatization. It is widely accepted that the enterprises, through portfolio investments and securitization, have generated many benefits for mortgage borrowers. These benefits include the reduction of regional disparities in interest rates and mortgage availability, the spurring of innovations in mortgage standardization and transaction technology, and the lowering of mortgage interest rates. The markets’ perception of the implied federal guarantee on the enterprises’ financial obligations plays an important—although not singular—role in enabling the enterprises to lower mortgage interest rates, in that the perception lowers the enterprises’ cost of funds. For this reason, the effect of privatization on mortgage interest rates depends critically on the extent to which privatization changes the market’s perception of the likelihood that the government would come to the enterprises’ rescue. If privatization caused the market to change its perception of an implied tie with the government, or substantially weakened that perception, investors would likely demand a higher return for the perceived increase in risk. The resulting higher cost of funds would lead to higher mortgage interest rates as the enterprises attempted to maintain their profits. However, the enterprises may not be able to fully pass on the higher cost of funds, because competition could increase in the conforming mortgage market. In a competitive market, cost savings, such as those realized by the special advantages granted to the enterprises, tend to flow through to consumers, in this case residential mortgage borrowers. When competition is limited, businesses can exercise what is often called market power. When such market power is exercised, cost savings are less likely to fully flow through to consumers, and businesses can realize higher profits. Such profits can accrue to stockholders, managers, employees, and others who provide goods and services to businesses possessing the market power. In this respect, privatization poses complicated policy questions. The fact that government sponsorship ensures the dominance of two chartered enterprises in the securitization of conventional, conforming mortgages produces some benefits, such as greater market liquidity, but it may also produce costs due to lessened competition. If the enterprises currently possess and exercise market power, increasing effective competition would tend to cause more of the benefits of government sponsorship to flow through to borrowers. The extent of market power, however, is difficult to determine for a number of reasons. For our purposes, the most important difficulty is defining the relevant product market when alternative distribution systems deliver similar, yet differentiated, products. For example, the enterprises state that the share of residential mortgages they have funded—about 30 percent—is too small to convey market power, so the benefits of government sponsorship flow through to borrowers. The study commissioned for this project to analyze the effect of privatization on the mortgage market defined the relevant market for purposes of determining market power as conventional, conforming mortgages securitized in the secondary mortgage market.
The resulting duopoly—a market served by two suppliers—and other characteristics of the secondary market (for example, its offering of a fairly standardized product) led the study’s authors to conclude that the enterprises “tacitly collude” and earn above-average profits. They contend that government sponsorship introduces inefficiencies that privatization could eliminate. Because of insufficient statistical evidence, we do not know whether a broad or narrow product market definition is appropriate in determining the market power of the enterprises. Therefore, we cannot determine the enterprises’ market power or the potential benefits resulting from increased competition. If, under the current structure, the enterprises are not exercising market power and are passing most of the benefits from government sponsorship on to mortgage borrowers, increased competition may have little effect on mitigating the increase in mortgage interest rates in the conforming loan market that could result from privatization. However, if government sponsorship creates market power for the enterprises, conforming interest rates in the current environment may incorporate to some extent the extra profits resulting from the market power of the enterprises. Under this scenario, any increased competition resulting from privatization could provide the potential benefit of putting downward pressure on conforming mortgage rates. The likely increase in average mortgage interest rates is the broadest, most important market effect of privatization. The results of our analysis indicated that privatization could increase interest rates on fixed-rate, single-family housing mortgages below the conforming loan limits within an average range of about 15 to 35 basis points. Assuming that the interest rate increase does not cause a decline in house prices, the monthly payments of a borrower with a $100,000 thirty-year, fixed-rate mortgage would increase by $10 to $25, as the calculation below illustrates. We use a $100,000 thirty-year fixed-rate mortgage to illustrate the increase in monthly payments because the average conventional, conforming loan amount for mortgages purchased by the enterprises is about $100,000. For $2 trillion in outstanding conventional, conforming fixed-rate mortgages, the aggregate annual increase in mortgage payments would be in the neighborhood of $3 billion to $7 billion.
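The monthly payment figures follow from the standard fixed-rate amortization formula. A minimal sketch, assuming a hypothetical 7.5 percent base mortgage rate (the dollar change varies only slightly with the base rate chosen):

```python
# Standard fixed-rate mortgage payment arithmetic behind the $10 to $25
# figure; the 7.5 percent base rate is an assumption for illustration.
def monthly_payment(principal, annual_rate, years=30):
    """Level monthly payment on a fully amortizing fixed-rate mortgage."""
    r = annual_rate / 12
    n = years * 12
    return principal * r * (1 + r) ** n / ((1 + r) ** n - 1)

base = monthly_payment(100_000, 0.075)
for bump_bps in (15, 35):
    new = monthly_payment(100_000, 0.075 + bump_bps / 10_000)
    print(f"+{bump_bps} bps: payment rises ${new - base:.2f} per month")
```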
Our estimate of the likely effect of privatization on fixed-rate, single-family mortgage rates is based on a multipart analysis. For a preliminary estimate of how much interest rates might rise with privatization, we first sought to determine the interest rate spread between conforming mortgages (those purchased by the enterprises) and jumbo mortgages (those purchased by private-label conduits). Realizing that the interest rate differential is influenced by some factors specific to government sponsorship (which we assumed would be eliminated through privatization) and some that are not, we sought to adjust the estimated spread to account for the factors unrelated to the enterprises’ government sponsorship. The results of this work indicated that it would be reasonable to estimate that the conforming-jumbo interest rate spread would be about 20 to 40 basis points. We next considered the need for one upward adjustment, to account for the possibility of reduced liquidity, and three downward adjustments. The three downward adjustments we considered were to account for (1) the geographic concentration of existing jumbo mortgages, which currently increases credit risk; (2) the possibility that the volatility of loan collateral for jumbo mortgages may exceed that of conforming mortgages; and (3) the likelihood of increased competition and operational efficiencies in the conforming and jumbo markets that could result from privatization. On the basis of this analysis, we estimate that privatization would probably increase average interest rates by about 15 to 35 basis points. Our primary information sources for the gross measure of the impact of privatization on mortgage interest rates included Freddie Mac and the Federal Housing Finance Board. Freddie Mac officials provided us with the interest rate spread between jumbo and conventional mortgage rates for 30-year, fixed-rate mortgages from their Primary Mortgage Market Survey for selected years between 1986 and 1995. The survey asks mortgage lenders monthly for their current commitment rates on a loan with an 80 percent loan-to-value ratio. Spreads were in the 35 to 55 basis point range in 1988, 1989, 1990, and 1992. Lower spreads, ranging from 20 to 25 basis points, occurred in 1986, 1993, and 1995. We also analyzed the interest rate spread for the years 1990 through 1994 using the Federal Housing Finance Board’s (FHFB) survey, Rates & Terms on Conventional Home Mortgages. The survey collects interest rates monthly on a sample of closed loans. We relied on spreads reported for fixed-rate loans. Average spreads were 18, 9, 11, and negative 2 basis points in 1990 through 1993, respectively. Reported spreads continued to be negative in most months in 1994. The Freddie Mac and FHFB data differ in certain respects. The Freddie Mac data do not provide information on mortgage interest rates for borrowers meeting any specific underwriting standard except for loan-to-value ratio. The FHFB survey reports average loan amount, loan-to-value ratio, and term; these averages are generally similar between conforming and jumbo loans. To estimate the interest rate differential created exclusively by the enterprises’ government sponsorship, we turned to a study commissioned for this project. This study analyzed the interest rate spread between conforming and jumbo mortgages by using individual loan-level data. For the years 1989 through 1993, the statistical analysis standardized for many individual loan characteristics, such as location and loan-to-value ratio. The results indicated interest rate spreads of about 40 basis points in California and 30 to 35 basis points in the other states studied for 1989 through 1991. The results for 1992 and 1993 found smaller spreads (in the 25 basis point range), and the results for California were similar to those in other states. For the last two quarters of 1993, the results indicated interest rate spreads of about 20 basis points. The study’s findings were similar to those of two previous studies employing the same methodology, which found spreads in the 30 basis point range. The authors concluded, on the basis of the results over the entire period, that single-family, fixed-rate jumbo loans had interest rates that were generally 25 to 40 basis points higher than single-family, fixed-rate conforming loan rates, holding other characteristics constant.
They concluded that a lowering of the conforming loan limit would likely result in an increase in mortgage interest rates in the lower part of the 25 to 40 basis point range for affected mortgages (i.e., those shifting from conforming to jumbo status), because liquidity in the jumbo market could increase from such expansion. The authors did not reach a numeric conclusion for the effects of privatization, largely because they did not know how much liquidity would be affected by privatization. Primarily on the basis of the results of the commissioned study and the other two studies employing similar methodology, and recognizing that the estimated spreads were volatile, we used 20 to 40 basis points as the estimated average spread between conforming and jumbo mortgages. This estimate served as our initial baseline approximation of how much interest rates would rise with privatization. As mentioned earlier, we considered four adjustments to the 20 to 40 basis point range—one upward and three downward—in determining the likely effect of privatization on mortgage interest rates. The upward adjustment was to account for the possibility of reduced liquidity. Officials of both enterprises emphasized the importance of this factor, but they also acknowledged the difficulties in measuring the liquidity effect. Officials from Freddie Mac stated that liquidity in a privatized market would tend to decrease most when mortgage originations were at their highest levels. We acknowledge that such an effect could result; however, it is our understanding that liquidity in the jumbo market over the past decade has generally been sufficient. Because the private-label conduits would likely expand and compete with the enterprises in the (current) conforming and jumbo markets, the share of conventional mortgages securitized after privatization would likely exceed the current share of jumbo mortgages securitized. Such a development would contribute to a higher level of liquidity in the conventional market than exists now in the jumbo market. In summary, there is no convincing evidence that the upward adjustment for reduced liquidity should be significant. One of the general benefits from mortgage securitization that helps lower interest rates is regional diversification of credit risk. A limiting factor for the private-label conduits that securitize jumbo mortgages is that these loans tend to be concentrated in the Northeast and the state of California. We discussed the impact of this factor with private-label issuers and credit rating agencies. One way they quantified this limiting factor was by relating it to the level of over-collateralization used for credit enhancement. The general consensus was that if a geographically diversified pool of jumbo mortgages could be backed by collateral equal to 103 percent of the security issue, a jumbo mortgage pool with similar characteristics but without such geographic diversification would require 106 to 108 percent collateral. Since such limits to diversification are not present in the conforming market and would not be present with privatization, the observed spread should be adjusted downward. We could not reach a precise statistical estimate of what the downward adjustment for regional diversification should be, but the information on over-collateralization supports a downward adjustment.
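On a hypothetical $500 million issue, those consensus figures translate into the following dollar amounts of extra collateral; the net cost of carrying that extra collateral is what the downward adjustment to the spread reflects.

```python
# Illustrative translation of the 103 vs. 106-108 percent collateral
# figures above; the $500 million pool size is a hypothetical assumption.
pool = 500e6
diversified = 0.03 * pool            # over-collateralization at 103 percent
concentrated_low = 0.06 * pool       # over-collateralization at 106 percent
concentrated_high = 0.08 * pool      # over-collateralization at 108 percent
print(f"extra over-collateralization without geographic diversification: "
      f"${concentrated_low - diversified:,.0f} to "
      f"${concentrated_high - diversified:,.0f}")
```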
In addition, the observed difference between the estimated interest rate spreads for conforming and jumbo mortgages in California and other states for 1986 through 1991 is suggestive of a 5 to 10 basis point adjustment; the higher estimated spread in California is consistent with the large concentration of jumbo mortgages in that state. We adjusted the spread downward by 5 basis points to account for regional diversification. The conforming-jumbo spread may also require a downward adjustment because the volatility of loan collateral for jumbo mortgages may exceed that of conforming mortgages. Borrowers are more likely to default on their mortgage payments if the market value of their residences, the collateral for the loan, falls below the outstanding principal balance on their mortgage loans. One reason why default is more likely on mortgages with relatively high loan-to-value ratios is that relatively small local housing market downturns can trigger default. For any given loan-to-value ratio at the time of origination, the more volatile the price of the residence, the greater the probability of default. We obtained statistical evidence indicating that during the housing market downturn in the state of California, the percentage decline in house prices was greater for higher priced houses (that is, those with jumbo mortgages) than for houses with values below the conforming loan limit. On the basis of our discussions with credit rating agencies, we understand that this is factored into the credit enhancement and pricing of jumbo, private-label MBS. Therefore, a downward adjustment in the estimated conforming-jumbo spread, even if the estimate controls for the loan-to-value ratio, may be warranted. However, there is no convincing evidence that the downward adjustment should be significant over the period when interest rate spreads were estimated. Privatization would abolish charter restrictions on the enterprises that limit their ability to diversify into other markets and, more importantly, to vertically integrate throughout the different segments of the residential mortgage market to realize potential efficiencies. Privatization would also likely lead to entry into the current conforming market by existing private-label conduits and other potential entrants. These private-label entities could better realize economies of large-scale securitization with privatization. We have already addressed how competitive factors could affect how much of the benefits of government sponsorship are passed on to residential mortgage borrowers. Generally, these factors are reflected in the interest rate spread between conforming and jumbo mortgages, because interest rates in the conforming market are currently affected by government sponsorship. However, the potential improvements in operational efficiencies resulting from increased competition are not reflected in this interest rate spread, because interest rates in the jumbo market are not currently affected by the potential efficiencies that could result from privatization. Therefore, there is a rationale for a downward adjustment. However, there is no convincing evidence that the downward adjustment should be significant. From the studies we analyzed, it appears that a reasonable estimate of the conforming-jumbo interest rate spread is currently about 20 to 40 basis points.
Of the adjustments that need to be made to account for differences between the two markets, the most important appears to be the downward one for the potential gain in regional diversification of credit risk. There is no convincing evidence that the other adjustments should be significant; we assume that, during most common mortgage market conditions, the upward adjustment for liquidity does not exceed the combined downward adjustments for the higher volatility of jumbo collateral and the effect of operational efficiencies from increased competition. This conclusion is largely based on observed liquidity in the jumbo market, observed substitutions by mortgage borrowers and lenders between fixed- and variable-rate mortgages, and Hermalin and Jaffee’s analysis of liquidity in the private-label market. Assuming that the sum of the liquidity, house price volatility, and competition adjustments is a wash or near-wash, the estimated interest rate spread could be adjusted downward by 5 to 10 basis points for the regional diversification benefits resulting from privatization. Applying this assumption, we adjusted the estimated interest rate spread of 20 to 40 basis points downward by 5 basis points. From this, we concluded that privatization would probably increase average interest rates within an average range of about 15 to 35 basis points. According to the enterprises’ officials, the enterprises take account of credit risk in their treatment of the mortgages they purchase, all of which must meet their underwriting standards. For example, the enterprises share some credit risk with private mortgage insurers and generally require more mortgage insurance on mortgages with loan-to-value ratios above 85 percent. Both the enterprises and private-label conduits charge guarantee fees for insuring the timely payment of principal and interest on their MBS. The private-label conduits charge risk-based guarantee fees. Although the enterprises have policies consistent with risk-based fees, both the officials from the enterprises and other mortgage industry participants told us that the enterprises do not charge fees that are fully risk-based. Because privatization would likely increase the number of secondary market competitors and change the missions of the enterprises, it would probably motivate the enterprises to implement fully risk-based fee structures. For this reason, the increase in mortgage interest rates associated with privatization would likely be relatively greater for borrowers making small down payments and relatively smaller for borrowers making larger down payments. As discussed more fully in chapter 4, one of the negatively affected groups would be first-time homebuyers, who tend to make relatively small down payments. Officials from both enterprises told us that primary and secondary mortgage market liquidity would suffer with privatization, largely because of the loss of the perceived guarantee of enterprise MBS. In addition, the enterprises’ increased borrowing costs could sharply curtail or eliminate portfolio lending by the enterprises. Officials from Fannie Mae emphasized that this decline in funding from retained portfolio would reduce liquidity. This could result in less liquidity generally, for particular mortgage products, or for specific geographic markets during different parts of the economic cycle, because the enterprises would not necessarily step into the market to buy products whose prices were falling.
Officials from Freddie Mac emphasized that privatization could not only raise the average cost of financial capital to fund mortgages but could also raise it more in periods of high demand for mortgage credit. Neither we nor the enterprises have quantified this liquidity effect of privatization or estimated how much it would affect the mortgage interest rate increase. One reason for the liquidity of the enterprises’ securities is that regulatory guidelines governing the concentration of any one issuer’s securities in the portfolios of investors such as insurance companies and depository institutions do not generally apply to securities issued by the enterprises, because they are considered relatively low-risk government agency securities. If privatization eliminated this agency status, many large mortgage investors, including depositories, would likely face concentration limits on how much they could invest in each of the now-private conduits’ securities. With privatization, a relative scarcity of investors willing to accept private credit enhancements on securities no longer perceived to have government backing could develop during periods of high demand for mortgage credit. However, as stated earlier, we have found no statistical evidence that privatization would result in a substantial reduction of liquidity in the secondary mortgage market. As a result, mortgage interest rates could fluctuate more than they currently do with the demand for mortgage credit, but the extent of such additional fluctuations is unknown. Before the creation of the enterprises, mortgages were funded by depositories that primarily served local markets; this created regional disparities in mortgage interest rates, resulting from regional differences in the demand for and supply of mortgage credit. The enterprises established a valuable secondary market mechanism that enabled financial capital to flow to geographic areas with the greatest demand for mortgage credit. This free flow of capital tended to equalize interest rates across regions on mortgages with similar risk characteristics. Privatization is not likely to result in a return to a mortgage market dominated by depositories holding mortgages in portfolio, because of the continuance of existing mechanisms (including the private-label market) and tools to promote securitization, which the enterprises fostered. On the other hand, the enterprises’ levels of mortgage funding could decrease, and we cannot be certain of the extent to which other entities would be likely to “make up” this decrease in funding. The possibility of a decline, with privatization, in the level of mortgage funding by the secondary market raises the question, however, of how much securitization and capital mobility are necessary to offset potential regional interest rate disparities on mortgages with similar risk characteristics. To determine the likelihood that privatization would result in regional interest rate disparities, we sought to determine the relationship between the growth of the secondary mortgage market and regional interest rate disparities. First, we analyzed regional interest rate differentials (the difference between the highest and the lowest regional mortgage interest rate) based on data for the years 1980 through 1993 that Freddie Mac officials provided from their Primary Mortgage Market Survey.
It is important to note that credit risk variables excluded from the data can create part of the interest rate differentials. The regional interest rate differential declined from 100 basis points in 1980 to less than 20 basis points in the years since 1988. This showed that interest rate disparities had lessened substantially over time. However, the data did not show that the reduction in regional interest rate disparities was due only to greater secondary market activity, because other variables could have influenced regional mortgage interest rates. Nonetheless, Freddie Mac officials attributed this decline to the growth of the secondary mortgage markets created by the enterprises. Evidence presented in a study by Jud and Epley using statistics for the years 1984 through 1987 indicated that after adjustments for loan characteristics that affect interest rate differentials, no significant regional differences remained in mortgage interest rates. This evidence is consistent with the hypothesis that the substantial development of the secondary market, facilitated by government sponsorship, helped eliminate the regional interest rate disparities that had existed before 1984. The finding that significant regional disparities were all but eliminated when the enterprises were much smaller than they currently are is also consistent with the idea that eliminating the disparities did not require the enterprises to be as large as they are today. This result, plus the growth and importance of private-label conduits, leads us to the conclusion that significant regional interest rate disparities on mortgages with similar risk characteristics are not likely to reappear with privatization. Potential regional disparities in interest rates are also relevant to analyzing the importance of the enterprises’ operating “in all markets at all times.” Generally, mortgage lenders may be motivated to tighten borrowing standards or charge higher fees in local markets where housing prices are declining. Such behavior is consistent with risk-based fee structures. Officials from Fannie Mae told us that their charter and mission require them to operate in all markets at all times. They said that one benefit of this requirement is that they serve as a cushion in markets experiencing economic decline. As an example, they stated that they continued to operate in and serve the housing market in Texas throughout the economic decline in the middle 1980s. If Fannie Mae does not restrict credit to regions undergoing recessions while other providers of credit do, Fannie Mae purchases should represent a higher share of mortgage originations in years when a region is in recession. We received annual data on Fannie Mae’s market shares and a housing price index for the years 1980 through 1994 for the states of Texas, Louisiana, Oklahoma, Colorado, California, and Alaska and for the New England region. We agree with Fannie Mae officials that many factors affect the level of participation of Fannie Mae and other lenders in any year. We analyzed year-to-year correlations between Fannie Mae’s share and the housing price index and found no evidence that Fannie Mae provides a cushion during downturns; the sketch below illustrates the form of this test.
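The data shown here are randomly generated placeholders standing in for the actual 1980 through 1994 state series. A cushion effect would appear as a negative correlation: Fannie Mae’s share rising in years when regional house prices fall.

```python
import numpy as np

# Form of the year-to-year correlation test; the series are random
# placeholders, not the actual state-level data we analyzed.
rng = np.random.default_rng(1)
n_years = 15                                   # 1980 through 1994
price_index = 100 * np.cumprod(1 + rng.normal(0.02, 0.05, n_years))
fm_share = np.clip(0.20 + rng.normal(0.0, 0.03, n_years), 0.0, 1.0)

price_changes = np.diff(price_index) / price_index[:-1]
share_changes = np.diff(fm_share)
r = np.corrcoef(price_changes, share_changes)[0, 1]
print(f"year-to-year correlation: {r:+.2f}")   # negative would suggest a cushion
```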
However, a Fannie Mae official aggregated data across years and said the results provided evidence that Fannie Mae provides a cushion. While aggregating statistics across years can be appropriate for analyzing long-term trends in economic variables such as funding levels and interest rate spreads, we question how appropriate such aggregation is for analyzing cyclical trends. On balance, we did not find sufficient evidence to determine whether or not Fannie Mae provides a cushion during housing market downturns in specific regions. The continued market presence of the enterprises in all geographic markets nationwide has helped to eliminate regional disparities in mortgage interest rates and may provide a cushion for local housing markets experiencing an economic downturn. Other financial institutions that fund mortgages and mortgage insurance include those that operate in specific geographic areas and base funding decisions, including decisions on pricing and geographic limitations, on the expected profitability of each product in each geographic market. Privatization would likely motivate the enterprises to adopt funding decisions based on criteria more similar to those of other financial corporations. In addition, the potential increase in secondary market competition would reinforce this change in business behavior. Even so, we conclude that significant regional disparities in mortgage interest rates are unlikely to occur with privatization, because securitization activity should provide sufficient capital mobility across regions. Also, we do not think that privatization would eliminate any substantial stabilizing mechanism for local housing markets with declining prices. In large part, this is because we found little evidence that such mechanisms still require government sponsorship to function effectively. Currently, conventional mortgages are funded by the enterprises and depositories, while private-label conduits operate primarily in the nonconforming market. Virtually all conventional mortgages were funded by depositories before the enterprises existed. However, for a number of reasons, privatization would not likely cause a return to this earlier environment. One reason is the existence of private-label conduits, which were in their infancy in the latter half of the 1980s. Their development is largely attributable to two related factors: (1) the standardization and technological innovation spurred by the enterprises and (2) the general improvement in financial and information technology in the economy. Private-label conduits, which currently specialize in nonconforming mortgages (mostly in the jumbo market), accounted for approximately 18 percent of combined Fannie Mae, Freddie Mac, and private-label MBS outstanding and 13 percent of total MBS outstanding as of September 1995. If privatization were to lead to the enterprises’ loss of both their direct and indirect benefits—especially their funding advantage—private-label conduits would be able to operate on a more level playing field with the enterprises in the conforming market. Because privatization is likely to remove many, if not all, of the enterprises’ restrictions, the enterprises are likely to take the opportunity to operate in the current jumbo market along with the other conduits. Should the enterprises’ cost of funds rise with privatization, it is likely that the overall amount of mortgage funding they provide, whether out of retained portfolio or as MBS, would decline.
However, if the overall level of mortgage interest rates in the unified (post-privatization) mortgage market rises, incentives would be generated for increased funding by private-label conduits in the conventional market. If this increased funding occurred, it should partially offset the enterprises’ reduced funding. To compete successfully in this new privatized market, a conduit may need to be a large organization. First, it appears that there are financial and technological cost efficiencies in the securitization process from operating on a large scale. Second, such conduits would need regionally diversified loan pools to keep the costs of their risks at a competitive level. Third, there may be both incentives for and additional advantages from innovation for firms that are a significant part of the mortgage market. For example, it may improve efficiency and profitability to vertically integrate or form networks within the housing finance system. This could lead to further improvements in technology and advantages from information sharing. As a result, we would not anticipate that a large number of major firms would compete in this market. While the possibility of additional competition in the housing finance market could be a spur to increased innovation, the possibility that the enterprises could lose their dominant position may reduce their incentives to innovate. As government-sponsored enterprises, Fannie Mae and Freddie Mac currently have cost advantages (mostly funding) over potential rivals in the development of efficiency-generating innovations. In addition, their cost advantages may have sheltered them from potential competitors in the secondary market. Because of their market dominance, the financial returns from developing innovations are likely to accrue to the enterprises rather than to a multitude of competitors. To the extent the enterprises’ market share declined, privatization could cause the enterprises to innovate less. The incentives for other market participants to innovate, however, would increase with privatization. For example, our discussions with industry participants and experts indicated that large mortgage bankers would be more likely to develop automated underwriting, appraisal, and mortgage servicing innovations if the enterprises were privatized. Because of these offsetting incentives, the net effect on the overall level of innovation is impossible to predict. An increase in the ability of private-label conduits to diversify credit risks across a wider range of housing prices and geographic locations could facilitate their expansion and could be a determining factor in whether, and to what extent, these conduits would be able to offset the expected decline in funding by the enterprises. As with many financial products, credit enhancement mechanisms, such as pool insurance and parent guarantees, have evolved over time. To the extent this evolution takes advantage of enhanced efficiencies, it is more likely to improve the overall functioning of the mortgage market. The recent development of private-label MBS has motivated the development of credit enhancement mechanisms by issuers and underwriters. Privatization could motivate even greater development. One of the major uncertainties associated with privatization, however, is how well market participants can develop credit enhancement mechanisms that provide the assurances required by a wide range of mortgage investors.
This uncertainty complicates the task of estimating the growth of private-label conduits with privatization of the enterprises. Competition between the enterprises and private-label conduits is unlikely to fully offset the overall reduced availability of secondary mortgage market financing that would likely result from the enterprises’ increased funding costs. To some extent, the need for secondary mortgage market financing would also likely be less, because the increased profit potential of mortgages resulting from the expected rise in mortgage interest rates could induce some banks and thrifts to hold more of the mortgages they originate in portfolio rather than to sell them in the secondary market. To offset the interest rate risk associated with fixed-rate mortgages, these banks and thrifts could also be induced to originate more variable-rate mortgages. Such mortgages are not sold as frequently as fixed-rate mortgages in the secondary market. If banks and thrifts held more of the mortgages they originate in portfolio, this could lead to depositories’ greater use of Federal Home Loan Bank (FHLB) System advances. Data show the depositories’ increased use of variable-rate mortgages. Before the thrift crisis in the late 1980s, depositories tended to originate long-term, fixed-rate mortgages funded by short-term liabilities. About 6 percent of all mortgage holdings by thrifts were variable rate in 1980. In 1993, about 47 percent of all jumbo mortgage originations were variable rate; further, as of June 1995, about two-thirds of all mortgage holdings by thrifts and nearly 40 percent of those by commercial banks were variable rate. Unlike fixed-rate mortgages, variable-rate mortgages tend to be funded by depositories rather than securitized, because they can be held in portfolio with less interest rate risk. In 1993, less than half of all jumbo originations—45 percent—were securitized, compared to nearly 60 percent of conforming mortgage originations. However, with privatization, to the extent that private-label conduits would be better able to diversify risks geographically, the share of mortgages securitized is likely to be greater than that in the current jumbo mortgage market, although possibly smaller than that currently observed in the conventional market. Privatization would likely change the behavior of market participants and increase average interest rates on fixed-rate, single-family mortgages within an average range of about 15 to 35 basis points. However, privatization would not mean the end of the secondary mortgage market, a return to regional disparities in mortgage interest rates that were not based on differences in risk, or a lack of mortgage credit in the economy during parts of the business cycle. It would probably mean that mortgage rates would increase in areas with higher risks, for houses with higher loan-to-value ratios, and in periods of high mortgage demand. In oral comments, Fannie Mae and Freddie Mac officials disputed several statements included in the draft version of this chapter. The officials said that privatization would result in higher mortgage interest rates than stated in the draft, and Fannie Mae officials said they did not fully understand the methodology we used to estimate the potential mortgage rate increase. Enterprise officials also disagreed with statements in the draft that they said implied the housing markets may lack competition and that the enterprises exercise market power.
Moreover, enterprise officials said that privatization would generate significant regional variations in mortgage costs, and they disagreed with our contention that there is insufficient evidence for concluding that the enterprises provide a cushion during housing market downturns in specific regions. In addition, Freddie Mac officials said that the increased use of adjustable-rate mortgages (a form of variable-rate mortgage) would result in higher mortgage foreclosure rates. Fannie Mae officials said that privatization would likely raise mortgage interest rates more than the 15 to 35 basis points estimated in the draft report. They said that one reason for this disagreement is that we did not adequately consider the impact that privatization would have on the liquidity of the home financing system. A Fannie Mae official said that private sector jumbo MBS traders, when asked to list periods of illiquidity, identified three such periods over the past decade. The traders told the Fannie Mae official that increasing interest rates in 1994, combined with observed differences in jumbo prepayment speeds by issuers, led to a period during which pricing existing jumbo securities became extremely difficult. Because the jumbo market has experienced such periods of illiquidity, the Fannie Mae officials said it is not unreasonable to predict that the larger mortgage market would experience similar illiquid periods and higher mortgage rates in the event of privatization. In addition, they thought that greater use of private-label credit enhancements would result in higher mortgage rates. They did not, however, predict the potential impact of reduced liquidity on mortgage interest rates. Freddie Mac officials said that mortgage rates would increase by more than 15 to 35 basis points; they predicted an increase of 55 to 86 basis points. The officials said that the spread between conforming and jumbo rates ranged from 11 to 70 basis points between 1986 and 1996, with a mean spread of 43 basis points. They stated that several factors resulting from privatization would cause interest rates to increase by 55 to 86 basis points. For example, they said that in the event of privatization, private-label issuers would have to increase the volume of subordinated securities by 500 percent to replace the role of the enterprises. Freddie Mac estimated that this change alone would add 25 basis points to the estimated increase in mortgage rates. In addition, they said that the commercial mortgage market, in which Freddie Mac does not participate, experiences substantial periods of illiquidity. Fannie Mae officials also said that we did not clarify our methodology for estimating the spread between conforming and jumbo loans prior to adjustments; we estimated a spread of 20 to 40 basis points before adjustments. The Fannie Mae officials said that the Cotterman and Pearce paper estimated a spread of 25 to 40 basis points between conforming and jumbo loans, and they could not understand why we used an estimated range of 20 to 40 basis points. A Fannie Mae official also said that there is no evidence that the enterprises exercise market power and that the secondary market for conforming loans is not a relevant market for analyzing market power. In his view, therefore, there is no meaningful duopoly consisting of Fannie Mae and Freddie Mac.
He said the enterprises are participants in the mortgage financing market along with many other players, such as banks and insurance companies, that also buy and sell mortgages. Additionally, the Fannie Mae official stated that there were substantial flaws in the Hermalin and Jaffee paper, which contended that the enterprises tacitly collude. For example, he said the authors reviewed data only from 1989 to 1993, when an analysis of 1985 to 1995 would have produced contrary results. The Fannie Mae official also said that Hermalin and Jaffee ignored evidence that shows, on a monthly basis, that the market share data of Fannie Mae and Freddie Mac are quite volatile. He cited this as evidence that the enterprises do not engage in tacit collusion. Freddie Mac officials stated that there is no evidence of a lack of competition in the mortgage markets. They said there is no basis for excluding all firms that buy and sell mortgages from the definition of the relevant market. Further, the Freddie Mac officials stated that the guarantee fees the enterprises charge for securitization services have declined since the early 1980s. They said that declining fees are inconsistent with arguments that the enterprises exercise market power. The Freddie Mac officials also reemphasized comments they made on the chapter 2 draft that the financial benefits of government sponsorship flow to homebuyers in the form of lower interest rates and are not retained by the enterprises. Fannie Mae officials also disagreed with an assertion in the draft report that privatization would not result in significant regional variations in mortgage interest rates. The officials said the report acknowledged that privatization would result in risk-based pricing: for example, homebuyers making relatively low down payments would pay higher mortgage rates and fees. The Fannie Mae officials said they could not understand why the draft report did not seem to consider the possibility that with privatization, specific regions of the country experiencing economic downturns would also experience relatively higher mortgage costs. The Fannie Mae officials said that this “risk premium” would probably become permanent in regions of the country that are perceived to have volatile home prices. The officials said this contrasts sharply with the current conforming mortgage market, where lenders nationwide can get the same posted cash price for loans and homebuyers nationwide have access to the same rates. Freddie Mac officials also said that privatization would result in significant regional variations in mortgage interest rates. For example, they said that the regional variations observed in today’s jumbo mortgage market would likely be replicated in the larger mortgage market. Freddie Mac officials also said that evidence from regions of the country that have suffered economic downturns in recent years, such as New England, indicates that lenders and borrowers in these areas experience disparities in the cost and availability of credit. Fannie Mae officials also said the draft report ignored substantial evidence that the enterprises currently provide a substantial “cushion” to the housing markets in regions of the country experiencing economic downturns. For example, the officials said that the enterprises’ market share increased in such regions during economic downturns.
The officials also found that there was a significant negative correlation between changes in the housing price index and Fannie Mae’s market share in California and New England between 1984 and 1994. In other words, when housing prices declined in these areas, Fannie Mae’s market share tended to increase, which the officials said demonstrates the regional cushion. Freddie Mac officials disputed the draft report’s analysis, which correlated annual data for 1980 to 1994, by state, on Fannie Mae’s market share and a house price index and found “no evidence” that Fannie Mae provided a regional cushion. The officials said that including early 1980s data ignores substantial changes in the secondary market that occurred during those years. The officials said that the data for the early 1980s are skewed because the enterprises dramatically increased their mortgage purchase volume during those years, particularly as a result of the introduction of the Guarantor and Swap program in 1981 and CMOs in 1983. The officials said that changing the beginning of the sample period from 1980 to 1985 changes the results. The officials stated that such an adjustment showed, for example, a strong negative correlation between declining house prices and Freddie Mac’s market share in three states that experienced substantial economic downturns: California, New York, and Texas. In addition, Freddie Mac officials said that the expected increase in adjustable-rate mortgages at the expense of fixed-rate mortgages would result in more mortgage foreclosures. The Freddie Mac officials provided data on mortgages purchased by Freddie Mac showing that the foreclosure rate on adjustable-rate mortgages between 1990 and 1995 was at least twice the foreclosure rate on fixed-rate mortgages, even though adjustable-rate mortgages have higher down payment requirements. We explained how important the enterprises are to the housing markets, and we analyzed the connection between the benefits conferred on that market through the enterprises and the benefits received by households. We do not, however, believe that we can state how much of the benefits generated flow to households. Nor can we say exactly how privatization would affect the housing market. Even so, we made a change to our draft report to address the enterprises’ concerns that we did not provide an overall measure of the effects of lower interest rates on the mortgage market as a whole. Using an estimate provided by Freddie Mac for the outstanding value of conforming, conventional, fixed-rate mortgages, we calculated the total benefit as ranging from $3 billion to $7 billion. We also clarified how we derived the spread between jumbo and conforming fixed-rate mortgages. We also added more precise language to indicate that we would not expect significant variations in mortgage costs across regions on mortgages with similar risk characteristics. We have included information provided by a Fannie Mae official on temporary disruptions in liquidity in the jumbo market. The official did not know how serious these disruptions were. We continue to conclude that the share of conventional mortgages that would be securitized with privatization would likely exceed the current share of jumbo mortgages securitized, and such a development would contribute to a higher level of liquidity in the conventional market with privatization than exists now in the jumbo market.
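To make the arithmetic behind the $3 billion to $7 billion figure concrete, the following back-of-the-envelope sketch applies the 15 to 35 basis point rate effect to an assumed outstanding balance of roughly $2 trillion in conforming, conventional, fixed-rate mortgages. The dollar balance is an illustrative assumption consistent with the range above, not the Freddie Mac estimate itself.

```python
# Back-of-the-envelope sketch of the aggregate benefit calculation described
# above. The $2 trillion outstanding balance is an illustrative assumption;
# the report relies on a Freddie Mac estimate that is not reproduced here.

OUTSTANDING_BALANCE = 2.0e12   # assumed stock of conforming, conventional, fixed-rate loans
SPREAD_RANGE_BP = (15, 35)     # estimated rate effect of privatization, in basis points

for bp in SPREAD_RANGE_BP:
    # One basis point is one one-hundredth of a percentage point (0.0001).
    annual_benefit = OUTSTANDING_BALANCE * bp * 0.0001
    print(f"{bp} basis points -> about ${annual_benefit / 1e9:.0f} billion per year")
```

At 15 basis points, the interest saving on the assumed balance is about $3 billion a year; at 35 basis points, about $7 billion, matching the range reported above.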
Our discussion indicates that we placed more emphasis on studies, such as the commissioned one by Cotterman and Pearce, that use individual loan level data and control for loan characteristics than we did on other data sources reporting the interest rate spread between jumbo and conforming mortgages. Neither Fannie Mae nor Freddie Mac officials criticized the Cotterman and Pearce study. Freddie Mac estimated the spread using a data source different from the one its officials had originally used and provided to us when we met in the course of this assignment. Both data sources are based on telephone surveys of lenders. We cannot determine why the spreads they now report are larger than those they reported previously, but we continue to rely primarily on the studies by Cotterman and Pearce and the other two studies employing similar methodology. Freddie Mac officials adjusted this spread upward by 25 basis points, on the basis of their estimate of the effect on rates of an increase in the use of subordinated securities used to finance many private mortgage pools. We did not make such an adjustment because, in our view, it is likely that interest rate spreads between jumbo and conforming mortgages already reflect the impact of subordinated securities on jumbo mortgages. Finally, we do not see how the commercial mortgage market, a market in which loan underwriting decisions and standardization are very different from the single-family residential mortgage market, provides reliable information on the level of liquidity that could result from privatization of the enterprises. Our draft report did not take a position on whether the enterprises do or do not have market power, because we could not, from our analysis of the data, make such a determination. While the enterprises would like us to conclude that they do not exercise market power, we continue to conclude that there is insufficient statistical evidence to reach such a conclusion. Both enterprises emphasize that they compete vigorously both with each other and with depository institutions. We think this evidence is insufficient to conclude an absence of market power, because depository institutions fund a higher share of variable-rate mortgages while the enterprises fund relatively more fixed-rate mortgages. These products have differing characteristics, and their competitive impacts on one another depend on how highly substitutable they are to borrowers. Both enterprises also criticized the commissioned study by Hermalin and Jaffee, stating that the study does not consider how monthly shares of secondary market purchases fluctuate between the enterprises. Hermalin and Jaffee interpreted the stability of annual shares as support for their finding that the enterprises are tacitly colluding duopolists in the (narrowly defined) secondary mortgage market for conforming loans. In competitive markets, the process of rewarding the relative efficiency of one or more sellers tends to create unstable market shares measured over long periods of time. The evidence the enterprises presented showing market shares that fluctuated was based on monthly data, and we believe it could simply reflect random or seasonal fluctuations in mortgage originations that affect each enterprise differently (e.g., because the regional distributions of their mortgages differ). Finally, Freddie Mac officials argued that the general decline in guarantee fees by both enterprises since 1985 indicates a competitive market where all of the benefits to the enterprises flow through to borrowers.
The data provided by Freddie Mac show that fees have declined, but they do not show whether fees are high or low compared with those a competitive market would produce. The competitive process the enterprises have described was largely in place in the 1980s, when fees were higher. Thus, the decline in fees reflects either cost changes or an increase in competition or potential competition. The private-label conduits, in their infancy in the mid-1980s, may have provided a source of potential competition. CBO emphasized the possible impact of potential competition on the enterprises when it stated: “Some empirical evidence suggests that the GSEs may not have priced their services at fully competitive levels in the 1980s.” Even if there were evidence of some increased competition from private-label conduits or other sources, we still do not know whether the market is competitive enough to cause all or a large part of the benefits from government sponsorship to flow through to households with mortgages. After considering the enterprises’ comments, we clarified our discussion to indicate that we do not think privatization would lead to significant regional disparities in mortgage interest rates that were not based on risk differences. However, we did not change our overall conclusion that privatization is not likely to significantly reduce capital mobility across regions. We analyzed year-to-year correlations between Fannie Mae’s share of originations and a housing price index in the states of Texas, Louisiana, Oklahoma, Colorado, California, and Alaska and in the New England region. A negative correlation indicates that Fannie Mae could be providing a cushion in declining markets. When we did the analysis using data for the 1984 to 1994 period, as suggested by enterprise officials, we found negative correlations in Texas and Oklahoma and positive correlations in the remaining areas. We also reanalyzed the data for the 1984 to 1994 period by estimating correlations between changes in Fannie Mae’s share and changes in the housing price index. In addition to Texas and Oklahoma, the correlation for Colorado was also negative. These results are also consistent with our original conclusion that the evidence is ambiguous. Finally, we have no evidence on what effects privatization would have on foreclosure rates. We have no basis to evaluate the various factors that may be associated with foreclosure rates on adjustable-rate mortgages purchased by Freddie Mac. Privatization would likely remove one of the federal mechanisms for channeling residential mortgage funding to those borrowers and geographic areas that lawmakers have deemed worthy of special consideration (targeted groups). In our review of the enterprises’ activities that were designed to meet their social goal obligations as established by HUD, we found little definitive evidence of how housing affordability and homeownership opportunities for targeted groups would be affected by privatization. The effects on targeted groups of eliminating the enterprises’ social goal obligations are uncertain for three primary reasons. First, the effects would depend largely upon whether other federal mechanisms that support housing affordability and homeownership are maintained or expanded after privatization and the impacts of those mechanisms.
Second, it is difficult to judge whether and how well the enterprises have achieved their goals, because 1993 was the first year for which the enterprises provided HUD the data necessary to monitor the amount of funding provided to targeted groups under HUD’s interim goals, and the permanent goals HUD has recently promulgated contain a new measure of underserved areas. Third, neither we nor the enterprises were able to quantify the impacts of the enterprises’ social goal efforts on housing affordability. Assuming that privatization leads to the elimination of the enterprises’ social goal requirements without any change in other government mechanisms, the likely increase in mortgage interest rates for single-family housing (the broad market effects discussed in chapter 3) would make homeownership less affordable. In particular, the increase in mortgage interest rates could cause a delay in homeownership, primarily for young households with low but rising incomes. Because the enterprises play such a small role in the multifamily housing market, it is unlikely that privatization would have a significant effect on mortgage interest rates for multifamily housing or on housing affordability for residents of such rental housing. Privatization would likely eliminate one of the federal government’s means of channeling residential mortgage credit to borrowers and geographic areas that lawmakers have designated for special consideration. More specifically, privatization would likely eliminate the enterprises’ affirmative obligations as set forth in the Housing and Community Development Act of 1992 (the 1992 Act): “to facilitate the financing of affordable housing for low- and moderate-income families in a manner consistent with overall public purposes, while maintaining a strong financial condition and a reasonable economic return.” If the enterprises were privatized, HUD’s regulation of the enterprises to achieve social goals would likely have to be eliminated for the following reasons. First, the enterprises would have new charters that would eliminate both privileges and restrictions specific to their housing finance missions, and the social goals are now an integral part of this overall organization. Second, if social goal requirements remained, the financial marketplace might continue to perceive an implied federal guarantee for the enterprises. Third, if the enterprises continued to face social goal requirements and the new competitors that entered the secondary market did not, there would not be a level playing field among the secondary market entities. We discussed with HUD officials one option that would continue HUD’s social goal regulation of the enterprises. It would involve retaining some social goal regulation of the enterprises because of possible residual advantages they would still have due to the period of government sponsorship. This issue is related to whether the enterprises should pay some sort of exit fee (directly or indirectly in the form of social goal requirements) upon privatization for benefits received during the period of government sponsorship. However, based on our discussions with industry participants and regulators, it seems likely that social goal regulation of the enterprises by HUD would not continue following privatization. As discussed in chapter 5, if Congress decides to privatize, it could be important to convince the markets that links between the enterprises and the government are broken, in order to change investors’ perceptions about any implied guarantee.
It could be harder to convince the markets if some residual social goals remained for the privatized entities. In our review of the enterprises’ activities to meet social goal requirements, we found little definitive evidence of how housing affordability and homeownership opportunities for targeted groups would be affected by privatization. Fannie Mae has devoted extensive resources to special programs to meet social goal requirements and help fulfill its housing mission. Freddie Mac has devoted extensive resources to pilot programs and related activities, such as its Underwriting Barriers Outreach Group program, to expand housing opportunities both generally and for underserved areas and groups. However, quantification of the enterprises’ efforts at the time of our review was generally a measurement of resource commitments rather than outcomes. As discussed in chapter 1, two of the statutory purposes of the enterprises are to provide ongoing assistance to the secondary market for residential mortgages (including activities relating to mortgages on housing for low- and moderate-income families involving a reasonable economic return that may be less than the return earned on other activities) by increasing the liquidity of mortgage investments and improving the distribution of investment capital available for residential mortgage financing; and to promote access to mortgage credit throughout the nation (including central cities, rural areas, and underserved areas) by increasing the liquidity of mortgage investments and improving the distribution of investment capital available for residential mortgage financing. The 1992 Act required HUD to promulgate rules that set forth goals for the enterprises to meet in purchasing mortgages made to designated income groups and in geographic areas defined as underserved. Individuals we interviewed attributed the motivation for promulgating and enforcing the social goals partly to the perception that the enterprises’ distribution of conventional, conforming loan funding going to low- and moderate-income borrowers was lagging behind the primary mortgage market’s funding of such mortgages. A Federal Reserve Board study using 1992 Home Mortgage Disclosure Act data supported this perception. The purpose of the goals is to increase the total supply of residential mortgage funds to targeted borrowers, which in turn could reduce mortgage costs for such borrowers. The impact on mortgage costs depends on how much the social goals serve to increase enterprise funding levels to targeted borrowers and how mortgage originations by other lenders (namely, depository institutions that undertake portfolio lending and mortgage bankers who originate federally insured mortgages for Ginnie Mae mortgage pools) are affected. It is easier to quantify how the social goals affect enterprise activities than it is to quantify the final market outcomes of such activities. The broad purposes of the 1992 Act do not answer a number of questions about legislative expectations of HUD and the enterprises in their implementation of these social goal requirements. For example: Should the enterprises’ promotion of access to mortgage credit throughout the nation provide remedies to alleviate possible imperfections in private mortgage markets, such as those created by racial discrimination? Or should the enterprises improve the distribution of investment capital using some different standards?
Should HUD promulgate separate subgoals for central cities and rural areas, or specify one or more geographic areas that it considers underserved? The 1992 Act directed HUD to promulgate regulations setting annual goals for each enterprise for the purchase of mortgages relating to each of the following three categories: housing for low- and moderate-income families; housing located in central cities, rural areas, and other underserved areas; and rental and owner-occupied housing for low-income families in low-income areas and for very low-income families. These goals were set in part to bolster HUD’s monitoring and enforcement by extending to both enterprises goals that previously had been established only for Fannie Mae. The 1992 Act established a transition period of calendar years 1993 and 1994 to allow time for HUD to collect data and implement these requirements, and it provided interim annual purchase goals for each enterprise during the period. Under these goals, 30 percent of the total number of dwelling units financed by mortgage purchases of each enterprise during the year were to be from mortgages serving low- and moderate-income families; likewise, 30 percent of dwelling units were to be for housing located in central cities designated as such by the Office of Management and Budget (OMB). The amounts were essentially the same as the percentage goals (known as the “30/30 goals”) that had been previously established for Fannie Mae under HUD’s regulations. Authority for the twin 30/30 goals was contained in the 1968 chartering legislation for Fannie Mae, but they were not promulgated until 1979. These goals were not based on any analytical studies, and, as we understand it, they were never monitored or enforced. In addition, the 1992 Act established interim “special affordable housing goals” for each enterprise to acquire mortgages serving low-income families in low-income areas and very low-income families. Under these goals, Fannie Mae was to purchase at least $2 billion in such mortgages during the period, while Freddie Mac was required to purchase at least $1.5 billion. According to HUD officials, HUD had originally begun research on social goal regulations for the enterprises as early as 1989. The agency’s approach to this area, at that time, was to ensure that the benefits from government sponsorship were equally distributed across all borrowers. Following passage of the 1992 Act and the beginning of the Clinton administration in January 1993, this approach shifted somewhat. HUD’s policy became one in which the enterprises should lead the market for lending to low- and moderate-income and other underserved borrowers, rather than simply mirroring the primary, conforming, conventional mortgage market. HUD officials are presently considering the appropriate scope of this shift. If mirroring the market means that the enterprises fund a share of mortgages benefiting a targeted group equal to the share observed in the overall primary market, “leading the market” could be interpreted to mean that the enterprises should devote larger shares of their funding to targeted groups. If social goal regulations were to require leading rather than mirroring the market, it would be more likely that housing opportunities and affordability for targeted borrowers would be improved. The goals established for the enterprises are based, in part, on the targeted groups’ shares in the primary, conforming, conventional market.
The relevant comparison was the primary market because the secondary, conforming, conventional market is so dominated by the enterprises that they would always mirror it. In 1993, HUD published a notice of proposed housing goals under the 1992 Act that included interim goals for the enterprises for 1993 and 1994. Final goals were promulgated on December 1, 1995, effective January 2, 1996. For low- and moderate-income housing, the goals are 40 percent of mortgage purchases during 1996 and 42 percent yearly during 1997 through 1999. The special affordable housing goals (for mortgages of low-income families in low-income areas and very low-income families) are 12 percent of all mortgage purchases in 1996 and 14 percent yearly during 1997 through 1999. The underserved area component replaced the old central city requirement. Purchases are to count toward the goal if the census tract has median income below 120 percent of median income for the overall metropolitan area (or, for a rural census tract, for nonmetropolitan areas in the state) and at least 30 percent of the residents are minorities. Purchases also count if census tract median income is below 90 percent of median income for the overall metropolitan area or, for rural areas, below 95 percent of median income for nonmetropolitan areas in the state. For purchases of mortgages on housing located in underserved areas, the goals are 21 percent of purchases in 1996 and 24 percent yearly during 1997 through 1999. HUD estimated the percentage of each enterprise’s purchases in 1994 that met the income, special affordable, and underserved area components in the new final rule (see table 4.1). Fannie Mae’s 1994 production levels exceeded the goals set for the remainder of the decade in the final rule. Freddie Mac’s 1994 production exceeded the underserved areas goal but fell short of the low- and moderate-income and special affordable goals set for the remainder of the decade. Each enterprise’s production toward each goal in 1994 exceeded the share attained the previous year. Officials of both enterprises told us that their charters and the 1992 Act are consistent with their mission requirements to be in all markets at all times. Both enterprises emphasized that their standard programs are designed to benefit all homebuying borrowers, including those who are low- and moderate-income, minority, or underserved area residents, and both have targeted lending programs to support homeownership and housing affordability for targeted groups. The enterprises, however, have differing perceptions of how they should respond in meeting the regulatory social goals. Fannie Mae has a number of special programs that are designed to reach out to central city, low-income, and minority and ethnic group borrowers who may feel disenfranchised from the housing finance market and the attainment of homeownership. Fannie Mae officials stress the importance of their outreach efforts with community groups in this process. These efforts are reflected in Fannie Mae’s strong support for a central city lending goal, which its officials argue is legally required by the 1992 Act. Fannie Mae also has consistently purchased mortgages supporting multifamily rental housing, which is reflected in its support for the special affordable housing goal. Fannie Mae officials generally view the low-income, central city, and special affordable goals as a reaffirmation, in part, of their housing finance mission.
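The underserved-area test in the final rule, described above, reduces to two alternative conditions on a census tract. The following sketch restates it; the function and parameter names are ours, and definitional details of the actual rule (such as how tract geography is determined) are omitted.

```python
# Simplified restatement of the underserved-area test in HUD's final rule,
# as described above. Names are illustrative; the rule itself contains
# definitional details (e.g., tract geography) that are omitted here.

def is_underserved(tract_income: float, area_income: float,
                   minority_share: float, rural: bool) -> bool:
    """Return True if a census tract counts toward the underserved-areas goal.

    tract_income   -- median income of the census tract
    area_income    -- median income of the metropolitan area (or, for rural
                      tracts, of nonmetropolitan areas in the state)
    minority_share -- fraction of tract residents who are minorities
    rural          -- True for rural (nonmetropolitan) tracts
    """
    ratio = tract_income / area_income
    # Test 1: income below 120 percent of the area median combined with a
    # minority share of at least 30 percent.
    if ratio < 1.20 and minority_share >= 0.30:
        return True
    # Test 2: income below 90 percent of the area median (95 percent for
    # rural tracts), regardless of minority share.
    return ratio < (0.95 if rural else 0.90)

# Example: a metropolitan tract at 85 percent of area median income counts
# toward the goal regardless of its minority share.
assert is_underserved(34_000, 40_000, minority_share=0.10, rural=False)
```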
Fannie Mae officials told us that their standard business practices, in addition to their special programs, provided benefits to customers with characteristics similar to the targeted groups. For example, because the fees they charge on MBS may not be risk-based, borrowers who make high down payments may be charged more, and those who make low down payments less, than they would be charged if fees were truly risk-based. Fannie Mae officials said that the general intent of such a cross-subsidy would be to facilitate first-time homeownership. They also indicated that this form of cross-subsidy is not systematically related to borrower income. Fannie Mae officials said that whatever cross-subsidization affected their targeted lending programs was due to the extra administrative costs of these programs. About 8 percent of Fannie Mae’s 1994 purchases were accounted for by targeted lending programs. The officials stated that the benefits for targeted borrowers tend to be intangible, compared with explicit subsidy programs, where benefits can be more easily quantified. The expected benefit of many of these outreach efforts is to bring more households, including future generations, into the housing finance and homebuying system. Freddie Mac has a number of pilot programs designed to identify cost-effective methods to expand housing opportunities. The intent is to identify such methods for subsequent implementation in standard Freddie Mac mortgage products. The programs’ primary emphasis is on identifying inefficiencies in mortgage markets that could result from possible discrimination and arbitrary underwriting standards. Freddie Mac officials said they generally view HUD’s social goal regulatory enforcement as a monitoring device to verify that the enterprises are serving all parts of the primary mortgage market rather than as a device that has a substantial independent effect on their allocation of mortgage credit. Freddie Mac officials emphasized the role of their special affordable targeted lending initiatives as pilot programs meant to identify cost-efficient methods to expand homeownership opportunities. For example, Freddie Mac officials emphasized their Underwriting Barriers Outreach Group (UNBOG) activities in reaching out to prospective homebuyers and expanding homeownership opportunities. UNBOG created focus groups comprising members of organizations involved in community lending issues. The participants were asked which Freddie Mac underwriting standards were perceived as barriers to community lending, especially in communities that could be considered underserved. On the basis of the responses of the focus group participants, Freddie Mac clarified underwriting standards that were perceived as creating barriers to lending in particular communities. The clarifications apply to Freddie Mac’s standard purchase programs. This effort appears to be consistent with Freddie Mac’s philosophy that its major mission is to make sure that all parts of the primary mortgage market are served by its products. Fannie Mae is devoting extensive resources to special programs to meet social goal requirements and help fulfill its housing mission. Freddie Mac is devoting extensive resources to pilot programs and related activities, such as its Underwriting Barriers Outreach Group program, to expand housing opportunities generally and for areas and groups that are perceived to be underserved. Privatization would likely cause a decline in such efforts by the enterprises.
However, neither we nor the enterprises are able to quantify the impacts of these efforts on housing affordability and homeownership opportunities among different borrowers. Whatever quantification of these efforts exists is generally a measurement of resource commitments and not outcomes, such as the impacts on mortgage interest rates and housing affordability for targeted groups. A recent Federal Reserve Board study estimated the amount of credit risk on lower income and minority borrower mortgages taken on by different participants in the mortgage market. Although the study does not measure outcomes related to housing affordability and homeownership opportunity, it does estimate the supply of one of the more important inputs affecting the supply of mortgage credit, namely the ability and willingness to undertake credit risk. The authors expected that the enterprises would promote homeownership among lower income households more than entities such as depository institutions. They found, however, that depositories take on more of the total credit risk associated with lower income lending than the enterprises. From this they concluded: “The difference may arise because Fannie Mae and Freddie Mac, unlike depositories, generally have no interactions with borrowers and are not located in the neighborhoods where the mortgages are originated; thus they lack the opportunity to look beyond traditional measures of risk.” Thus, the enterprises, as secondary market participants, may not be as well situated as primary lenders to distinguish more creditworthy targeted borrowers from less creditworthy ones. There are other reasons why knowing the extent of the enterprises’ resource commitments is not sufficient to allow quantification of the program outcomes for targeted borrowers. First, it is necessary to determine how much the social goals serve to increase enterprise funding levels to targeted borrowers compared with what they would have been without the goals. On this score, we observe that the enterprises have increased mortgage funding to targeted groups. Even if some of this increase is due to other economic factors, the goals have likely caused part of this expansion. Second, it would be necessary to determine how mortgage originations by other lenders, namely depository institutions that undertake portfolio lending and mortgage bankers who originate federally insured mortgages for Ginnie Mae mortgage pools, are affected by and respond to this change in funding. On this score, we are uncertain. Assuming privatization and no adjustment or change in any federal mechanism supporting housing affordability and homeownership, mortgage interest rates would likely increase by about 15 to 35 basis points on average, with larger increases likely for homebuyers making relatively small down payments (as discussed in ch. 3). One of the five studies commissioned to help assess privatization analyzed the implications of higher mortgage interest rates for housing affordability and homeownership. The authors developed an economic model in which underwriting requirements created constraints (such as minimum down payments or monthly payment-to-income ceilings) that would keep some prospective borrowers from purchasing a home of the size and value they would be expected to prefer on the basis of household characteristics and expected future income patterns.
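As a stylized illustration of how a payment-to-income ceiling can bind when rates rise, consider a borrower near the limit. This example is not drawn from the commissioned study; the loan amount, income, and 28-percent ceiling are our assumptions, chosen to show the mechanism.

```python
# Stylized illustration of a monthly payment-to-income underwriting ceiling.
# This is not the commissioned study's model; the loan amount, income, and
# 28-percent ceiling are assumptions chosen to show how a constraint binds.

def monthly_payment(principal: float, annual_rate: float, years: int = 30) -> float:
    """Standard fixed-rate mortgage payment with monthly compounding."""
    r = annual_rate / 12
    n = years * 12
    return principal * r / (1 - (1 + r) ** -n)

loan = 100_000.0    # assumed loan amount
income = 2_700.0    # assumed gross monthly income
ceiling = 0.28      # assumed payment-to-income limit

for rate in (0.0800, 0.0850):  # a base rate and the base rate plus 50 basis points
    payment = monthly_payment(loan, rate)
    ratio = payment / income
    status = "qualifies" if ratio <= ceiling else "fails the ceiling"
    print(f"rate {rate:.2%}: payment ${payment:,.2f}, ratio {ratio:.1%} -> {status}")
```

At 8 percent, the payment-to-income ratio is about 27 percent and the hypothetical borrower qualifies; at 8.5 percent, it rises to about 28.5 percent and the borrower no longer meets the ceiling. This is the mechanism through which a rate increase can exclude marginal buyers.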
The authors performed a statistical simulation to estimate the impact of a 50 basis point increase in mortgage interest rates on homeownership. They used 50 basis points as an upper bound of how much privatization could affect mortgage rates. The authors estimated that their baseline homeownership rate of 63.6 percent would have been about 1.2 percentage points lower if mortgage interest rates had been 50 basis points higher as a result of privatization (see table 4.2). The estimated impacts on minority, low-to-moderate income, and young households would have been more pronounced; the respective estimated downward impacts on the homeownership rate would have been about 2.8 (from a baseline of 43.9 percent), 2.6 (from a baseline of 45.7 percent), and 3.5 percentage points (from a baseline of 33.7 percent). The authors’ statistical analysis indicated that the primary impact on homeownership rates of the interest rate increase associated with privatization was due to an increase in the relative cost of homeownership compared with the cost of rental housing. The remainder was accounted for by the estimated impacts of privatization on down payment and monthly payment-to-income constraints associated with underwriting standards. For example, with higher mortgage interest rates, more potential homeowners would find that their ratio of mortgage payments to income would be above current underwriting standards. Because the authors compared the cost of homeownership with the alternative of renting, the relative cost impact depends on the authors’ analysis of the impacts of privatization on multifamily housing. On the basis of limited statistical information, the authors found that privatization would not have a significant impact on mortgage interest rates on multifamily housing. As a result, they concluded that multifamily housing concerns should not be the basis of policy decisions on privatization. In addition, this conclusion affected the authors’ estimates of the effects of privatization on homeownership, because the most important variable used to estimate the homeownership impact is the relative cost of owning versus renting a housing unit. Their estimates assume that privatization would increase single-family mortgage interest rates, and therefore the cost of owning, but would not have any significant effect on multifamily mortgage interest rates, and therefore the cost of renting. If the cost of rental housing were affected by privatization proportionally to the cost of owner-occupied housing, the relative cost of owning versus renting would be unaffected, and most of the estimated impacts on homeownership rates would disappear. In addition, privatization could cause mortgage interest rates on single-family rental housing, and thus rental costs on such housing, to increase. The primary reason why it appears unlikely that the supply of multifamily housing would be affected by privatization is that the enterprises currently finance little multifamily activity and, with privatization, would be more likely to do less rather than more. The enterprises’ purchases of multifamily mortgages represent a small share of their total purchases. Fannie Mae’s multifamily mortgage purchases totaled $4.8 billion in 1994, about 3 percent of its total purchases. Fannie Mae officials told us that the $4.8 billion was about 11 percent of total 1994 multifamily mortgage originations.
They added that Fannie Mae’s 1995 multifamily purchases were $6.5 billion, which was about 20 percent of total multifamily originations. Freddie Mac purchased $913 million in multifamily loans in 1994, less than 1 percent of its total purchases. Unlike in the jumbo market, there are no prohibitions or constraints keeping the enterprises from expanding in this area. It is their existing and prospective social goals that are motivating much of the multifamily financing they are currently doing or are contemplating for the future. Without those goals, they would probably do less. However, due to their limited role, such a reduction or withdrawal is not likely to have much effect on either the supply or the rental cost of multifamily housing. The authors also distinguished between the impacts of privatization on current ownership and on when households first become homeowners. Borrowing constraints created by mortgage underwriting standards can be overcome if households save for a down payment over a longer period. In addition, borrowing constraints tend to be greater for households with low current income who have relatively high levels of expected income in the future, because the optimal house that first-time homebuyers purchase is dictated in part by their expected future incomes. The authors found that a majority of the households with low current income had relatively high levels of expected future income. Therefore, it can be expected that one of the primary impacts of privatization on homeownership would be to delay rather than permanently preclude homeownership for the group of households with low current income and relatively high expected future income. Even if privatization’s effect on interest rates were only to delay and not preclude homeownership, such a delay could still have social costs. Among the many reasons stated by Members of Congress for providing favorable tax and financial treatment to homeownership is the belief that owning a home fosters wealth accumulation and family stability. If so, then attaining homeownership at a younger age by households with relatively low but rising incomes could help promote such social goals. Furthermore, the attainment of homeownership by households with low incomes that are not expected to increase could yield wealth accumulation and family stability over protracted time periods, as well as other benefits, such as fostering stronger community ties among neighborhood residents. HUD’s social goal regulation of the enterprises represents one of a number of federal government mechanisms that support housing affordability and homeownership. Various federal agencies support homeownership. For example, FHA and VA lower ownership costs by guaranteeing mortgages with favorable terms for qualified individuals. Ginnie Mae guarantees timely payment of principal and interest from mortgage pools of FHA- and VA-insured mortgages. The Federal Home Loan Bank System lends to mortgage lenders so they can originate and fund mortgages. Federal financial institution regulators also have responsibilities under the Community Reinvestment Act to encourage banks and thrifts to help meet the credit needs in all areas of their communities, including low- and moderate-income areas. These regulators also enforce fair lending laws that prohibit discriminatory lending practices. Privatization would likely provide the enterprises with new incentives, including an altered cost structure and few, if any, restrictions on their activities.
As discussed in chapter 3, the resulting secondary market entities would likely operate as conduits rather than operate directly in the primary market or hold many mortgages in portfolio. They would also not be likely to develop low down payment mortgage products or purchase and securitize multifamily mortgage products. The enterprises’ programs aimed at targeted groups, in general, are more costly than their standard business. Fannie Mae officials told us that most of their targeted lending products were more costly than standard mortgage products. For example, in our review of the enterprises’ targeted lending programs, we found that default rates were substantially higher on purchases through those programs. We examined Fannie Mae mortgage default and borrower targeting statistics comparing targeted lending programs and standard business for mortgages purchased in 1994. The difference in the default rates appears to result from the higher loan-to-value ratios and the easing of other underwriting restrictions in the targeted lending programs. This finding is consistent with preliminary analysis at the Office of Federal Housing Enterprise Oversight (OFHEO) indicating that enterprise-funded loans default more often than other mortgages purchased by the enterprises when the loans have loan-to-value ratios above 90 percent and are in census tracts in which incomes are below the metropolitan area median income and more than 30 percent of residents are minority group members. As designed, Fannie Mae’s targeted lending programs purchase larger shares of loans made to low-income, central city, minority, and first-time homebuyer borrowers compared with its standard business. As a result, privatization is likely to reduce the significant resources the enterprises are currently expending on these targeted borrower programs. Because neither we nor the enterprises have been able to quantify the impact of these efforts, however, it is difficult to know whether privatization would have a significant effect on affordability or homeownership opportunities among targeted groups. The potential impacts of privatization on social goal attainment depend, in part, on how well targeted borrowers would be able to obtain financing from depository institutions and primary lenders who originate FHA-insured loans. The FHA single-family mortgage insurance program serves many lower income, minority, and central city borrowers, and these loans are securitized in Ginnie Mae MBS. It is not clear how well these FHA programs serve or could serve targeted borrowers compared with how the enterprises, without privatization, would serve similar borrowers. Likewise, the FHA multifamily insurance program is a possible policy alternative to multifamily products now being developed by the enterprises. However, the potential increased reliance on FHA and VA programs resulting from privatization could increase the total risk of these programs. Even if privatization occurred and alternative policy levers could not be developed for the ensuing secondary market participants, there would be other mechanisms available for achieving such goals. For example, financial institution regulators could develop new Community Reinvestment Act requirements that improve the incentives depository institutions face for originating mortgages to targeted groups that are sold in the secondary market. Mortgage bankers are not subject to regulations such as the Community Reinvestment Act (CRA).
Some mortgage bankers have entered into agreements with HUD concerning the distribution of their mortgage origination activities. The current social goal regulations motivate the enterprises to compete for loans originated by mortgage bankers to designated borrowers. Privatization may sever this tie. Therefore, if privatization occurred, it may be that some new mechanism could be created to give mortgage bankers incentives to originate mortgages to these targeted groups. In oral comments on a draft version of this chapter, a Fannie Mae official said that Fannie Mae appreciates the report’s recognition of the commitment that the organization has made to affordable housing and targeted financing overall. However, he said that the draft report was inexplicably reluctant to draw unqualified conclusions about the success of the enterprises’ efforts in promoting homeownership for targeted groups. Additionally, he said that privatization would result in higher rental costs for occupants of multifamily residences, and he said that Home Mortgage Disclosure Act (HMDA) data have substantial limitations for assessing the enterprises’ efforts to promote homeownership among targeted groups. The Fannie Mae official also said that the draft report neglected to mention that increased reliance on other federal programs designed to promote homeownership, such as FHA and VA, would increase the risks of a taxpayer rescue. Fannie Mae officials also provided technical comments, which we have incorporated where appropriate. Freddie Mac officials said that privatization would result in higher rental costs because owners of single-family rental housing would pass increased mortgage rates on to their tenants. The Fannie Mae official said that the draft report ignored substantial evidence that the enterprise’s commitment to the housing goals has increased homeownership opportunities for targeted groups. For example, he said that Fannie Mae provided tracking data for the years 1993 to 1995 that clearly show the enterprise’s overall share of business serving low- and moderate-income groups has increased consistently. He also said that there are quantifiable measures of the success of Fannie Mae’s efforts to make mortgage financing more affordable for certain targeted groups; for instance, the use of higher debt-to-income and loan-to-value ratios means that targeted groups can more easily qualify for mortgages. Moreover, he said there is no reason to expect that the social goals would be retained in any form in the event of privatization. He noted that private sector conduits that perform functions similar to the enterprises’ are not subject to social goal requirements. The Fannie Mae official also disputed the Wachter and Follain finding that privatization would not have a significant effect on mortgage interest rates for multifamily housing. He said that Fannie Mae’s commitment to this market predates the social goals, but its extensive innovation and outreach efforts would likely be curtailed in the event of privatization. He said that this would have genuine effects on capital availability for affordable rental housing development. In addition, the Fannie Mae official said that the enterprise would probably respond to privatization by curtailing the more flexible credit tests and higher loan-to-value ratios that the enterprise currently uses to increase its participation in the market for multifamily housing.
The Fannie Mae official further commented that HMDA data are not a reliable basis for determining, as the draft report stated, that the enterprises lagged other mortgage market participants in providing credit to low- and moderate-income groups. For example, he said that many such mortgages that the enterprises purchase are not credited to them in the HMDA data. He attributed this shortcoming to the fact that HMDA records only the first sale of a mortgage and does not cover some mortgage lenders, such as a mortgage affiliate or other player that eventually sells the mortgage to Fannie Mae or Freddie Mac. The Fannie Mae official also commented that the FHA and VA mortgage programs and other options we listed could not possibly substitute for the dollar volume commitments that the enterprises make each year to purchase low- and moderate-income mortgages. Moreover, he said that relying further on these programs does not necessarily represent good public policy, because it would shift potential loss liabilities directly to the federal government and the taxpayers; he added that the options are not viable. He also said that it is highly speculative to assume that Congress would enact CRA-type requirements for the enterprises in the event of privatization. Freddie Mac officials also said that potential taxpayer risks would increase with privatization due to increased reliance on the FHA and VA programs, as well as insured depository institutions. Further, they stated that privatization would result in depositories increasing their use of FHLB advances, generating additional taxpayer risks. The Freddie Mac officials also said that renters would likely face higher housing costs in the event of full privatization because the owners of single-family rental properties would pass increased mortgage costs on to their tenants. We believe it is still too early to measure the impact of the enterprises’ social goals on the provision of additional housing finance to targeted groups. For that reason, in the report we presented information we obtained during this assignment on the resource commitments the enterprises are making to fund mortgages serving targeted borrowers. In addition to not being able to draw unqualified conclusions about the effects of existing programs, it is even more difficult to predict the effect of privatization, largely because we do not have enough information to predict (1) how eliminating the enterprises’ social goal obligations would interact with other federal mechanisms, (2) what requirements HUD would have set in the future without privatization, and (3) the market impact of eliminating social goals on housing affordability. We do not have a basis for knowing whether the limited coverage of HMDA biases estimates of the enterprises’ contributions to funding mortgages to targeted borrowers. The increased reliance on FHA and VA programs resulting from privatization could increase the total risk of these programs, although it could also lower their average level of risk if the enterprises’ expanded efforts are taking away the more, rather than the less, profitable business of these federal insurance programs. We acknowledge that privatization could cause mortgage interest rates on single-family rental housing, and thus rental costs on such housing, to increase. The report indicates that the enterprises’ overall share of business serving low- and moderate-income groups has increased consistently.
We do not know how much this has increased homeownership opportunities for targeted groups, although the results from the commissioned study by Wachter and Follain, as discussed, indicated that privatization could reduce homeownership opportunities. The best evidence we have available to assess the enterprises’ impact on mortgage interest rates for multifamily housing is from the Wachter and Follain study, which concludes that privatization would not radically alter the current situation. We do not think it is clear how well other federal programs and other mortgage providers could fill the void that could result from privatization. The enterprises fund many mortgages, including those serving targeted groups. With privatization, some of this activity would be curtailed. The Federal Reserve Board study on credit risk referred to in this report suggests that depository institutions may be able to profitably serve some of these affected borrowers. We do not know how much extra business FHA programs could face if the enterprises were privatized. Privatization of the enterprises would clearly be a major policy change. As such, it would require a careful examination of the benefits and costs and would involve difficult policy choices that only Congress can make. Should Congress decide that privatization is worth pursuing, there are a number of ways it could structure the transition to privatization. Each of these has advantages and disadvantages. For example, an approach designed to be least disruptive to the mortgage market might leave institutions that were still perceived as too big to fail. As a result, such an approach might not fully break the government ties that cause the market to perceive an implied guarantee. Alternatively, an approach that more effectively broke those ties by breaking up the privatized enterprises into smaller companies could reduce some of the potential benefits from mortgage standardization and maintenance of liquidity in the market. Privatization is only one alternative to the status quo. There are other policy options, short of privatization, that would adjust the enterprises’ activities or responsibilities so as to increase the potential public benefits generated by government sponsorship or to decrease the size of enterprise activity or the riskiness of that activity to the government. The latter could reduce the potential cost should the federal government ever decide to bail out a failing enterprise. We selected alternatives from among those that appeared to be the most frequently mentioned in the available literature while attempting to identify a variety of approaches. The four alternatives we discuss are lowering or freezing the conforming loan limit, increasing minimum requirements for mortgage insurance coverage, charging the enterprises for the government’s risk exposure, and authorizing another government-sponsored enterprise to compete with Fannie Mae and Freddie Mac. Should Congress decide to privatize the enterprises, it would be important to achieve a clear and deliberate elimination of the special benefits and restrictions the enterprises have under their current federal charters. To be successful, the legal transition to privatization would need to be structured to eliminate investor perceptions of an implied federal guarantee so that other private companies could compete in the secondary mortgage market on a level playing field.
This perception helps explain why the enterprises are the only two important competitors in the conventional, conforming secondary mortgage market. Privatization would be more likely to lead to more secondary market competitors if the enterprises’ special advantages were clearly removed. A transition to privatization would have to deal with a number of trade-offs. First, the number of successful competitors would be determined in part by the structure of the transition. If Congress were to create more competitors initially, this could act to reduce market liquidity and standardization. However, the number of competitors that ultimately prevail in the secondary market would be partly limited by market forces, including how much investors value market liquidity. Second, the newly privatized enterprises would need the managerial, capital, and other resources necessary to be successful going concerns without preventing entry into the conforming secondary mortgage market by potential competitors. Engineering the restructuring necessary for the transition would require extensive legal and financial expertise. This engineering would also involve trade-offs among competing objectives and create policy challenges. Generally, the larger the new enterprises are, the greater the risks that (1) investors would continue to perceive an implicit federal guarantee, because the enterprises could be considered too big to fail, with an increased potential cost to taxpayers if the government rescued an enterprise; and (2) the enterprises, because of their size and the possible remaining perception of an implicit federal guarantee, would exercise market power in business activities outside of the secondary mortgage market for conventional, conforming residential mortgages. One approach would be to make each enterprise a holding company with two subsidiaries—one subsidiary conducting the liquidation of old (that is, preprivatization) business and the other conducting new business. The proposed privatization of the Student Loan Marketing Association (known as Sallie Mae) contains such a structure. Segregating securities created under government sponsorship from new private entity securities would help sever the perceived implied federal guarantee on post-privatization business, although it could strengthen the tie on old business. If outstanding debt and MBS previously issued by the enterprises as government-sponsored entities were segregated, market stability and liquidity would be less likely to be jeopardized, because the liquidating subsidiary’s securities would be more likely to keep their current government-sponsored status. In addition to this option, the study commissioned by CBO assessed other possible approaches to restructuring. These included (1) creating two separate privatized companies that would receive an allocation of resources, along with government actions to liquidate the terminating government-sponsored enterprises; and (2) creating a number of separate privatized companies by breaking up Fannie Mae and Freddie Mac into smaller operating companies, followed by restructuring to remove government sponsorship from the successor companies. The first of these options may be more likely than the “old company, new company” approach to prevent the perception of an implied federal guarantee on new business, because all of the old obligations that were thought to have the implied guarantee would be liquidated.
However, liquidating such a large amount of existing debt and MBS could disrupt financial markets. The second option could be the most conducive to ensuring competition and to eliminating the “too big to fail” perception, because there would presumably be a larger number of smaller companies created out of the current enterprises. However, forcing the new companies to be small could reduce efficiencies associated with standardization and liquidity. To address how the enterprises’ activities or responsibilities might be adjusted to increase the public benefits or to reduce the overall size of enterprise debt or the probability that the government may have to rescue a failing enterprise, we examined four policy options. We identified a range of policy alternatives from our examination of the policy literature. The four alternatives we chose to discuss involve trade-offs among competing policy interests, should not be construed as our proposals, and by no means exhaust the possible policy alternatives Congress may want to consider. The list includes (1) lowering or freezing the conforming loan limit, (2) increasing minimum requirements for mortgage insurance coverage, (3) charging the enterprises for the government’s risk exposure, and (4) authorizing another government-sponsored enterprise to compete with Fannie Mae and Freddie Mac. Lowering or freezing (i.e., not allowing inflationary adjustments to) the conforming loan limit would likely have a number of effects. First, it could reduce the amount of enterprise activity without greatly limiting the ability of the enterprises to diversify risk, and thereby should reduce the potential taxpayer risk in the event of a government bailout. This reduction could be offset somewhat, because some of the activity that currently fits under the conforming label but would not fit under the tighter ceiling may end up in the portfolios of depositories rather than being securitized. To the extent this occurs, there could be an increase in potential taxpayer exposure. For example, depositories taking on more credit risk could raise the risk exposure of the deposit insurance funds. If the depositories are members of the Federal Home Loan Bank System that receive additional advances, the potential taxpayer exposure of this system could increase. Second, mortgage interest rates for borrowers that would shift from conforming to jumbo mortgage status would probably increase. There is currently an interest rate spread between fixed-rate conforming and jumbo mortgages. The study commissioned by HUD examining this spread predicted that a 10-percent decline in the conforming loan limit would likely lead to an increase in mortgage interest rates on affected mortgages near the lower end of the 25 to 40 basis point range. Third, there could be a decline in mortgage interest rates for the remaining jumbo market to the extent that private-label conduits would choose to expand and become better able to geographically diversify their funding. The expected decline in mortgage interest rates would still, however, probably leave jumbo rates above those on conforming mortgages. The enterprises are not allowed to purchase mortgages with loan-to-value ratios above 80 percent unless the borrower obtains mortgage insurance. In 1995, the enterprises changed their underwriting guidelines and now require greater insurance coverage on mortgages with loan-to-value ratios exceeding 85 percent.
If Congress legislated higher requirements for mortgage insurance coverage, the enterprises would be exposed to less credit risk. Simply put, when mortgage defaults occurred, more of the burden would fall on private mortgage insurers, which have no federal ties, and less would fall on the enterprises. The reduced risk taken on by the enterprises would reduce the likelihood that the enterprises would need to be bailed out, and the potential risk to the taxpayer would be reduced as well. Mortgage interest rates would likely increase, if for no other reason than that the capital costs of private mortgage insurers tend to be higher than the enterprises’ costs because private insurers have no federal ties. Mortgage interest rates would likely increase more for borrowers making downpayments below any legislated minimum, because private mortgage insurers charge fees that are more fully risk-based than the guarantee fees charged by the enterprises. An alternative type of policy approach would charge the enterprises to compensate, in whole or in part, for the risk exposure that their activities generate for the government and taxpayers. One such alternative is a fee, sometimes referred to as a user fee, that could provide a full or partial offset for the estimated benefits received from government sponsorship. Levying user fees on the value of enterprise debt and MBS issuance could be thought of as compensating taxpayers for the possibility that they might be asked someday to come to the rescue of a failing enterprise. User fees could be passed on to borrowers in part or in whole and result in higher interest costs. The net effects would depend on the level of user fees. User fees on the enterprises could help level the playing field between the enterprises and private-label conduits and motivate these conduits to securitize conforming mortgages, because the cost of funds differential would be reduced. CBO analyzed the federal revenue consequences of user fees on enterprise debt and MBS. CBO’s revenue projection was based on estimates indicating that the enterprises probably save more than 30 basis points on their debt and more than 5 basis points on their MBS. Annual revenue from a user fee equal to half of the dollar amount of estimated funding cost savings was estimated to be about $700 million. Passing on part or all of this payment to borrowers would raise mortgage interest rates. Determining the correct level of such a fee would be difficult because of problems associated with measuring the value of the funding cost savings resulting from investors’ perception of an implied guarantee. Another difficulty is determining the possible interaction between a user fee and regulatory capital charges. OFHEO told us that user fees set through legislation are a fairly blunt instrument, while the risk-based capital requirements that OFHEO is developing could be flexible over time. Both user fees and capital requirements increase the cost of capital to the enterprises, which can, in turn, pass on to borrowers some or all of these costs in the form of higher guarantee fees and interest rates. If Congress legislated user fees, OFHEO’s ability to set capital charges to manage enterprise risk taking could be affected, because both actions would increase enterprise costs and could contribute to higher mortgage interest rates. In other words, user fees and capital requirements must be viewed in conjunction with one another to determine cost impacts on the enterprises and residential mortgage borrowers.
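To make the arithmetic behind CBO's projection concrete, the following sketch (in Python) reproduces the user fee calculation under stated assumptions. The outstanding debt and MBS balances are hypothetical round numbers chosen only to illustrate how basis point savings translate into fee revenue; they are not CBO's actual inputs.

    # Illustrative sketch of the user fee revenue arithmetic described above.
    # The outstanding balances are hypothetical placeholders, not CBO's data;
    # CBO's own estimate of annual revenue was about $700 million.

    DEBT_SAVINGS_BP = 30  # estimated funding cost savings on debt (basis points)
    MBS_SAVINGS_BP = 5    # estimated funding cost savings on MBS (basis points)
    FEE_SHARE = 0.5       # user fee set at half of the estimated savings

    def annual_user_fee_revenue(debt_outstanding, mbs_outstanding):
        """Return annual user fee revenue in dollars."""
        savings = (debt_outstanding * DEBT_SAVINGS_BP / 10_000
                   + mbs_outstanding * MBS_SAVINGS_BP / 10_000)
        return FEE_SHARE * savings

    # Hypothetical balances: $300 billion in debt, $1 trillion in MBS.
    print(annual_user_fee_revenue(300e9, 1_000e9))  # 700000000.0, or $700 million

With these illustrative balances, half of the combined funding cost savings comes to $700 million a year, matching the order of magnitude of CBO's estimate.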
A somewhat different approach to compensating the government for its risk exposure would attempt to make that exposure more symmetrical than it is currently. If the government felt the need to rescue a failing enterprise, clearly it would face “downside” risk. However, when the result of enterprise risk-taking is additional income, the government shares only to the same extent it shares with any private company, that is, through increased corporate income tax revenue. One way to make the government’s payoff more symmetrical would be for the government to receive a greater share of income in good times to make up for the possibility that it would have to come to the rescue if the enterprises faced bad times. The effects of such a payment would depend on how it was structured. For example, if it were simply a surtax on corporate income, it could end up being passed on to borrowers in the mortgage market or passed back to shareholders. It could also raise the relative cost of equity capital compared to debt capital and further reduce the incentives of the enterprises to hold equity in the absence of safety and soundness regulation. Privatization would in essence eliminate the enterprises as government-sponsored entities. The three preceding alternatives to privatization would either decrease the government’s risk exposure from enterprise activities or compensate the government in whole or in part for that exposure. A fourth alternative would attempt to increase the public benefits from enterprise activity by lowering mortgage rates through increased competition among enterprises. This alternative would entail authorizing another government-sponsored enterprise with a similar charter, subject to the same regulatory requirements, to compete with Fannie Mae and Freddie Mac. This could increase the overall size of enterprise activity in the mortgage market and, as a result, raise the potential amount at risk in the event of a government bailout. It could also increase the level of enterprise risk because entities operating in new markets often have greater managerial and operations risk than those operating in established markets. In addition, there could be increased credit risk if the new entity attempted to establish market share by lowering underwriting standards. Any other potential effects of a third competing enterprise would depend on whether Fannie Mae and Freddie Mac have market power. If they do not, there is little in the way of efficiency gains to expect from a new competitive force in the market. However, to the extent there is market power, a third competing enterprise could put pressure on the existing enterprises to lower mortgage rates. In addition, because increased competition could motivate fuller use of risk-based guarantee fees, it could reduce the ability of the enterprises to achieve social goals to the extent that attainment requires charging targeted groups less than fully risk-based fees. HUD could still set performance measures to attain social goals with increased competition. The possible decline in profit levels and increased use of fully risk-based guarantee fees, however, could lessen (1) HUD’s ability to set demanding performance measures to attain social goals and (2) the ability of the enterprises to unilaterally cross-subsidize funding activities to help achieve their missions.
Pursuant to a legislative requirement and a congressional request, GAO examined the potential effects of privatizing the Federal National Mortgage Association (Fannie Mae) and the Federal Home Loan Mortgage Corporation (Freddie Mac). GAO noted that: (1) the privatization of Fannie Mae and Freddie Mac would have a major impact on both the secondary and primary mortgage markets; (2) if the two government-sponsored enterprises lost the benefits of their federal charters, their costs would increase because they would become responsible for paying Securities and Exchange Commission registration fees on their securities and state and local taxes; (3) the enterprises' borrowing costs would increase by 30 to 106 basis points if the perceived federal guarantee on their mortgage-backed securities (MBS) and debt were completely eliminated; (4) these increased costs would be passed on to homebuyers, and the average mortgage interest rate would increase by 15 to 35 basis points; (5) eliminating the cost advantages of federal sponsorship could spur more competition and retain liquidity in the secondary market because firms would find it profitable to purchase and securitize conforming mortgages; (6) privatization could stabilize the securities market and prevent it from experiencing regional disparities; (7) privatization could adversely affect the enterprises' financial performance because they would be more dependent on their strategic business decisions and the quality of their management; (8) low- and moderate-income borrowers would be most affected by the enterprises' privatization because the enterprises' obligations to support credit to these groups would be eliminated; and (9) alternative initiatives should be studied to limit the risk to taxpayers.
As shown in figure 1, expenditures for the section 515 program increased throughout the 1970s, peaked in 1979, and fell sharply after that. In recent years, the program has received about $115 million annually and has allocated $55 million for new construction, $55 million for rehabilitation, and $5 million for equity loans. The President’s budget for fiscal year 2003 proposes to eliminate the new construction funding. The number of units added to the portfolio each year has followed the funding curve. During the peak funding years, over 20,000 new units were added to the portfolio annually. Fewer than 5,000 new units have been produced annually since 1995. In 1998, RHS created the Office of Rental Housing Preservation to administer the prepayment program. The office’s tasks, mandated in the Housing and Community Development Act of 1992, include improving the effectiveness and integrity of the agency’s prepayment and preservation processes. As of fiscal year 2001, the average size of an RHS property was 27 units. About 8 percent of the properties, comprising about 5 percent of the units, were owned by small operators, often families, while most of the other properties had a more complex ownership structure—typically a managing partner, who owned 5 percent of the property, and many limited partners with smaller shares. About half of the section 515 units receive RHS rental assistance, which makes up the difference between 30 percent of the assisted household’s income and the unit’s rent. About 14 percent of section 515 units have HUD project- or tenant-based section 8 rental subsidies, which cover the difference between tenants’ payments and fair-market rents, as determined by HUD on the basis of an annual survey of rents in over 2,700 market areas. Therefore, in areas where fair-market rents are typically higher than the rents approved by RHS, section 515 properties with section 8 assistance usually generate more income for the owners. Both RHS and HUD provide project-based rental assistance, meaning that the assistance stays with the unit. HUD’s section 8 voucher program provides tenant-based vouchers, meaning that the assistance stays with the tenant and is portable—households can use vouchers to rent any affordable units that meet HUD’s housing quality standards. In the program’s early years, it was expected that the original loans, which are amortized over 40 or 50 years, would be refinanced before major rehabilitation was needed. However, with prepayment restrictions and limited rental assistance and rehabilitation funds, this original expectation has not been realized. To maintain the properties in good condition, RHS relies on owners to put aside funds in a reserve account. RHS requires borrowers to place 1 percent of the original cost of the properties into the reserve account each year for the first 10 years, until 10 percent is held in reserve. The borrower must continue to make contributions to the reserve account to maintain it as withdrawals are made against the account to fund rehabilitation work. RHS is concerned about the adequacy of funding reserves at only 1 percent per year for 10 years and about how to determine exactly what must be done on an ongoing basis to preserve each property. While owners are required to set aside a portion of their rent revenue in a reserve account to provide for modernization needs, these reserve accounts have often not been large enough to adequately provide for major rehabilitation.
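A minimal sketch, assuming a hypothetical property cost and household income, of the two funding formulas described above (the reserve account schedule and RHS rental assistance):

    # Sketch of two section 515 formulas described above. The property cost
    # and household figures are hypothetical; the sketch ignores reserve
    # withdrawals and any interest earned.

    def reserve_balance(original_cost, years):
        """Owners deposit 1 percent of original cost annually for the first
        10 years, until 10 percent of that cost is held in reserve."""
        return min(years, 10) * 0.01 * original_cost

    def rhs_rental_assistance(monthly_income, unit_rent):
        """RHS pays the difference between the unit's rent and 30 percent
        of the assisted household's income (not less than zero)."""
        return max(unit_rent - 0.30 * monthly_income, 0)

    # Hypothetical $1 million property: $10,000 per year, capped at $100,000.
    print(reserve_balance(1_000_000, 4))   # 40000.0
    print(reserve_balance(1_000_000, 12))  # 100000.0

    # Hypothetical household earning $1,000 a month in a $450-a-month unit:
    # the tenant pays $300 and RHS pays the remaining $150.
    print(rhs_rental_assistance(1_000, 450))  # 150.0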
Concerns about the loss of affordable units led Congress to enact legislation designed to keep section 515 properties in the portfolio and to protect low-income tenants from being displaced. Figure 2 details the key legislation. The legislation restricting prepayment of section 515 loans has resulted in litigation. Owners of section 515 properties who wished to prepay their loans pursuant to their original loan agreements and remove their properties from the section 515 program have sued the federal government. The owners claim that the federal government, with the enactment of the legislation and the subsequent refusal by RHS to accept unfettered prepayment, committed a breach of contract and an unconstitutional taking of their properties. The federal government maintains that no such breach occurred. To date, prepayment activity has been minimal. Over 4,550 new properties have entered the portfolio since the 1988 prepayment restrictions went into effect. This number far exceeds the number of properties that have left the portfolio after prepayment. For example, RHS data for fiscal years 1998 through 2001 show that fewer than 100 properties, on average, have left the portfolio each year. Fiscal year 2001 is the only year in which the number of prepayments exceeded the number of properties added to the portfolio. However, this exception reflects a decline in funding rather than an increase in prepayments. RHS officials noted that prepayment requests were particularly limited in 1995, after an RHS administrative notice, citing an application processing backlog and limited funding, discouraged owners from applying for prepayment. Since 1988, the impact of prepayment has been minimized by a statutory restriction stipulating that, under certain circumstances, owners who prepay may not raise the rents of existing tenants for as long as those tenants remain in their units. During fiscal years 1999 through 2001, the owners of 283 properties prepaid their loans. Following prepayment, 86, or about 30 percent, of these properties left the program without restrictions because RHS determined that these properties were not needed in the market area and their departure would not adversely affect housing opportunities for minority households. The loans for 197, or about 70 percent, of the properties were prepaid with restrictions on the rents of RHS-assisted households that would remain in effect as long as these households continued to reside in the properties. The owners of 88 other properties applied for prepayment but decided, instead, to accept RHS incentives to stay in the program for 20 more years. Table 1 shows the prepayments by fiscal year. If the statutory requirement covering loans made before December 15, 1989, were changed to allow prepayment without restriction after 20 years from the date of the loan, we estimate that prepayment could be an option for the owners of 3,872, or about 24 percent, of the 16,366 section 515 properties. This estimate is based on our analysis of three factors that we could measure and that RHS and industry representatives agree would limit the potential for prepayment and conversion to market-rate rents. However, a number of economic constraints on individual properties, which we could not readily measure, would likely limit the number of actual prepayments even further.
Nevertheless, despite these potential constraints, RHS officials are concerned that owners who are dissatisfied with RHS’s procedures and statutory requirements could apply to leave the program if the opportunity arose, even if prepayment were not economically advantageous. As shown in figure 3, as of January 1, 2002, there were 3,772 section 515 properties that had served low-income households for 20 years or that were financed before 1979 and were never subject to a 20-year low-income use restriction. In our analysis, we found that owners of 946 of these properties could consider applying for prepayment. The loans on another 6,457 properties were eligible for prepayment; however, the properties were still subject to a 20-year use restriction expiring between January 1, 2002, and December 15, 2009. We also found that, over the next 8 years, owners of 2,926 of these properties would be able to consider prepayment after they meet the 20-year restriction. The loans made on 6,137 properties on or after December 15, 1989, were not eligible for prepayment because the statute in effect when the loans were made precluded prepayment. Our estimate of the number of properties whose owners could consider prepaying is based on three factors that RHS and industry representatives believe limit the potential for prepaying. These factors are as follows:

Ownership by a nonprofit organization or public entity. Prepaying mortgages in an attempt to gain financially through converting to market-rate rents could conflict with these organizations’ basic mission of providing high-quality, affordable housing for low-income families.

Heavy dependence on RHS rental assistance that would cease upon prepayment. Industry experts and RHS officials in headquarters and the states we visited emphasized that, except in areas where growth has brought unexpected prosperity, high dependence on RHS rental assistance is a strong indicator that a property would have a difficult time maintaining adequate cash flow without such assistance.

Location in a county where the population declined in the 1990s. Such properties most likely would not be able to obtain significantly higher rents in the private market than they are receiving under federal subsidies because the relative lack of population growth reduces demand for housing and keeps rents from rising.

After adjusting for these factors, we determined that the owners of 3,872 properties, or 24 percent of the total properties, could consider prepaying their loans. The number of loans that actually would be prepaid depends on several property-specific factors that we could not readily measure. Factors affecting prepayment potential include whether individual property owners (1) could operate without the subsidized direct loans, (2) had property located in areas where high rental demand has raised market rents above RHS rents, (3) had the funds or financing to meet future capital needs, and (4) could meet any tax obligations they would incur. For example, in 1986, tax laws were changed to eliminate accelerated depreciation. Owners who entered the program before the 1986 tax law change enjoyed the benefits of accelerated depreciation by annually writing off a larger portion of the original value of the property on their tax returns than was permissible after the change. In some cases, owners have fully depreciated their property, leaving them a zero cost basis, instead of the original value of the property, when determining their capital gains liability.
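To illustrate the basis effect just described, the following sketch compares the capital gains computation for a fully depreciated property against one that retains its original-value basis. All figures and the tax rate are hypothetical, and the calculation is deliberately simplified: it ignores depreciation recapture and other adjustments that apply in practice.

    # Simplified, hypothetical illustration of why a zero cost basis raises
    # an owner's capital gains liability upon sale after prepayment.
    # Real tax treatment (e.g., depreciation recapture) is more complex.

    def capital_gains_tax(sale_price, cost_basis, tax_rate):
        gain = max(sale_price - cost_basis, 0)
        return gain * tax_rate

    SALE_PRICE = 1_200_000      # hypothetical sale proceeds
    ORIGINAL_VALUE = 1_000_000  # hypothetical original value of the property
    TAX_RATE = 0.20             # hypothetical capital gains rate

    # Fully depreciated owner (zero basis) versus original-value basis:
    print(capital_gains_tax(SALE_PRICE, 0, TAX_RATE))               # 240000.0
    print(capital_gains_tax(SALE_PRICE, ORIGINAL_VALUE, TAX_RATE))  # 40000.0

Under these assumptions, the fully depreciated owner's liability is six times larger, which helps explain why some owners stay in the program to avoid the tax consequences.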
While these owners enjoyed the write-off benefits associated with the tax savings, their current tax burden can significantly reduce the remaining proceeds. As a result, some owners are staying in the program to avoid the tax consequences. On the other hand, RHS officials are concerned that owners who are dissatisfied with RHS’s procedures and statutory requirements could apply to leave the program if prepayment were allowed, even if the costs exceeded the expected financial benefits. For example, the acting assistant deputy administrator for multifamily housing said he interprets the ongoing lawsuits, and discussions he has had with owners who believe they were mistreated by the government, as a strong indicator that psychological factors might override economic considerations if the law covering loans made before December 15, 1989, were changed. Also, some owners want to get out of the program because they are dissatisfied with RHS’s oversight or because they had planned to use the proceeds from the sale of their properties to fund their retirements. RHS officials were unable to quantify the extent to which these views prevail or could affect the portfolio. RHS officials, however, believe that planned enhancements to the agency’s management systems, scheduled to be completed during the summer of 2002, will allow them to better identify property owners and determine the number of properties in the portfolio that are at risk. The enhancements should also help them better monitor replacement reserves and other property-specific financial matters, which, in turn, could allow them to better predict prepayment potential. Our estimate would also change if HUD tenant-based vouchers were made available or if RHS were able to offer tenant-based vouchers. Owners could then prepay and exit the program but continue to receive federal subsidies for the units where RHS tenants with vouchers chose to remain. In the program’s early years, it was expected that the original loans would be refinanced before major rehabilitation was needed. However, with prepayment and funding restricted, this original expectation has not been realized, and RHS does not know the full cost of the long-term rehabilitation needs of the properties in its portfolio. RHS field staff perform annual and triennial property inspections. However, the inspections identify current deficiencies rather than the long-term rehabilitation needs of the individual properties, and RHS does not know the extent to which reserve accounts will be able to cover long-term rehabilitation needs. Without a mechanism to prioritize the portfolio’s rehabilitation needs, including a process for ensuring the adequacy of individual property reserve accounts, RHS cannot be sure it is spending limited rehabilitation funds as effectively as possible and cannot tell Congress how much funding it will need to deal with the portfolio’s long-term rehabilitation needs. RHS state personnel inspect the exterior condition of each section 515 property annually and conduct more detailed inspections of each property every 3 years. However, according to RHS inspection guidelines, the inspections are intended to identify current deficiencies, such as cracks in exterior walls or plumbing problems.
Our review of selected inspection documents in the state offices we visited confirmed that the inspections are limited to current deficiencies, and RHS headquarters and state officials confirmed that the inspection process is not designed to determine and quantify the long-term rehabilitation needs of the individual properties. RHS has not determined to what extent properties’ reserve accounts will be adequate to meet long-term needs. According to RHS representatives, privately owned multifamily rental properties often turn over after just 7 to 12 years, and such a change in ownership usually results in rehabilitation by the new owner. However, with limited turnover and limited funding, RHS properties rely primarily on reserve accounts for their capital and rehabilitation needs, and RHS officials are concerned that the section 515 reserve accounts often are not adequate to fund the rehabilitation of the properties. Without comprehensive information on the physical condition of all the properties in the portfolio, including the adequacy of the reserve accounts, RHS has been able to provide only a wide range of estimates of the amount of funding needed. An August 2000 RHS internal study estimated that, without increased funding or policy changes, 25 percent of the section 515 properties will no longer be safe and sanitary within 5 years. Further, a 1999 internal study estimated that it would take between $800 million and $3.2 billion to meet the properties’ long-term rehabilitation needs. A background paper by the Millennial Housing Commission on preserving affordable housing notes that a reserve account system, such as the one designed by RHS, would be adequate in the private market, where greater turnover with higher cash flow is the norm. However, the paper continues, such a system is not reasonable in the public housing market, which, by design, does not have the equivalent ability to refinance and generate cash flow. In this regard, the paper noted that reserve systems like RHS’s are generally adequate to cover only between one-third and one-half of long-term capital needs. RHS and industry representatives agree that the overriding issue for section 515 properties is how to deal with the long-term needs of an aging portfolio. Since 1999, RHS has allocated about $55 million in rehabilitation funds annually, but owners’ requests for funds to meet safety and sanitary standards alone have totaled $130 million or more in each of the past few years. Over the past several years, RHS headquarters has encouraged its state offices to allow individual property owners to undertake capital needs assessments and has amended loan agreements to increase rental assistance payments as necessary to cover the future capital and rehabilitation needs identified in the assessments. However, with varying emphasis by RHS state offices and limited funding for increased rental assistance, the assessments have proceeded on an ad hoc basis. As a result, RHS cannot be sure that it is spending these funds as cost-effectively as possible. The August 2000 RHS study highlighting the scope of the long-term rehabilitation problem also recommended that the agency seek funding for a physical-needs-assessment study of the existing portfolio, but no funding was requested. USDA’s fiscal year 2003 budget proposal requests funds for RHS to study its multifamily housing portfolio to determine how future construction could be provided at less cost to taxpayers.
The proposal does not, however, request funds to obtain a comprehensive baseline of the existing portfolio’s long-term capital needs. With little new construction and limited prepayment, maintaining the long-term quality of the aging portfolio has become the overriding issue. While RHS’s practice of allocating its limited funds to properties with documented capital needs has helped properties on an ad hoc basis, RHS does not have a process to determine and quantify the portfolio’s long-term rehabilitation needs. As a result, RHS cannot ensure that it is spending its limited funds as cost-effectively as possible and cannot provide Congress with a reliable, well-supported estimate of the funding needed to deal with the portfolio’s long-term rehabilitation needs. To better ensure that limited funds are being spent as cost-effectively as possible, we recommend that the Secretary of Agriculture direct the RHS Administrator to undertake a comprehensive assessment of the section 515 portfolio’s long-term capital and rehabilitation needs. Further, the results of the assessment should be used to set priorities for the portfolio’s immediate rehabilitation needs and to develop an estimate for Congress of the amount and types of funding needed to deal with the portfolio’s long-term rehabilitation needs. We provided USDA with a draft of this report for its review and comment. RHS’s acting deputy administrator for multifamily housing said that our report was thorough and balanced, and he supported the report’s recommendation. He said that the agency is focusing on developing strategies to address the long-term needs of the portfolio, including building a national database. He said that, given the rapidly aging portfolio, the time is ripe to conduct a comprehensive effort to establish credible cost estimates for long-term capital needs. The acting deputy administrator took issue with two points. First, he said that our draft gave the impression that RHS does not know the rehabilitation needs of the properties. He stated that RHS knows the physical condition of each property in the portfolio from its annual field staff reviews but agrees that the data from the routine inspections are not compiled into a national database that would define long-term portfolio needs. We agree and have revised the report to clarify this point. Second, the acting deputy administrator said that he agrees that heavy dependence on rental assistance would limit prepayments. However, he said that this factor would be less of a deterrent to prepayment if vouchers were made available to prepaying properties. We added language to the report to clarify this point. Unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. At that time, we will send copies of the report to interested congressional committees and members of Congress; the Secretary of Agriculture; the Director, Office of Management and Budget; and other interested parties. We will also make copies available to others on request. If you or your staff have any questions about this report, please contact me at (202) 512-7631. Key contributors to this report are Angela Davis, Bess Eisenstadt, Andy Finkel, Curtis Groves, Rich LaMore, John McDonough, and Tom Taydus.
Our work was based on a review of published data; discussions with officials from the Rural Housing Service (RHS), the Department of Housing and Urban Development, the housing industry, and RHS property owners; and an in-depth analysis of RHS’s prepayment and section 515 files. We also reviewed background papers prepared by the Millennial Housing Commission and attended a roundtable discussion on housing preservation issues sponsored by the Housing Assistance Council. Furthermore, we judgmentally selected and visited RHS offices in Massachusetts, New Hampshire, and Vermont, where we identified factors that could influence prepayment decisions by property owners. As part of determining how many section 515 properties have been prepaid in recent years and the impact of their prepayment on the section 515 portfolio, we identified key laws and regulations affecting the implementation and operation of the prepayment program. Through discussions with agency officials and reviews of independent publications and legal documents, we identified key changes in the section 515 program, including legislative changes affecting prepayment. Where data were available, we determined the number of properties whose loans were prepaid. We also collected detailed funding and unit production information to document changes in the section 515 portfolio since the program began. To estimate the impact of changing the legislation to allow prepayment without restrictions after 20 years, we planned to survey property owners about their prepayment intentions and to obtain specific information from RHS on each property in the section 515 portfolio. However, RHS officials informed us that the information needed to survey the owners was not readily available because RHS’s database did not identify specific owners. In addition, many of the properties are owned by large partnerships whose individual owners are not easily identifiable. While we interviewed a number of section 515 property owners on prepayment issues, we were unable to survey all property owners, and therefore we do not know the extent to which the views of the owners we interviewed are representative of all section 515 owners. RHS also informed us that specific information about individual properties was not readily available because the agency’s accounting systems track loans rather than properties and most properties had more than one loan. However, RHS combined information from three separate accounting systems, which helped us determine the likelihood of prepayment for each property. We were able to isolate 16,507 properties from RHS’s database by identifying loans with the same street addresses and county codes, but we had to drop 141 properties from our analysis because of inaccurate county code information. We reviewed the case files for individual properties at the three RHS state offices we visited. From these reviews and discussions about the properties with state RHS officials, we identified factors that could help determine the likelihood of prepayment. We also compared information from state office case files with information in RHS’s database. To determine the capital and rehabilitation requirements of the section 515 properties, we evaluated RHS reviews that identified the conditions of the properties and the estimated costs of meeting the requirements. We obtained the views of RHS and industry representatives concerning the extent of the rehabilitation needs.
We also documented RHS’s inspection processes for identifying rehabilitation requirements at the properties.
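The property-isolation step described above (collapsing loan-level records into property-level records by matching street addresses and county codes) can be sketched as follows. The field names and sample records are hypothetical, since the actual layout of RHS's database is not documented here.

    # Hypothetical sketch of isolating unique properties from a loan-level
    # database: loans sharing a street address and county code are grouped
    # into one property, and records with unusable county codes are dropped
    # (as 141 properties were dropped in our analysis).

    from collections import defaultdict

    loans = [
        {"street_address": "100 Main St", "county_code": "25017", "loan_id": "A1"},
        {"street_address": "100 Main St", "county_code": "25017", "loan_id": "A2"},
        {"street_address": "200 Elm St", "county_code": None, "loan_id": "B1"},
    ]

    properties = defaultdict(list)
    dropped = 0
    for loan in loans:
        if not loan["county_code"]:  # missing or inaccurate county code
            dropped += 1
            continue
        key = (loan["street_address"], loan["county_code"])
        properties[key].append(loan["loan_id"])

    print(len(properties), "property;", dropped, "record dropped")
    # The two loans at 100 Main St collapse into a single property record.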
Nearly 450,000 elderly and other households depend on federal assistance to live in multifamily rural rental properties that were constructed with subsidized federal loans. Because the properties were built at times and in places where privately financed housing affordable to lower-income households was not considered economically feasible, the U.S. Department of Agriculture's Rural Housing Service (RHS) has made direct loans available to developers of affordable multifamily housing under its section 515 program. RHS has funded many more new properties than the portfolio has lost through prepayment. The number of new properties added to the portfolio exceeded the number that left the program after prepayment in every year except 2001. If the statutory requirement restricting prepayment of loans made before December 15, 1989, were changed to allow prepayment without restrictions after 20 years from the date of the loan, prepayment could be an option for the owners of about 3,900, or roughly 24 percent, of the section 515 properties over the next 8 years. RHS field staff routinely inspect properties, complete and retain detailed descriptions of noted deficiencies, and transmit summaries of the deficiencies identified to a central database. Only current deficiencies are identified, however, so the data are of only limited value for determining the cost of the long-term rehabilitation needs of individual properties.
On February 17, 2002, pursuant to ATSA, TSA assumed responsibility for the security of the nation’s civil aviation system from the Federal Aviation Administration (FAA), including FAA’s existing aviation security programs, plans, regulations, orders, and directives covering airports, air carriers, and other related entities. Among other things, ATSA directs TSA to improve the security of airport perimeters and the access controls leading to secured areas, and to take measures to reduce the security risks posed by airport workers. (See app. II for more specific details on ATSA requirements and TSA’s actions to address these requirements.) TSA has 158 federal security directors (FSD) who oversee the implementation of, and adherence to, TSA requirements at the approximately 450 commercial airports nationwide. As part of its oversight role, TSA also conducts compliance inspections, covert testing, and vulnerability assessments to analyze and improve security. (See app. III for information on how TSA uses compliance inspections and covert testing to identify possible airport security vulnerabilities.) In general, TSA funds its perimeter and access control security–related activities out of its annual appropriation and in accordance with direction set forth in congressional committee reports. For example, the Explanatory Statement accompanying the DHS Appropriations Act, 2008, directed that TSA allocate $15 million of its appropriation to a worker screening pilot program. TSA does not track the total amount of funds spent on perimeter and access controls because related efforts and activities can be part of broader security programs that also serve other aspects of aviation security. In addition, airports may receive federal funding for perimeter and access control security, such as through federal grant programs or TSA pilot programs. (For more information on such airport security costs and funding, see app. IV.) Airport operators have direct responsibility for day-to-day aviation operations, including, in general, the security of airport perimeters, access controls, and workers, as well as for implementing TSA security requirements. Airport operators implement security requirements in accordance with their TSA-approved security programs. Elements of a security program may include, among other things, procedures for performing background checks on airport workers, applicable training programs for these workers, and procedures and measures for controlling access to secured airport areas. Security programs may also be required to describe the secured areas of the airport, including a description and map detailing the boundaries and pertinent features of the secured areas, and the measures used to control access to such areas. Commercial airports are generally divided into designated areas that have varying levels of security, known as secured areas, security identification display areas (SIDA), air operations areas (AOA), and sterile areas. Sterile areas, located within the terminal, are where passengers wait after screening to board departing aircraft. Access to sterile areas is controlled by TSA screeners at security checkpoints, where they conduct physical screening of passengers and their property. Airport workers may access the sterile area through the security checkpoint or through other access points secured by the airport operator in accordance with its security program.
The SIDA and the AOA are not to be accessed by passengers and typically encompass baggage loading areas, areas near terminal buildings, and other areas close to parked aircraft and airport facilities, as illustrated in figure 1. Securing access to the sterile area from other secured areas—such as the SIDA—and security within the area are the responsibility of the airport operator, in accordance with its security program. Airport perimeter and access control security is intended to prevent unauthorized access into secured areas—either from outside the airport complex or from within the airport’s sterile area. Individual airport operators determine the boundaries for each of these areas on a case-by-case basis, depending on the physical layout of the airport and in accordance with TSA requirements. As a result, some of these areas may overlap. Within these areas, airport operators are responsible for safeguarding their airfield barriers, preventing and detecting unauthorized entry into secured areas, and conducting background checks of workers with unescorted access to secured areas. Methods used by airports to control access through perimeters or into secured areas vary because of differences in the design and layout of individual airports, but all access controls must meet minimum performance standards in accordance with TSA requirements. These methods typically involve the use of one or more of the following: pedestrian and vehicle gates, keypad access codes using personal identification numbers, magnetic stripe cards and readers, turnstiles, locks and keys, and security personnel. According to TSA officials, airport security breaches occur within and around secured areas at domestic airports (see fig. 2 for the number of security breaches reported by TSA from fiscal year 2004 through fiscal year 2008). While some breaches may represent dry runs by terrorists or others to test security, or criminal incidents involving airport workers, most are accidental. TSA requires FSDs to report security breaches that occur both at the airports for which they are responsible and on board aircraft destined for their airports. TSA officials said that they review security breach data and report them to senior management as requested, and that they provide data on serious breaches to senior management on a daily basis, as applicable. According to a TSA official, the increase in known breaches from fiscal year 2004 to fiscal year 2005 reflects a change in the requirements for reporting security breaches that TSA issued in December 2005. This change provided more specific instructions to FSDs on how to categorize different types of security incidents. Regarding the increases in security breaches from fiscal years 2005 through 2008, TSA officials said that while they could not fully explain these increases, several factors could account for this growth. For example, according to TSA officials, changes in TSA management often trigger increases in the reporting of specific types of breaches, as occurred after 2004, when the priorities of the new Administrator resulted in an increase in the reporting of restricted items. TSA officials also stated that a report of a security breach at a major U.S. airport is likely to cause security and law enforcement officials elsewhere to subsequently raise the overall awareness of security requirements for a period of time.
In addition, TSA noted that certain inspections conducted by TSA officials tend to produce heightened awareness by federal and airport employees whose perimeter security and access control procedures are being inspected for compliance with regulations. Risk management is a tool for informing policymakers’ decisions about assessing risks, allocating resources, and taking actions under conditions of uncertainty. We have previously reported that a risk management approach can help to prioritize and focus the programs designed to combat terrorism. Risk management, as applied in the transportation security context, can help federal decision makers determine where and how to invest limited resources within and among the various modes of transportation. In accordance with Homeland Security Presidential Directive (HSPD) 7, the Secretary of Homeland Security designated TSA as the sector-specific agency for the transportation security sector, requiring TSA to identify, prioritize, and coordinate the protection of critical infrastructure and key resources within this sector and integrate risk management strategies into its protective activities. In June 2006, in accordance with HSPD-7 and the Homeland Security Act of 2002, DHS released the NIPP, which it later updated in 2009. The NIPP developed a risk management framework for homeland security. In accordance with the NIPP, TSA developed the TS-SSP to govern its strategy for securing the transportation sector, as well as annexes for each mode of transportation, including aviation. The NIPP and TS-SSP set forth risk management principles, including a comprehensive risk assessment process for considering threat, vulnerability, and consequence assessments to determine the likelihood of terrorist attacks and the severity of the impacts. Figure 3 illustrates the interrelated activities of the NIPP’s risk management framework.

Set security goals: Define specific outcomes, conditions, end points, or performance targets that collectively constitute an effective protective posture.

Identify assets, systems, networks, and functions: Develop an inventory of the assets, systems, and networks that constitute the nation’s critical infrastructure, key resources, and critical functions. Collect information pertinent to risk management that takes into account the fundamental characteristics of each sector.

Assess risks: Determine risk by combining potential direct and indirect consequences of a terrorist attack or other hazards (including seasonal changes in consequences and dependencies and interdependencies associated with each identified asset, system, or network), known vulnerabilities to various potential attack vectors, and general or specific threat information.

Prioritize: Aggregate and analyze risk assessment results to develop a comprehensive picture of asset, system, and network risk; establish priorities based on risk; assess the mitigation of risk for each proposed activity based on a specific investment; and determine protection and business continuity initiatives that provide the greatest mitigation of risk.

Implement protective programs: To reduce or manage identified risk, select sector-appropriate protective actions or programs that offer the greatest mitigation of risk for any given resource/expenditure/investment. Secure the resources needed to address priorities.
Measure effectiveness: Use metrics and other evaluation procedures at the national and sector levels to measure progress and assess the effectiveness of the national Critical Infrastructure and Key Resources Protection Program in improving protection, managing risk, and increasing resiliency.

Within the risk management framework, the NIPP also establishes core criteria for risk assessments. According to the NIPP, risk assessments are a qualitative determination, a quantitative determination, or both of the likelihood of an adverse event occurring, and they are a critical element of the NIPP risk management framework. Risk assessments also help decision makers identify and evaluate potential risks so that countermeasures can be designed and implemented to prevent or mitigate the potential effects of the risks. The NIPP characterizes risk assessment as a function of three elements:

Threat: The likelihood that a particular asset, system, or network will suffer an attack or an incident. In the context of risk associated with a terrorist attack, this likelihood is estimated based on an analysis of the intent and the capability of an adversary; in the context of a natural disaster or accident, the likelihood is based on the probability of occurrence.

Vulnerability: The likelihood that a characteristic of, or flaw in, an asset’s, system’s, or network’s design, location, security posture, process, or operation renders it susceptible to destruction, incapacitation, or exploitation by terrorist or other intentional acts, mechanical failures, or natural hazards.

Consequence: The negative effects on public health and safety, the economy, public confidence in institutions, and the functioning of government, both direct and indirect, that can be expected if an asset, system, or network is damaged, destroyed, or disrupted by a terrorist attack, natural disaster, or other incident.

Information from the three elements used in assessing risk—threat, vulnerability, and consequence—can lead to a risk characterization and provide input for prioritizing security goals.
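As an illustration only, the sketch below combines the three elements into a relative risk score using the simple multiplicative convention common in risk analysis (risk = threat x vulnerability x consequence). The NIPP does not prescribe this particular formula, and the asset names and scores are hypothetical.

    # Illustrative-only combination of the three NIPP risk elements into a
    # relative score for ranking. The NIPP does not prescribe this formula;
    # the asset names and scores below are hypothetical.

    def risk_score(threat, vulnerability, consequence):
        """Each input is a normalized score between 0 and 1."""
        return threat * vulnerability * consequence

    assets = {
        "perimeter fence line": {"threat": 0.3, "vulnerability": 0.7, "consequence": 0.5},
        "worker access points": {"threat": 0.4, "vulnerability": 0.6, "consequence": 0.6},
    }

    # Rank assets by relative risk to support prioritization decisions.
    for name, scores in sorted(assets.items(),
                               key=lambda kv: risk_score(**kv[1]),
                               reverse=True):
        print(f"{name}: {risk_score(**scores):.3f}")
    # worker access points: 0.144
    # perimeter fence line: 0.105

A multiplicative model captures the intuition that risk is negligible if any one element is near zero, but other combining rules are possible; the point is only that all three elements feed a single estimate used for ranking.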
According to the NIPP, comprehensive risk assessments are necessary for determining which assets or systems face the highest risk, for prioritizing risk mitigation efforts and the allocation of resources, and for effectively measuring how security programs reduce risks. In March 2009, we reported that a lack of information that fully depicts threats, vulnerabilities, and consequences limits an organization’s ability to establish priorities and make cost-effective decisions about security measures. TSA officials told us that they have not completed a comprehensive risk assessment for airport security, although they said that they have prepared and are currently reviewing a draft of a comprehensive, scenario-based air domain risk assessment (ADRA), which officials said is to serve as the comprehensive risk assessment for airport security. According to officials, the ADRA is to address all three elements of risk for domestic commercial aviation, general aviation, and air cargo. However, TSA has not released the ADRA, which was originally planned for February 2008. As of May 2009, TSA officials had not provided revised dates for when the agency expects to finalize the ADRA, and they could not provide documentation demonstrating to what extent the ADRA will address all three components of risk for airport perimeter and access control security. As a result, it is not clear whether the ADRA will provide the risk analysis needed to inform TSA’s decisions and planning for airport perimeter and access control security. Standard practices in program management call for documenting the scope of a program and milestones (i.e., time frames) to ensure that results are achieved. Conducting a comprehensive risk assessment for airport security and documenting milestones for its implementation would help ensure that TSA’s intended actions are implemented and would allow TSA to more confidently ensure that its investments in airport security are risk informed and allocated toward the highest-priority risks. A threat assessment is the identification and evaluation of adverse events that can harm or damage an asset. TSA uses several products to identify and assess potential threats to airport security, such as daily intelligence briefings, weekly suspicious incident reports, and situational awareness reports, all of which are available to internal and external stakeholders. TSA also issues an annual threat assessment of the U.S. civil aviation system, which includes an assessment of threats to airport perimeter and access control security. According to TSA officials, these products collectively form TSA’s assessment of threats to airport perimeter and access control security. TSA’s 2008 Civil Aviation Threat Assessment cites four potential threats related to perimeter and access control security, one of which is the threat from insiders—airport workers with authorized access to secured areas. The 2008 assessment characterized the insider threat as “one of the greatest threats to aviation,” which TSA officials explained is meant to reflect both the opportunity insiders have to do damage and the vulnerability of commercial airports to an insider attack, which these officials characterized as very high. As of May 2009, TSA had no knowledge of a specific plot by terrorists or others to breach the security of any domestic commercial airport.
However, TSA has also noted that airports are seen as more accessible targets than aircraft, and that airport perimeters may become more desirable targets as terrorists look for new ways to circumvent aviation security. Intelligence is necessary to inform threat assessments. As we reported in March 2009, TSA has not clarified the levels of uncertainty—or varying levels of confidence—associated with the intelligence information it has used to identify threats to the transportation sector and guide its planning and investment decisions. Both Congress and the administration have recognized the uncertainty inherent in intelligence analysis, and have required analytic products within the intelligence community to properly caveat and express uncertainties or confidence in the resulting conclusions or judgments. As a result, the intelligence community and the Department of Defense have adopted this practice in reporting threat intelligence. Because TSA does not assign confidence levels to its analytic judgments, it is difficult for the agency to appropriately prioritize its tactics and investments based on uncertain intelligence. In March 2009 we recommended that TSA work with the Director of National Intelligence to determine the best approach for assigning uncertainty or confidence levels to analytic intelligence products and apply this approach. TSA agreed with this recommendation and said that it has begun taking action to address it. The NIPP requires that a risk assessment include a comprehensive assessment of vulnerabilities in assets or systems, such as a physical design feature or type of location, that make them susceptible to a terrorist attack. As we reported in June 2004, these assessments are intended to facilitate airport operators' efforts to comprehensively identify and effectively address perimeter and access control security weaknesses. TSA officials told us that their primary measures for assessing the vulnerability of commercial airports to attack are the collective results of joint vulnerability assessments (JVA) and professional judgment. TSA officials said that the agency plans to expand the number of JVAs conducted in the future but, as of May 2009, did not have a plan for doing so. According to TSA officials, JVAs are assessments that teams of TSA special agents and other officials conduct jointly with the Federal Bureau of Investigation (FBI) and, as required by law, are generally conducted every 3 years for airports identified as high risk. In response to our 2004 recommendation that TSA establish a schedule and analytical approach for completing vulnerability assessments for evaluating airport security, TSA developed criteria to select and prioritize airports as high-risk for assessment. TSA officials stated that in addition to assessing airports identified as high risk, the agency has also assessed the vulnerability of other airports at the request of FSDs. According to TSA's TS-SSP, after focusing initially on airports deemed high risk, JVAs are to be conducted at all commercial airports. TSA officials stated that JVA teams assess all aspects of airport security and operations, including fuel, cargo, catering, general aviation, terminal area and law enforcement operations, and the controls that limit access to secured areas and the integrity of the airport perimeter. However, officials emphasized that a JVA is not intended to be a review of an airport's compliance with security requirements and teams do not impose penalties for noncompliance.
From fiscal years 2004 through 2008, TSA conducted 67 JVAs at a total of 57 airports—about 13 percent of the approximately 450 commercial airports nationwide. In 2007 TSA officials conducted a preliminary analysis of the results of JVAs conducted at 23 domestic airports during fiscal years 2004 and 2005, and found 6 areas in which 20 percent or more of the airports assessed were identified as vulnerable. Specific vulnerabilities included the absence of blast resistant glass in terminal windows, lack of bollards/barriers in front of terminals, lack of blast resistant trash receptacles, and insufficient electronic surveillance of perimeter lines and access points. As of May 2009 TSA officials said that the agency had not finalized this analysis and, as of that date, did not have plans to do so. TSA officials also told us that they have shared the results of JVA reports with TSA’s Office of Security Technology to prioritize the distribution of relevant technology to those airports with vulnerabilities that these technologies could strengthen. TSA characterizes U.S. airports as a system of interdependent hubs and links (spokes) in which the security of all is affected or disrupted by the security of the weakest one. The interdependent nature of the system necessitates that TSA protect the overall system as well as individual assets. TSA maintains that such a “systems-based approach” allows it to focus resources on reducing risks across the entire system while maintaining cost-effectiveness and efficiency. TSA officials could not explain to what extent the collective JVAs of specific airports constitute a reasonable systems-based assessment of vulnerability across airports nationwide or whether the agency has considered assessing vulnerabilities across all airports. Although TSA has conducted JVAs at each category of airport, 58 of the 67 were at the largest airports. According to TSA data, 87 percent of commercial airports—most of the smaller Category II, III, and IV airports—have not received a JVA. TSA officials said that because they have not conducted JVAs for these airports, they do not know how vulnerable they are to an intentional security breach. In 2004 we reported that TSA intended to compile baseline data on airport security vulnerabilities to enable it to conduct a systematic analysis of airport security vulnerabilities nationwide. At that time TSA officials told us that such analysis was essential since it would allow the agency to determine the adequacy of security policies and help TSA and airport operators better direct limited resources. According to TSA officials, conducting JVAs at all airports would allow them to compile national baseline data on perimeter and access control security vulnerabilities. As of May 2009, however, TSA officials had not yet completed a nationwide vulnerability assessment, evaluated whether the current approach to JVAs would provide the desired systems-based approach to assessing airport security vulnerabilities, or explained why a nationwide assessment or evaluation has not been conducted. In subsequent discussions, TSA officials told us that based on our review they intend to increase the number of JVAs conducted at airports that are not categorized as high risk—primarily Category II, III, and IV airports. According to officials, the resulting data are to assist TSA in prioritizing the allocation of limited resources. 
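The kind of cross-airport analysis described above, flagging security areas in which 20 percent or more of assessed airports were found vulnerable, can be illustrated with a short tally. The airport names, vulnerability areas, and findings below are invented for illustration; only the 20 percent threshold comes from TSA's preliminary analysis.

```python
# Hypothetical illustration of the cross-airport JVA analysis described
# above: given each airport's vulnerability findings, flag the areas in
# which at least 20 percent of assessed airports were found vulnerable.

from collections import Counter

findings = {
    "Airport A": {"blast-resistant glass", "perimeter surveillance"},
    "Airport B": {"perimeter surveillance"},
    "Airport C": {"vehicle barriers", "perimeter surveillance"},
    "Airport D": set(),
    "Airport E": {"blast-resistant glass"},
}

n_assessed = len(findings)
area_counts = Counter(area for areas in findings.values() for area in areas)

for area, count in area_counts.most_common():
    share = count / n_assessed
    if share >= 0.20:  # threshold used in TSA's preliminary analysis
        print(f"{area}: vulnerable at {count} of {n_assessed} airports ({share:.0%})")
```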
However, TSA officials could not tell us how many additional airports they plan to assess in total or within each category, the analytical approach and time frames for conducting these assessments, and to what extent these additional assessments, in combination with past JVAs, will constitute a reasonable systems-based assessment of vulnerability across airports nationwide. Standard practices for program management call for establishing a management plan and milestones to meet stated objectives and achieve results. It is also unclear to what extent the ADRA, when it is completed, will represent a systems-based vulnerability assessment, an assessment of airports nationwide, or both. Given that TSA officials believe that the vulnerability of airports to an insider attack is very high and the security of airports is interconnected, this vulnerability would extend throughout the nationwide system of airports. Evaluating the extent to which the agency’s current approach assesses systems-based vulnerabilities, including the vulnerabilities of smaller airports, would better position TSA to provide reasonable assurance that it is identifying and addressing the areas of greatest vulnerability and the spectrum of vulnerability across the entire airport system. Further, should TSA decide to conduct a nationwide assessment of airport vulnerability, developing a plan that includes milestones for completing the assessment would help TSA ensure that it takes the necessary actions to accomplish desired objectives within reasonable time frames. According to the NIPP, DHS and lead security agencies, such as TSA, are to seek to use information from the risk assessments of security partners, whenever possible, to contribute to an understanding of sector and national risks. Moreover, the NIPP states that DHS and lead agencies are to work together to assist security partners in providing vulnerability assessment tools that may be used as part of self-assessment processes, and provide recommendations regarding the frequency of assessments, particularly in light of emergent threats. According to the NIPP, stakeholder vulnerability assessments may serve as a basis for developing common vulnerability reports that can help identify strategic needs and more fully investigate interdependencies. However, TSA officials could not explain to what extent they make use of relevant vulnerability assessments conducted independently by airport operators to contribute to the agency’s understanding of airport security risks, or have worked with security partners to help ensure that tools are available for airports to conduct self-assessment processes of vulnerability. Officials from two prominent airport industry associations estimated that the majority of airports, particularly larger airports, have conducted vulnerability assessments, although they could not give us a specific number. In addition, officials from 8 of the 10 airports whom we interviewed on this issue told us that their airports had conducted vulnerability assessment activities. Some of these analyses could be useful to TSA in conducting a systematic analysis of airport security vulnerabilities nationwide. By taking advantage, to the extent possible, of existing vulnerability assessment activities conducted by airport operators, TSA could enrich its understanding of airport security vulnerabilities and therefore better inform federal actions for reducing airport vulnerabilities. 
According to TSA officials, the agency has not assessed the consequences of a successful attack against airport perimeters or a breach to secured areas within airports, even though the NIPP asserts that the potential consequence of an incident is the first factor to be considered in developing a risk assessment. According to the NIPP, risk assessments should include consequence assessments that evaluate negative effects to public health and safety, the economy, public confidence in national economic and political institutions, and the functioning of government that can be expected if an asset, system, or network is damaged, destroyed, or disrupted by a terrorist attack. Although TSA officials agree that a consequence assessment for airport security is needed, and have stated that the ADRA is intended to provide a comprehensive consequence assessment based on risk scenarios, the agency has not provided additional details as to what the assessment will include, the extent to which it will assess consequence for airport security, or when it will be completed. Standard management practices call for documenting milestones (i.e., time frames) to ensure that results are achieved. TSA officials have agreed that a consequence assessment for airport perimeter and access control security is an important element in assessing risk to airport security. In addition, TSA officials commented that although the immediate consequences of a breach of airport security would likely be limited, such an event could be the first step in a more significant attack against an airport terminal or aircraft, or an attempt to use an aircraft as a weapon. Conducting a consequence assessment could help TSA in developing a comprehensive risk assessment and increase its assurance that the resulting steps it takes to strengthen airport security will more effectively reduce risk and mitigate the consequences of an attack on individual airports and the aviation system as a whole. TSA has implemented a variety of programs and protective actions to strengthen airport security, ranging from additional worker screening to assessments of different technologies. For example, consistent with the Explanatory Statement, TSA piloted several methods to screen workers accessing secured areas, but clear conclusions could not be drawn because of significant design limitations, and TSA did not develop or document an evaluation plan to guide design and implementation of the pilot. Further, while TSA has strengthened other worker security programs, assessed various technologies, and added to programs aimed at improving general airport security, certain issues, such as whether security technologies meet airport needs, have not been fully resolved. TSA has taken a variety of protective actions to improve and strengthen the security of commercial airports through the development of new programs or by enhancing existing efforts. Since we last reported on airport perimeter and access control security in June 2004, TSA has implemented efforts to strengthen worker screening and security programs, improve access control technology, and enhance general airport security by providing an additional security presence at airports. According to TSA, each of its security actions—or layers—is capable of stopping a terrorist attack, but when used in combination (what TSA calls a layered approach), a much stronger system results.
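The intuition behind a layered approach can be illustrated with a simple probability sketch: if each layer independently stops some fraction of attempts, the chance that an attempt defeats every layer falls multiplicatively. The layer names and probabilities below are hypothetical, and real security layers are not fully independent, so this is a simplification of the concept rather than a model of TSA's system.

```python
# Illustrative sketch of layered security: assuming independent layers,
# the probability that an attempt defeats every layer is the product of
# the individual layer failure probabilities. All values are hypothetical.

layers = {
    "background checks": 0.50,  # probability the layer stops an attempt
    "access controls":   0.60,
    "random screening":  0.30,
    "patrols":           0.20,
}

p_defeat_all = 1.0
for name, p_stop in layers.items():
    p_defeat_all *= (1.0 - p_stop)

print(f"chance an attempt defeats every layer: {p_defeat_all:.3f}")
# 0.5 * 0.4 * 0.7 * 0.8 = 0.112, far lower than the failure probability
# of any single layer alone.
```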
To better address the risks posed by airport workers, TSA, in accordance with the Explanatory Statement accompanying the DHS Appropriations Act, 2008, initiated a worker screening pilot program to assess various types of screening methods for airport workers. TSA also implemented a random worker screening program and is currently working to apply its screening procedures consistently across airports. In addition, TSA has expanded its requirements for conducting worker background checks. TSA has also taken steps, such as implementing two pilot programs, to identify and assess technologies to strengthen the security of airport perimeters and access controls to secured areas. Further, TSA has taken steps to strengthen general airport security processes. For example, TSA has developed a program in which teams of TSA officials, law enforcement officers, and airport officials temporarily augment airport security through various actions such as randomly inspecting workers, property, and vehicles and patrolling secured areas. Table 1 lists the actions TSA has taken since 2004 to strengthen airport security. From May through July 2008 TSA piloted a program to screen 100 percent of workers at three airports and to test a variety of enhanced screening methods at four other airports. (See app. V for more detailed information on the pilot program, including locations and types of screening methods used.) According to TSA, the objective of the pilot was to compare 100 percent worker screening and enhanced random worker screening based on (1) screening effectiveness, (2) impact on airport operations, and (3) cost considerations. TSA officials hired a contractor—HSI, a federally funded research and development center—to assist with the design and implementation of the pilot and the evaluation of the data collected. In July 2009 TSA released a report on the results of the pilot program, which included HSI's findings. HSI concluded that random screening is a more cost-effective approach because it appears "roughly" as effective in identifying contraband items—or items of interest—at less cost than 100 percent worker screening. However, HSI also emphasized that the pilot program "was not a robust experiment" because of limitations in the design and evaluation, such as the limited number of participating airports, which led HSI to identify uncertainties in the results. Given the significance of these limitations, we believe that it is unclear whether random worker screening is more or less cost-effective than 100 percent worker screening. Specifically, HSI identified what we believe to be significant limitations related to the design of the pilot program and the estimation of costs and operational effects. Limitations related to program design include (1) a limited number of participating airports, (2) the short duration of screening operations (generally 90 days), (3) the variety of screening techniques applied, (4) the lack of a baseline, and (5) limited evaluation of enhanced methods. For example, HSI noted that while two of the seven pilot airports performed complete 100 percent worker screening, neither was a Category X airport; a third airport—a Category X—performed 100 percent screening at certain locations for limited durations. HSI also reported that the other four pilot airports used a range of tools and screening techniques—magnetometers, handheld metal detectors, pat-downs—which reduced its ability to assess in great detail any one screening process common to all the pilot airports.
In addition, HSI cited issues regarding the use of baseline data for comparison of screening methods. HSI attempted to use previous Aviation Direct Access Screening Program (ADASP) screening data for comparison, but these data were not always comparable in terms of how the screening was conducted. In addition, HSI identified a significant limitation in generalizing pilot program results across airports nationwide, given the limited number and diversity of the pilot airports. HSI noted that because these airports were chosen based on geographic diversity and size, other unique airport factors that might affect worker screening operations—such as workforce size and the number and location of access points—may not have been considered. HSI also recognized what we believe to be significant limitations in the development of estimates of the costs and operational effects of implementing 100 percent worker screening and random worker screening nationwide. HSI's characterization of its cost estimates as "rough order of magnitude"—or imprecise—underscores the challenge of estimating costs for the entire airport system in the absence of detailed data on individual airports nationwide and in light of the limited amount of information gleaned from the pilot on operational effects and other costs. HSI noted that the cost estimates do not include costs associated with operational effects, such as longer wait times for workers, and potentially costly infrastructure modifications, such as construction of roads and shelters to accommodate vehicle screening. HSI developed high- and low-cost estimates based on current and optimal numbers of airport access points and the amount of resources (personnel, space, and equipment) needed to conduct 100 percent and random worker screening. According to these estimates, the direct cost—including personnel, equipment, and other operation needs—of implementing 100 percent worker screening would range from $5.7 billion to $14.9 billion for the first year, while the direct costs of implementing enhanced random worker screening would range from $1.8 billion to $6.6 billion. HSI noted that the random worker screening methods applied in the worker screening pilot program were a "significant step" beyond TSA's ongoing worker screening program—ADASP—which the agency characterizes as a "random" worker screening program. For the four pilot airports that applied random screening methods, TSA and airport associations agreed to screen a targeted 20 percent of workers who entered secured areas each day. TSA officials also told us that this 20 percent threshold was significantly higher than that applied through ADASP, although officials said that they do not track the percentage of screening events processed through ADASP because they do not have sufficient resources to do so. In addition to the limitations recognized by HSI, TSA and HSI did not document key aspects of the design and implementation of the pilot program. For example, while they did develop and document a data collection plan that outlined the data requirements, sources, and collection methods to be followed by the seven pilot airports in order to evaluate the program's costs, benefits, and impacts, they did not document a plan for how such data would be analyzed to formulate results.
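The shape of a "rough order of magnitude" estimate such as HSI's can be illustrated with a minimal staffing-and-equipment cost model. Every parameter below is a hypothetical placeholder rather than a figure from HSI's analysis; the sketch shows only that nationwide costs scale with the number of access points, hours of coverage, staffing levels, and equipment costs, which is why estimates made without detailed airport-level data remain imprecise.

```python
# A minimal sketch of a rough-order-of-magnitude staffing-cost model. All
# parameters are hypothetical placeholders, not figures from HSI's report,
# and the model omits operational effects (e.g., worker wait times) and
# infrastructure modifications, as HSI's estimates also did.

def annual_screening_cost(access_points, screeners_per_point, hours_per_day,
                          loaded_hourly_rate, equipment_per_point):
    """First-year cost: labor for every staffed access point plus capital."""
    labor = (access_points * screeners_per_point * hours_per_day
             * 365 * loaded_hourly_rate)
    equipment = access_points * equipment_per_point  # first-year capital
    return labor + equipment

# Hypothetical low and high bounds for nationwide 100 percent screening.
low = annual_screening_cost(access_points=3000, screeners_per_point=3,
                            hours_per_day=18, loaded_hourly_rate=35.0,
                            equipment_per_point=150_000)
high = annual_screening_cost(access_points=6000, screeners_per_point=4,
                             hours_per_day=24, loaded_hourly_rate=45.0,
                             equipment_per_point=250_000)
print(f"first-year cost range: ${low/1e9:.1f}B to ${high/1e9:.1f}B")
```

Even with this toy model, plausible shifts in a few assumptions move the total by billions of dollars, which mirrors the wide ranges HSI reported.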
Standards for Internal Control in the Federal Government states that significant events are to be clearly documented and the documentation should be readily available for examination to inform management decisions. In addition, in November 2008, based in part on our guide for designing evaluations, we reported that pilot programs can more effectively inform future program rollout when an evaluation plan is developed to guide consistent implementation of the pilot and analysis of the results. At a minimum, a well-developed, sound evaluation plan contains several key elements, including measurable objectives, standards for pilot performance, a clearly articulated methodology, detailed data collection methods, and a detailed data analysis plan. Incorporating these elements can help ensure that the implementation of a pilot generates performance information needed to make effective management decisions. While TSA and HSI completed a data collection plan, and generally defined specific measurable objectives for the pilot program, they did not address other key elements that collectively could have strengthened the effectiveness of the pilot program and the usefulness of the results:

Performance standards. TSA and HSI did not develop and document criteria or standards for determining pilot program performance, which are necessary for determining to what extent the pilot program is effective.

Clearly articulated evaluation methodology. TSA and HSI did not fully articulate and document the methodology for evaluating the pilot program. Such a methodology is to include plans for sound sampling methods, appropriate sample sizes, and comparing the pilot results with ongoing efforts. TSA and HSI documented relevant elements, such as certain sampling methods and sample sizes, in both their overall data collection plan for the program and in individual pilot operations plans for each airport implementing the pilot. However, while officials stated that the seven airports were selected to obtain a range of physical size, worker volume, and geographical dispersion information, they did not document the criteria they used in this process, and could not explain the rationale used to decide which screening methods would be piloted by the individual airports. Because the seven airports tested different screening methods, there were differences in the design of the individual pilots as well as in the type and frequency of the data collected. While design differences are to be expected given that the pilot program was testing disparate screening methods, there were discrepancies in the plans that limited HSI's ability to compare methods across sites. For example, those airports that tested enhanced screening methods—as opposed to 100 percent worker screening—used different rationales to determine how many inspections would be conducted each day. TSA officials said that this issue and other discrepancies and points of confusion were addressed through oral briefings with the pilot airports, but said that they did not provide additional written instructions to the airports responsible for conducting the pilots. TSA and HSI officials also did not document how they would address deviations from the piloted methods, such as workers who avoided the piloted screening by accessing alternative entry points, or suspension of the pilot because of excessive wait times for workers or passengers (some workers were screened through passenger screening checkpoints).
Further, TSA and HSI officials did not develop and document a plan for comparing the results of the piloted worker screening methods with TSA's ongoing random worker screening program to determine whether the piloted methods had a greater impact on reducing insider risk than ongoing screening efforts.

Detailed data analysis. Although the agreement between TSA and HSI also called for the development of a data analysis plan, neither HSI nor TSA developed an analysis plan to describe how the collected data would be used to track the program's performance and evaluate the effectiveness of the piloted screening methods, including 100 percent worker screening. For example, HSI used the number of confiscated items as a means of comparing the relative effectiveness of each screening method. However, HSI reported that the number of items confiscated during pilot operations was "very low" at most pilot airports, and some did not detect any. Based on these data, HSI concluded that random worker screening appeared to be "roughly" as effective in detecting contraband items as 100 percent worker screening. However, it is possible that there were few or no contraband items to detect, as workers at the pilot airports were warned in advance when the piloted screening methods would be in effect and disclosure signs were posted at access points. As a result, comparing the very low rate—and in some cases, nonexistence—of confiscated items across pilots, coupled with the short assessment period, may not fully indicate the effectiveness of different screening methods at different airports. If a data analysis plan had been developed during pilot design, it could have been used to explain how such data would be analyzed, including how HSI's analysis of the pilots' effectiveness accounted for the low confiscation rates. Because of the significance of the pilot program limitations reported by HSI, as well as the lack of documentation and detailed information regarding the evaluation of the program, the reliability of the resulting data and any subsequent conclusions about the potential impacts, costs, benefits, and effectiveness of 100 percent worker screening and other screening methods cannot be verified. For these reasons, it would not be prudent to base major policy decisions regarding worker screening solely on the results of the pilot program. HSI reported that the wide variation among U.S. commercial airports—in size, traffic flow, and design—makes it difficult to generalize the results from the seven pilot airports to all commercial airports. While we agree it is difficult to generalize the results of such a small sample to an entire population, a well-documented and sound evaluation plan could have helped ensure that the pilot program generated the data and performance information needed to draw reasonable conclusions about the effectiveness of 100 percent worker screening and other methods to inform nationwide implementation. Incorporating these elements into an evaluation plan when designing future pilots could help ensure that TSA's pilots generate the necessary data for making management decisions and that TSA can demonstrate that the results are reliable.
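The low-count problem noted above, that "very low" confiscation counts cannot support firm comparisons between screening methods, can be illustrated with a simple interval calculation. The screening counts below are hypothetical, and the Wilson score interval is a standard binomial approximation rather than a method HSI reported using; the point is that with only a handful of detections, the uncertainty around each method's detection rate dwarfs the difference between the methods.

```python
# A minimal sketch of why very low detection counts cannot support firm
# effectiveness comparisons: the 95 percent uncertainty intervals around
# each method's detection rate overlap heavily. Counts are hypothetical.

import math

def wilson_interval(successes, trials, z=1.96):
    """95 percent Wilson score interval for a binomial proportion."""
    if trials == 0:
        return (0.0, 1.0)
    p = successes / trials
    denom = 1 + z**2 / trials
    center = (p + z**2 / (2 * trials)) / denom
    margin = z * math.sqrt(p * (1 - p) / trials + z**2 / (4 * trials**2)) / denom
    return (max(0.0, center - margin), min(1.0, center + margin))

# Hypothetical pilot results: items detected per screenings performed.
methods = {"100 percent screening": (3, 40_000), "random screening": (2, 8_000)}
for name, (hits, screened) in methods.items():
    lo, hi = wilson_interval(hits, screened)
    print(f"{name}: rate {hits/screened:.5f}, 95% CI [{lo:.5f}, {hi:.5f}]")
```

Because the two intervals overlap, such hypothetical data could not establish that either method detects items at a higher rate, which is the statistical core of the concern about drawing conclusions from the pilot.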
According to TSA officials, FSDs and others in the aviation community have long recognized the potential for insiders to do harm from within an airport. TSA officials said that they developed ADASP—a random worker screening program—to counteract the potential vulnerability of airports to an insider attack. ADASP, officials said, serves as an additional layer of security and as a deterrent to workers who seek to smuggle drugs or weapons or to do harm. According to senior TSA officials, FSDs decide when and how to implement ADASP, including the random screening of passengers at the boarding gate or workers at SIDA access points to the sterile area. TSA officials said that ADASP was initially developed as a pilot project at one airport in March 2005 to deter workers from breaching access controls and procedures for secured areas at that particular airport. According to officials, after concluding that the pilot was successful in deterring airport workers from bringing restricted items into secured areas, TSA began implementing ADASP on a nationwide voluntary basis in August 2006 using existing resources. In March 2007, in response to several incidents of insider criminal activity, TSA directed that ADASP be conducted at all commercial airports nationwide. For example, on March 5, 2007, two airline employees smuggled 14 firearms and 8 pounds of marijuana on board a commercial airplane at Orlando International Airport (based on information received through an anonymous tip, the contraband was confiscated when the plane landed in San Juan, Puerto Rico). In its October 2008 report, the DHS Office of Inspector General (OIG) found that ADASP was being implemented in a manner that allowed workers to avoid being screened, and that the program had been applied inconsistently across airports. For example, at most of the seven airports the DHS OIG visited, ADASP screening stations were set up in front of worker access points, which allowed workers to identify that ADASP was being implemented and potentially choose another entry and avoid being screened. However, at another airport, the screening location was set up behind the access point, which prevented workers from avoiding being screened. ADASP standard operating procedures allow ADASP screening locations to be set up in front of or behind direct access points as long as there is signage alerting workers that ADASP screening is taking place. However, the DHS OIG found that the location of the screening stations—either in front of or behind direct access points—affected whether posted signs were visible to workers. The DHS OIG recommended that TSA apply consistent ADASP policies and procedures at all airports, and establish an ADASP working group to consider policy and procedure changes based on an accumulation of best practices across the country. TSA agreed with the DHS OIG's recommendations, and officials stated that they have begun to take action to address them. Since April 2004, and in response to our prior recommendation, TSA has taken steps to enhance airport worker background checks. TSA background checks are composed of security threat assessments (STA), which are name-based records checks against various terrorist watch lists, and criminal history record checks (CHRC), which are fingerprint-based criminal records checks. TSA requires airport workers to undergo both STAs and CHRCs before being granted unescorted access to secured areas in which they perform their duties. In July 2004 TSA expanded STA requirements by requiring workers in certain secured areas to submit current biographical information, such as date of birth.
TSA further augmented STAs in 2005 to include a citizenship check to identify individuals who may be subject to coercion because of their immigration status or who may otherwise pose a threat to transportation security. In 2007 TSA expanded STA requirements beyond workers with sterile area or SIDA access to apply to all individuals seeking or holding airport-issued identification badges or credentials. Finally, in June 2009 TSA began requiring airport operators to renew all airport identification media every 2 years, deactivate expired media and require workers to resubmit biographical information in the event of certain changes, and expand the STA requirement to include individuals with unescorted access to the AOA, among other things. TSA has taken steps to strengthen its background check requirements and is considering additional actions to address certain statutory requirements and issues that we identified in 2004. For example, TSA is considering revising its regulation listing the offenses that, if a conviction occurred within 10 years of an application for unescorted access to secured areas, would disqualify a person from receiving such access. TSA officials told us that TSA and industry stakeholders are considering whether some disqualifying offenses may warrant a lifelong ban. In addition, while TSA has not yet specifically addressed a statutory provision requiring TSA to require, by regulation, that individuals with regularly escorted access to secured airport areas undergo background checks, TSA officials told us that they believe the agency's existing measures address the potential risk presented by such workers. They also said that it would be challenging to identify the population of workers who require regularly escorted access because such individuals—for example, construction workers—enter airports on an infrequent and unpredictable basis. Since 2004, TSA has taken some steps to develop biometric worker credentialing; however, it is unclear to what extent TSA plans to address statutory requirements regarding biometric technology, such as developing or requiring biometric access controls at commercial airports in consultation with industry stakeholders. For instance, in October 2008 the DHS OIG reported that TSA planned to mandate phased-in biometric upgrades for all airport access control systems to meet certain specifications. However, as of May 2009, according to TSA officials, the agency had not made a final decision on whether to require airports to implement biometric access controls, but it intends to pursue a combination of rule making and other measures to encourage airports to voluntarily implement biometric credentials and control systems. While TSA officials said that the agency issued a security directive in December 2008 that encourages airports to implement biometric access control systems that are aligned with existing federal identification standards, TSA officials also reported the need to ensure that airports incorporate up-to-date standards. These officials also said that TSA is considering establishing minimum requirements to ensure consistency in data collection, card information configuration, and biometric information. Airport operators and industry association officials have called for a consensus-based approach to developing biometric technology standards for airports, and have stressed the need for standards that allow for flexibility and consider the significant investment some airports have already made in biometric technology.
Airport operators have also expressed a reluctance to move forward with individual biometric projects because of concerns that their enhancements will not conform to future federal standards. Although TSA has not decided whether it will mandate biometric credentials and access controls at airports, it has taken steps to assess and develop such technology in response to stakeholder concerns and statutory requirements. For example, TSA officials said the agency has assisted the aviation industry and RTCA, Inc., a federal aviation advisory committee, in developing recommended security standards for biometric access controls, which officials said provide guidelines for acquiring, designing, and implementing access control systems. TSA officials also noted that the agency has cooperated with the Biometric Airport Security Identification Consortium, or BASIC—a working group of airport operators and aviation association representatives—which has developed guidance on key principles that it believes should be part of any future biometric credential and access control system. In addition, TSA is in the early stages of developing the Aviation Credential Interoperability Solution (ACIS) program. ACIS is conceived as a credentialing system in which airports use biometrics to verify the identities and privileges of workers who have airport- or air carrier–issued identification badges before granting them entry to secured areas. According to TSA, ACIS would provide a trusted biometric credential based on smart card technology (about the size of a credit card, using circuit chips to store and process data) and specific industry standards, and establish standard airport processes for enrollment, card issuance, vetting, and the management of credentials. Although these processes would be standardized nationwide, airports would still be individually responsible for determining access authority. According to TSA officials, the agency is seeking to build ACIS on much of the airports' existing infrastructure and systems and has asked industry stakeholders for input on key considerations, including the population of workers who would receive the credential, program policies and processes, technology considerations, operational impacts, and concerns regarding ACIS. However, as of May 2009, TSA officials could not explain the status of ACIS or provide additional information on the possible implementation of the program since the agency released the specifications for industry comment in April 2008. As a result, it is unclear when and how the agency plans to address the requirements of the Intelligence Reform and Terrorism Prevention Act, including establishing minimum standards for biometric systems and determining the best way to incorporate these decisions into airports' existing practices and systems. As of May 2009 TSA officials had not provided any further information, such as scheduled milestones, on TSA's plans to implement biometric technology at airports. Standard practices in program management suggest that developing scheduled milestones can help define the scope of the project, achieve key deliverables, and communicate with key stakeholders. In addition, until TSA communicates its decision on whether it plans to mandate—such as through a rule making—or collaboratively implement biometric access controls at airports, and what approach is best—be it ACIS or another system—operators may be hesitant to upgrade airport security in this area.
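As a conceptual illustration of the check sequence a credentialing system like the proposed ACIS might perform at a secured-area door, consider the sketch below. The steps (media validity, vetting status, biometric match, and airport-determined access authority) paraphrase the description above, but the data fields, function names, and matching logic are hypothetical rather than ACIS specifications.

```python
# Conceptual sketch of a biometric credential check of the sort a system
# like the proposed ACIS might perform at a secured-area door. All field
# names and logic are hypothetical, not ACIS specifications.

from dataclasses import dataclass

@dataclass
class Credential:
    badge_id: str
    biometric_template: bytes  # stored on the smart card's circuit chip
    sta_current: bool          # security threat assessment still valid
    expired: bool              # identification media past its renewal date

def grant_entry(card: Credential, live_scan: bytes,
                door_id: str, access_list: dict[str, set[str]]) -> bool:
    if card.expired or not card.sta_current:
        return False                        # media validity or vetting failed
    if live_scan != card.biometric_template:
        return False                        # placeholder for a real match score
    # Access authority remains an airport decision, per the text above.
    return card.badge_id in access_list.get(door_id, set())

# Hypothetical usage: one worker, one SIDA door with its own access list.
worker = Credential("A1234", b"template", sta_current=True, expired=False)
doors = {"SIDA-07": {"A1234"}}
print(grant_entry(worker, b"template", "SIDA-07", doors))  # True
```

A real system would compare a live biometric capture against the stored template using a similarity score and threshold rather than exact equality; the byte comparison above simply marks where that step would occur.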
As we reported in 2004, airport operators do not want to run the risk of installing costly technology that may not comply with future TSA requirements and standards. Developing milestones for implementing a biometric system could help ensure that TSA addresses statutory requirements. In addition, such milestones will provide airports and the aviation industry with the scheduling information needed to plan future security improvements and expenditures. In addition to biometric technology efforts, TSA has also initiated efforts to assess other airport perimeter and access control technology. Pursuant to ATSA, TSA established two pilot programs to assess perimeter and access control security technology, the Airport Access Control Pilot Program (AACPP) in 2004 and the Airport Perimeter Security (APS) pilot program in 2006. AACPP piloted various new and emerging airport security technologies, including biometrics. TSA issued the final report on AACPP in December 2006, but did not recommend any of the piloted technologies for full-scale implementation. TSA officials said that a second round of pilot projects would be necessary to allow time for project evaluation and limited deployments, but as of May 2009 details for this second round were still being finalized. The purpose of the APS pilot, according to TSA officials, is to identify and mitigate existing airport perimeter security vulnerabilities using commercially available technology. APS was originally scheduled to be completed in December 2007; according to TSA officials, five of the six pilot projects have been completed, but the remaining pilot has been delayed because of problems with the acquisition process. According to TSA officials, the final pilot project is to be completed by October 2009. TSA officials told us that the agency has also taken steps to provide some technical and financial support to small- and medium-sized airports through AACPP and the APS pilot program, as both tested technologies that could be suitable for airports of these sizes. TSA officials also stated that smaller airports could potentially benefit from the agency's efforts to test the Virtual Perimeter Monitoring System, which was developed by the U.S. Navy and is being installed and evaluated at four small airports. Further, officials noted that TSA has also provided significant funding to support cooperative agreements for the deployment of law enforcement officers at airports—including Category II, III, and IV airports—to help defray security costs. However, according to TSA officials, as of May 2009 TSA had not yet developed a plan, or a time frame for developing a plan, to provide technical information and funding to small- and medium-sized airports, as required by ATSA. According to TSA officials, funds had not been appropriated or specifically directed to develop such a plan, and TSA's resources and management attention have been focused on other statutory requirements for which it has more direct responsibility and deadlines, including passenger and baggage screening requirements. (For a summary of TSA actions to address certain statutory requirements for airport security technology, see app. II.) TSA has taken actions to improve general airport security by establishing programs and requirements.
For example, TSA has augmented access control screening and general airport security by increasing the presence of transportation security officers and law enforcement officials through the Screening of Passengers by Observation Techniques (SPOT) program and the Law Enforcement Officer Reimbursement Program. In addition, it uses the Visible Intermodal Prevention and Response (VIPR) program, which is used across the transportation sector, to augment airport security efforts. (For more information on these TSA programs, see app. VI.) TSA uses a variety of regulatory mechanisms for imposing requirements within the transportation sector. In the aviation environment, TSA uses the security directive as one of its regulatory tools for imposing requirements to strengthen the security of civil aviation, including security at the nation's commercial airports. Pursuant to TSA regulation, the agency may decide to use security directives to impose requirements on airport operators if, for example, it determines that additional security measures are needed to respond to general or specific threats against the civil aviation system. As of March 2009 TSA identified 25 security directives or emergency amendments in effect that related to various aspects of airport perimeter and access control security. As shown in table 2, TSA imposed requirements through security directives that address areas such as worker and vehicle screening, criminal history record checks, and law enforcement officer deployments. According to TSA officials, security directives enable the agency to respond rapidly to immediate or imminent threats and provide the agency with flexibility in how it imposes requirements on airport operators. This function is especially relevant given the adaptive, dynamic nature of the terrorist threat. Moreover, according to TSA, imposing requirements through security directives is less time consuming than other processes, such as the lengthier notice-and-comment rule making process, which generally provides opportunity for more stakeholder input, requires cost-benefit analysis, and provides the regulated entities with more notice before implementation and enforcement. Officials from two prominent aviation associations and eight of nine airports we visited identified concerns regarding requirements established through security directives:

Officials from the two aviation associations noted inconsistencies between requirements established through separate security directives. For example, they noted that the requirements for airport-issued identification badges are different from those for badges issued by an air carrier. Workers employed by the airport, air carrier, or other entities who apply for an airport identification badge granting unescorted access to a secured area are required to undergo an immigration and citizenship status check, whereas workers who apply through an air carrier, which can grant similar unescorted access rights, are not. Both airport and air carrier workers can apply to an airport operator for airport-issued identification badges, but only air carrier workers can apply to their aircraft operator (employer) for an air carrier–issued identification badge. TSA officials told us that the agency plans to address this inconsistency—which has been in effect since December 2002—and is working on a time frame for doing so.
Airport operator officials from eight of the nine airports we visited and officials from two industry associations expressed concern that requirements established through security directives related to airport security are often issued for an indefinite time period. Our review of 25 airport security directives and emergency amendments showed that all except one were issued with no expiration date. The two aviation industry associations have expressed concerns directly to TSA that security directive requirements should be temporary and include expiration dates so that they can be periodically reviewed for relevancy. According to senior officials, TSA does not have internal control procedures for monitoring and coordinating requirements established through security directives related to airport perimeter and access control security. In November 2008 TSA officials told us that the agency had drafted an operations directive that documents procedures for developing, coordinating, issuing, and monitoring civil aviation security directives. According to officials, this operations directive also is to identify procedures for conducting periodic reviews of requirements imposed through security directives. However, while TSA officials told us that they initially planned to issue the operations directive in April 2009, in May 2009 they said that they were in the process of adopting the recommendations of an internal team commissioned to review and identify improvements to TSA’s policy review process, including the proposed operations directive. In addition, as of May 2009, officials did not have an expected date for finalizing the directive. TSA officials explained that because the review team’s recommendations will require organizational changes and upgrades to TSA’s information technology infrastructure, it will take a significant amount of time before an approved directive can be issued. As a result, it is unclear to what extent the operations directive will address concerns expressed by aviation operators and industry stakeholders. Standard practices in program management call for documented milestones to ensure that results are achieved. Establishing milestones for implementing guidance to periodically review airport security requirements imposed through security directives would help TSA formalize review of these directives within a time frame authorized by management. In addition to the stakeholder issues previously discussed, representatives from two prominent aviation industry associations have expressed concern that TSA has not issued security directives in accordance with the law. Specifically, these representatives noted that the Transportation Security Oversight Board (TSOB) has not reviewed TSA’s airport perimeter and access control security directives in accordance with a provision set forth in ATSA. This provision, as amended, establishes emergency procedures by which TSA may immediately issue a regulation or security directive to protect transportation security, and provides that any such regulation or security directive is subject to review by the TSOB. The provision further states that any regulation or security directive issued pursuant to this authority may remain in effect for a period not to exceed 90 days unless ratified or disapproved by the TSOB. According to TSA officials, the agency has not issued security directives related to airport perimeter and access control security under this emergency authority. 
Rather, officials explained, the agency has issued such security directives (and all aviation-related security directives) in accordance with its aviation security regulations governing airport and aircraft operators, which predate ATSA and the establishment of TSA. FAA implemented regulations—promulgated through the notice-and-comment rule making process—establishing FAA's authority to issue security directives to impose requirements on U.S. airport and aircraft operators. With the establishment of TSA, FAA's authority to regulate civil aviation security, including its authority to issue security directives, transferred to the new agency. TSA does not consider ATSA to have altered this existing authority. Although TSA has developed a variety of individual protective actions to mitigate identified airport security risks, it has not developed a unified national strategy aimed at enhancing airport perimeter and access control security. Through our prior work on national security planning, we have identified characteristics of effective security strategies, several of which are relevant to TSA's numerous efforts to enhance perimeter and access control security. For example, TSA has not developed goals and objectives for related programs and activities, prioritized protective security actions, or developed performance measures to assess the results of its perimeter and access control security efforts beyond tracking outputs (the level of activity provided over a period of time). Further, although TSA has identified some cost information that is used to inform programmatic decision making, it has not fully assessed the costs and resources necessary to implement its airport security efforts. Finally, TSA has not fully outlined how activities are to be coordinated among stakeholders, integrated with other aviation security priorities, or implemented within the agency. Developing a strategy to accomplish goals and desired outcomes helps organizations manage their programs more effectively and is an essential mechanism to guide progress in achieving desired results. Strategies are the starting point and foundation for defining what an agency seeks to accomplish, and we have reported that effective strategies provide an overarching framework for setting and communicating goals and priorities and allocating resources to inform decision making and help ensure accountability. Moreover, a strategy that outlines security goals, as well as mechanisms and measures to achieve such goals, and that is understood and available to all relevant stakeholders strengthens implementation of and accountability to common principles. A national strategy to guide and integrate the nation's airport security activities could strengthen decision making and accountability for several reasons. First, TSA has identified airport perimeter and access control security—particularly the mitigation of risks posed by workers who have unescorted access to secured areas—as a top priority. Historically, TSA has recognized the importance of developing strategies for high-priority security programs involving high levels of perceived risk and resources, such as air cargo security and the SPOT program. Second, in security networks that rely on the cooperation of all security partners—in this case TSA, airport operators, and air carriers—strategies can provide a basis for communication and mutual understanding between security partners that is fundamental for such integrated protective programs and activities.
In addition, because of the mutually dependent roles that TSA and its security partners have in airport security operations, TSA’s ability to achieve results depends on the ability of all security partners to operate under common procedures and achieve shared security goals. Finally, officials from two prominent industry organizations that represent the majority of the nation’s airport operators said that the industry would significantly benefit from a TSA-led strategy that identified long-term goals for airport perimeter and access control security. In addition to providing a unifying framework, a strategy that clearly identifies milestones, developed in cooperation with industry security partners, could make it easier for airport operators to plan, fund, and implement security enhancements that according to industry officials can require intensive capital improvements. While TSA has taken steps to assess threat and vulnerability related to airport security and developed a variety of protective actions to mitigate risk, TSA has not developed a unifying strategy to guide the development, implementation, and assessment of these varied actions and those of its security partners. TSA officials cited three reasons why the agency has not developed a strategy to guide national efforts to enhance airport security. First, TSA officials cited a lack of congressional emphasis on airport perimeter and access control security relative to other high-risk areas, such as passenger and baggage screening. Second, these officials noted that airport operators, not TSA, have operational responsibility for airport security. Third, they cited a lack of resources and funding. While these issues may present challenges, they should be considered in light of other factors. First, Congress has long recognized the importance of airport security, and has contributed to the establishment of a variety of requirements pertaining to this issue. For example, the appropriations committees, through reports accompanying DHS’s annual appropriations acts, have directed TSA to focus its efforts on enhancing several aspects of airport perimeter and access control security. Moreover, developing a strategy that clearly articulates the risk to airport security and demonstrates how those risks can be addressed through protective actions could help inform decision making. Second, though we recognize that airport operators, not TSA, generally have operational responsibility for airport perimeter and access control security, TSA—as the regulatory authority for airport security and the designated lead agency for transportation security—is responsible for identifying, prioritizing, and coordinating protection efforts within aviation, including those related to airport security. TSA currently exercises this authority by ensuring compliance with TSA-approved airport operator security programs and, pursuant to them, by issuing and ensuring compliance with requirements imposed through security directives or other means. Finally, regarding resource and funding constraints, federal guidelines for strategies and planning include linking program activities and anticipated outcomes with expected program costs. In this regard, a strategy could strengthen decision making to help allocate limited resources to mitigate risk, which is a cornerstone of homeland security policy. 
Additionally, DHS’s risk management approach recognizes that resources are to be focused on the greatest risks, and on protective activities designed to achieve the biggest reduction in those risks given the limited resources at hand. The NIPP risk management framework provides guidance for agencies to develop strategies and prioritize activities to those ends. A strategy helps to link individual programs to specific performance goals and describe how the programs will contribute to the achievement of those goals. A national strategy could help TSA, airport operators, and industry stakeholders in aligning their activities, processes, and resources to support mission-related outcomes for airport perimeter and access control security, and, as a result, in determining whether their efforts are effective in meeting their goals for airport security. Our previous work has identified that an essential characteristic of effective strategies is the setting of goals, priorities, and performance measures. This characteristic addresses what a strategy is trying to achieve and the steps needed to achieve and measure those results. A strategy can provide a description of an ideal overall outcome, or “end- state,” and link individual programs and activities to specific performance goals, describing how they will contribute to the achievement of the end- state. The prioritization of programs and activities, and the identification of milestones and performance measures, can aid implementing parties in achieving results according to specific time frames, as well as enable effective oversight and accountability. The NIPP also calls for the development of goals, priorities, and performance measures to guide DHS components, including TSA, in achieving a desired end-state. Security goals allow stakeholders to identify the desired outcomes that a security program intends to achieve and that all security partners are to work to attain. Defining goals and desired outcomes, in turn, enables stakeholders to better guide their decision making to develop protective security programs and activities that mitigate risks. The NIPP also states that security goals should be used in the development of specific protective programs and considered for distinct assets and systems. However, according to TSA officials, the agency has not developed goals and objectives for airport security, including specific targets or measures related to the effectiveness of security programs and activities. TSA officials told us that the agency sets goals for aviation security as a whole but has not set goals and objectives for the airport perimeter and access control security area. Developing a baseline set of security goals and objectives that consider, if not reflect, the airport perimeter and access control security environment would help provide TSA and its security partners with the fundamental tools needed to define outcomes for airport perimeter and access control security. Furthermore, a defined outcome that all security partners can work toward will better position TSA to provide reasonable assurance that it is taking the most appropriate steps for ensuring airport security. Our past work has also shown that the identification of program priorities in a strategy aids implementing parties in achieving results, which enables more effective oversight and accountability. 
Although TSA has implemented protective programs and activities that address risks to airport security, according to TSA officials it has not prioritized these activities, nor has it yet aligned them with specific goals and objectives. TSA officials told us that in keeping with legislative mandates, they have focused agency resources on aviation security programs and activities that were of higher priority, such as passenger and baggage screening and air cargo security. Identifying priorities related to airport perimeter and access control security could assist TSA in achieving results within specified time frames and limited resources because it would allow the agency to concentrate on areas of greatest importance.

In addition to our past work on national strategies, the NIPP and other federal guidance require agencies to assess whether their efforts are effective in achieving key security goals and objectives so as to help drive future investment and resource decisions and adapt and adjust protective efforts as risks change. Decision makers use performance measurement information, including activity outputs and descriptive information regarding program operations, to identify problems or weaknesses in individual programs, identify the factors causing them, and modify services or processes to address them. Decision makers can also use performance information collectively and, according to the NIPP, examine a variety of data to form a holistic picture of the health and effectiveness of a security approach from which to make security improvements. If significant limitations on performance measures exist, the strategy might address plans to obtain better data or measurements, such as national standards or indicators of preparedness.

TSA officials told us that TSA has not fully assessed the effectiveness of its protective activities for airport perimeters and secured areas, but they said that the agency has taken some steps to collect performance data for certain airport security programs and activities to help inform programmatic decision making. For example, TSA officials told us that they require protective programs, such as ADASP and VIPR, to report certain output data and descriptive program information, which officials use to inform administrative or programmatic decisions. For ADASP, TSA requires FSDs to collect information on, among other things, the number of workers screened, vehicles inspected, and prohibited items surrendered. TSA officials said that they use these descriptive and output data to inform programmatic decisions, such as determining the number of staff days needed to support ADASP operations nationwide. However, TSA was not able to provide documentation on how such analysis has been conducted. For VIPR, officials said that they require team members to complete after-action reports that include data on the number of participants, locations, and types of activities conducted. TSA officials said that they are analyzing and categorizing this descriptive and output information to determine trends and identify areas of success and failure, which they will use to improve future operations, though they did not provide us with examples of how they have done this. TSA officials also told us that they require SPOT to report descriptive operations data and situational report information, which are to be used to assign necessary duties and correct problems with program implementation.
However, TSA officials could not tell us how they use these descriptive and output data to inform program development and administrative decisions. While the use of descriptive and output data to inform program development and administration is both appropriate and valuable, leading management practices emphasize that successful performance measurement focuses on assessing the results of individual programs and activities.

TSA officials also told us that while they recognize the importance of assessing the effectiveness of airport security programs and activities in reducing known threats, it is difficult to do so because the primary purpose of these activities is deterrence. Assessing the deterrent benefits of a program is inherently challenging because it involves determining what would have happened in the absence of an intervention, or protective action, and it is often difficult to isolate the impact of an individual program on behavior that may be affected by multiple other factors. Because of this difficulty, officials told us that they have instead focused their efforts on assessing the extent to which each airport security activity supports TSA’s overall layered approach to security.

We recognize that assessing the effectiveness of deterrence-related activities is challenging and that it continues to be the focus of ongoing analytic effort and policy review. For example, a January 2007 report by the Department of Transportation addressed issues related to measuring deterrence in the maritime sector, and a February 2007 report by the RAND Corporation acknowledged the challenges associated with measuring the benefits of security programs aimed at reducing terrorist risk.

However, as a feature of TSA’s layered security approach, many of its airport activities address other aspects of security in addition to deterrence. Like other homeland security efforts, TSA’s airport security activities also seek to limit the potential for attack, safeguard critical infrastructure and property, identify wrongdoing, and ensure an effective and efficient response in the event of an attack; the desired outcome of its efforts is to reduce the risk of an attack. Deterrence is an inherent benefit of any protective action, and methods designed to detect wrongdoing and measures taken to safeguard critical infrastructure and property, for example, also help deter terrorist attacks. TSA has implemented a number of activities that seek to reduce this risk, such as requiring security threat assessments for all airport workers. Some of these activities serve principally to deter, such as ADASP, while others are more focused on safeguarding critical infrastructure and property, such as conducting inspections for compliance with aviation security regulations or installing perimeter fencing. Some activities serve multiple purposes, such as VIPR, which seeks to provide a visual deterrent to terrorist or other criminal activity but also seeks to safeguard critical infrastructure in various modes of transportation. Examining the extent to which its activities have effectively addressed these various purposes would enable TSA to more efficiently implement and manage its programs. There are several methods available that TSA could explore to gain insight on the extent to which its security activities have met their desired purpose and to ultimately improve program performance.
For example, TSA could work with stakeholders, such as airport operators and other security partners, to identify and share lessons learned and best practices across airports to better tailor its efforts and resources and continuously improve security. TSA could also use information gathered through covert testing or compliance inspections—such as noncompliance or security breaches—to make adjustments to specific security activities and to identify which aspects require additional investigation. In addition, TSA could develop proxy measures—indirect measures or signs that approximate or represent the direct measure—to show how security efforts correlate to an improved security outcome. Appendix VII provides a complete discussion of these methods, as well as information on other alternatives TSA could explore.

Our prior work shows that effective strategies address costs, resources, and resource allocation issues. Specifically, effective strategies address the costs of implementing the individual components of the strategy, the sources and types of resources needed (such as human capital or research and development), and where those resources should be targeted to better balance risk reductions with costs. Effective strategies may also address in greater detail how risk management will aid implementing parties in prioritizing and allocating resources based on expected benefits and costs. Our prior work found that strategies that provide guidance on costs and needed resources help implementing parties better allocate resources according to priorities, track costs and performance, and shift resources as appropriate. Statutory requirements and federal cost accounting standards also stress the benefits of developing and reporting on the cost of federal programs and activities, as well as using that information to more effectively allocate resources and inform program management decisions.

TSA has identified the costs and resources it needs for some specific activities and programs that exclusively support airport security, such as JVAs of selected commercial airports. However, for programs that serve airport security as well as other aspects of aviation security, TSA has not identified the costs and resources devoted to airport security. For example, TSA has identified its expenditures for compliance inspections and other airport security–related programs and activities, which collectively totaled nearly $850 million from fiscal years 2004 through 2008. However, TSA has not identified what portion of these funds was directly allocated for airport security activities versus other aviation security activities, such as passenger screening. (For a more detailed discussion of airport security costs, see app. IV.) Further, TSA has not fully identified the resources it needs to mitigate risks to airport perimeter and access control security. According to TSA officials, identifying collective agency costs and resource needs for airport security activities is challenging because airport security is not a separately funded TSA program, and many airport security activities are part of broader security programs. However, without identifying its total costs for airport security, TSA will have difficulty determining the costs associated with individual security activities and, in turn, the resources it needs to sustain desired activity levels and realize targeted results.
While TSA officials told us that they are starting to identify costs for airport security activities and plan to complete this effort by the end of 2009, they could provide no additional information to illustrate their approach for doing so. As a result, it is unclear what costs the agency will identify and to what extent TSA will be able to identify costs for specific security activities in order to determine the resources it needs to sustain desired activity levels and realize targeted results.

TSA officials also told us that they have not yet identified or estimated costs to the aviation industry for implementing airport security requirements, such as background checks for their workers, or capital costs—such as construction and equipment—that airport operators incur to enhance the security of their facilities. According to these officials, the agency does not have the resources and funds to collect cost information from airport operators. However, TSA officials could not tell us how and to what extent they had assessed the resources and funds needed to collect this information or whether they had explored other options for collecting cost data, such as working with industry associations to survey airport operators. Estimating the types and levels of resources needed to achieve desired outcomes would provide TSA and other stakeholders with valuable information with which to make informed resource and investment decisions, including decisions about future allocation needs, to mitigate risks to airport security.

According to our previous work on effective national strategies, as well as NIPP guidance, risk management focuses security efforts on those activities that bring about the greatest reduction in risk given the resources used. According to federal guidance, employing systematic cost-benefit analysis helps ensure that agencies choose the security priorities that most efficiently and effectively mitigate risk for the resources available. The Office of Management and Budget (OMB) cites cost-benefit analysis as one of the key principles to be considered when an agency allocates resources for capital expenditures because it provides decision makers with a clear indication of the most efficient alternative. DHS’s Cost-Benefit Analysis Guidebook also states that cost-benefit analysis identifies the superior financial solution among competing alternatives, and that it is a proven management tool to support planning and managing costs and risks.

While TSA has made efforts to consider costs for some airport security programs, it has not used cost-benefit analysis to allocate or prioritize resources toward the most cost-effective alternative actions for mitigating risk. According to TSA officials, certain factors have limited TSA’s ability to conduct cost-benefit analysis, such as resource constraints and the need to take immediate action to address new and emerging security threats. However, officials could not demonstrate that they had attempted to conduct cost-benefit analysis for programs and activities related to airport security within the constraints of current resources, or explain how, or to what extent, they had assessed the resources that would be needed to conduct cost-benefit analysis. Further, TSA officials could not cite a situation in which the need to take immediate action—outside of issuing security directives—in response to a threat prevented them from conducting cost-benefit analysis.
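To illustrate the kind of systematic comparison that OMB and DHS guidance describe, the sketch below ranks alternative security investments by estimated net benefit. It is a minimal illustration, not TSA's methodology: the program names and dollar figures are hypothetical, and a real analysis would also have to estimate the risk-reduction values, which is the step TSA identifies as difficult for deterrence-based activities.

# Minimal cost-benefit comparison sketch. All names and figures are
# hypothetical and are not drawn from TSA data or analyses.
from dataclasses import dataclass

@dataclass
class Alternative:
    name: str
    annual_cost: float       # dollars per year to implement and operate
    risk_reduction: float    # estimated reduction in annual expected loss, dollars

    @property
    def net_benefit(self) -> float:
        # Net benefit = estimated risk reduction minus implementation cost.
        return self.risk_reduction - self.annual_cost

alternatives = [
    Alternative("Random worker screening", annual_cost=12e6, risk_reduction=30e6),
    Alternative("100 percent worker screening", annual_cost=85e6, risk_reduction=40e6),
    Alternative("Perimeter intrusion detection", annual_cost=8e6, risk_reduction=15e6),
]

# Rank alternatives from highest to lowest estimated net benefit.
for alt in sorted(alternatives, key=lambda a: a.net_benefit, reverse=True):
    print(f"{alt.name}: net benefit ${alt.net_benefit / 1e6:.0f} million per year")

Because the assumed risk-reduction values drive the ranking, an analysis of this kind would ordinarily test how sensitive the results are to those assumptions and, where quantification is not possible, fall back on the qualitative assessment that OMB guidance permits.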
TSA officials agreed that conducting cost-benefit analysis is beneficial, but also said that it is not always practical because of the difficulty in quantifying the benefits of deterrence-based activities. Because of this challenge, officials said that they have used professional judgment, past experience, law enforcement principles, and intelligence information to evaluate alternative airport security activities to mitigate risks. While TSA’s approach to identifying security actions includes accepted risk reduction decision-making tools, such as professional judgment, it does not provide a means to fully weigh the benefits of implementing alternative actions against their costs.

However, despite the challenges TSA cited in conducting cost-benefit analysis, TSA officials told us that as of January 2009, the agency was in the early stages of investigating costs and benefits related to airport perimeter and access control. According to these officials, TSA plans to initially focus on developing cost estimates associated with improving access control, a process the agency expects to complete by the end of 2009. However, because TSA officials did not explain how they expect to identify and estimate these costs and how, in the future, they plan to identify and estimate benefits for alternative actions, especially those actions that focus on deterrence, it is not yet clear to what extent TSA’s efforts will constitute cost-benefit analysis.

The use of systematic cost-benefit analysis when considering future airport security measures would help TSA to choose the most cost-effective security options for mitigating risk. We recognize the difficulties in quantifying the benefits of deterrence-based activities, but there are alternatives that TSA could pursue to assess benefits, such as examining the extent to which its activities address other purposes besides deterrence. Moreover, OMB recognizes that in some circumstances—such as when data are insufficient—costs and benefits cannot be quantified, in which case costs and benefits are to be assessed in qualitative terms. By exploring ways to identify the expected costs of alternatives and balancing these with estimated security benefits, TSA can more fully ensure that it is efficiently allocating and prioritizing its limited resources, as well as those of individual airports, in a way that maximizes the effectiveness of its airport security efforts.

Our prior work shows that effective national strategies address how to coordinate efforts and resolve conflicts among stakeholders, address ways in which each strategy relates to the goals of other strategies, and devise plans for implementing the strategies. Because the responsibility for airport perimeter and access control security involves multiple stakeholders, including federal entities, individual airport operators, air carriers, and industry organizations, coordination among stakeholders is critical. In such an environment, the implementation of security activities is strengthened when a strategy addresses how federal efforts will coordinate and integrate with other federal and private sector initiatives, relate to the goals and objectives of other strategies and plans, and be implemented and coordinated by relevant parties.
Representatives from industry associations told us that while TSA has collaborated with industry stakeholders on the development of multiple airport security activities and initiatives, the agency has not always fully coordinated the development and implementation of specific security activities and initiatives. For example, although TSA has worked with the industry in the development of some aspects of airport security technology, such as biometrics, industry association officials told us that the agency has not yet recommended specific technology based on the results of technology-based pilot programs it completed over 2 years ago, in 2007. These officials also noted that TSA did not fully coordinate with the industry in its decision to impose stronger requirements on worker credentialing practices in the wake of security incidents at individual airports. TSA officials said that they have worked closely with industry stakeholders in addressing airport security issues, and have established working groups to continue to coordinate on issues such as biometric access control security. Our prior work found that a strategy should provide both direction and guidance to government and private entities so that missions and contributions can be more appropriately coordinated.

TSA has not demonstrated how its airport security activities relate to the goals, objectives, and activities of its other aviation security strategies, such as those for passenger screening, air cargo screening, and baggage screening. In addition, TSA has not identified how these various security areas are coordinated at the national level. For example, TSA officials told us that some security efforts, such as the random worker screening program and roving security response teams, are used to address multiple security needs, such as both passenger and worker screening, but could not identify the extent to which program resources are planned for and allocated among competing security needs. TSA officials said that decisions to allocate random worker screening resources between passenger and worker screening are made at the local airport level by FSDs. However, a clear understanding of how TSA’s needs and goals for airport security align with those of its other security responsibilities would enable the agency to better coordinate its programs, gauge the effectiveness of its actions, and allocate resources to its highest-priority needs.

Finally, it is not clear to what extent TSA has coordinated airport security activities within the agency, the responsibilities for which are spread among multiple offices. TSA officials explained that agency efforts to enhance and oversee airport perimeter and access control security are spread across multiple programs within five TSA component offices. No one office or program has responsibility for coordinating and integrating actions that affect the numerous aspects of perimeter and access control security, including operations, technology, intelligence, program policy, credentialing, and threat assessments. TSA officials agreed that the diffusion of responsibilities across offices can present coordination challenges. Developing an overarching, integrated framework for coordinating actions among implementing parties could better position TSA to avoid unnecessary duplication, overlap, and conflict in the implementation of these actions.
According to our past work, strategies that provide guidance to clarify and link the roles, responsibilities, and capabilities of the implementing parties can foster more effective implementation and accountability.

Commercial airports facilitate the movement of millions of passengers and tons of goods each week and are an essential link in the nation’s transportation network. Given TSA’s position that the interconnected commercial airport network is only as strong as its weakest asset, determining vulnerability across this network is fundamental to determining the actions and resources that are necessary to reasonably protect it. Evaluating whether the existing vulnerability assessments, which cover only selected airports, reflect the network of airports will help TSA ensure that its actions strengthen the whole airport system. If TSA finds that additional assessments are needed to identify the extent of vulnerabilities nationwide, then developing a plan with milestones for conducting those assessments, and leveraging existing available assessment information from stakeholders, would help ensure that these assessments are completed and that intended results are achieved.

In addition, although the consequences of a successful terrorist breach in airport security have not been assessed, based on past events, the potential impact on U.S. assets, safety, and public morale could be profound. For this reason, assessing the likely consequences of an attack is an essential step in assessing risks to the nation’s airports. Further, a comprehensive risk assessment that combines threat, vulnerability, and consequence would help TSA determine which risks should be addressed—and to what degree—and would help guide the agency in identifying the necessary resources for addressing these risks. Moreover, documenting milestones for completing the risk assessment would help ensure its timely completion.

Implementing and evaluating a pilot program can be challenging, especially given the individual characteristics of the sites involved in the worker screening pilot, such as the variation in airport size, traffic flows, and layouts. However, a well-developed and documented evaluation plan, with well-defined and measurable objectives and standards as well as a clearly articulated methodology and data analysis plan, can help ensure that a pilot program is implemented and evaluated in ways that generate reliable information to inform future program development decisions. By making such a plan a cornerstone of future pilot programs, TSA will be better able to ensure that the results of those pilot programs will produce the reliable data necessary for making the best program and policy decisions.

Integrating biometric technology into existing airport access control systems will not be easy given the range of technologies available, the number of stakeholders involved, and potential differences in the biometric controls already in use at airports. Yet Congress, the administration, and the aviation industry have emphasized the need to move forward in implementing such technology to better control access to sensitive airport areas. Until TSA decides whether, when, and how it will mandate biometric access controls at airports, individual airport operators will likely continue to delay investing in potentially costly technology in case it does not comply with future federal standards.
Establishing milestones for addressing requirements would not only provide airports with the necessary information to appropriately plan future security upgrades, but also give all stakeholders a road map by which they can anticipate future developments.

TSA uses security directives as a means for establishing additional security measures in response to general or specific threats against the civil aviation system, including the security of airport perimeters and the controls that limit access to secured airport areas. Just as it is important that federal agencies have flexible mechanisms for responding to the adaptive, dynamic nature of the terrorist threat, it is also important that requirements remain consistent with current threat information. Establishing milestones for periodically reviewing airport perimeter and access control requirements imposed through security directives would help provide TSA and stakeholders with reasonable assurance that TSA’s personnel will review these directives within a time frame authorized by management.

TSA, along with industry partners, has taken a variety of steps to implement protective measures to strengthen airport security, and many of these efforts have required numerous stakeholders to implement a range of activities to achieve desired results. These various actions, however, have not been fully integrated and unified toward achieving common outcomes and effectively using resources. A national risk-informed strategy—one that establishes measurable goals, priorities, and performance measures; identifies needed resources; and is aligned and integrated with related security efforts—would help guide decision making and hold all public and private security partners accountable for achieving key shared outcomes within available resources. Moreover, a strategy that identifies these key elements would allow TSA to better articulate its needs—and the challenge of meeting those needs—to industry stakeholders and to Congress. Furthermore, balancing estimated costs against expected security benefits, and developing measures to assess the effectiveness of security activities, would help TSA provide reasonable assurance that it is properly allocating and prioritizing its limited resources, as well as those of airports, in a way that maximizes the effectiveness of its airport security efforts.

To help ensure that TSA’s actions in enhancing airport security are guided by a systematic risk management approach that appropriately assesses risk and evaluates alternatives, and that it takes a more strategic role in ensuring that government and stakeholder actions and resources are effectively and efficiently applied across the nationwide network of airports, we recommend that the Assistant Secretary of TSA work with aviation stakeholders to implement the following five actions:

Develop a comprehensive risk assessment for airport perimeter and access control security, along with milestones (i.e., time frames) for completing the assessment, that (1) uses existing threat and vulnerability assessment activities, (2) includes consequence analysis, and (3) integrates all three elements of risk—threat, vulnerability, and consequence. As part of this effort, evaluate whether the current approach to conducting JVAs appropriately and reasonably assesses systems vulnerabilities, and whether an assessment of security vulnerabilities at airports nationwide should be conducted.
If the evaluation demonstrates that a nationwide assessment should be conducted, develop a plan that includes milestones for completing the nationwide assessment. As part of this effort, leverage existing assessment information from industry stakeholders, to the extent feasible and appropriate, to inform its assessment.

Ensure that future airport security pilot program evaluation and implementation efforts include a well-developed and well-documented evaluation plan that includes criteria or standards for determining program performance, a clearly articulated methodology, a detailed data collection plan, and a detailed data analysis plan.

Develop milestones for meeting statutory requirements, in consultation with appropriate aviation industry stakeholders, for establishing system requirements and performance standards for the use of biometric airport access control systems.

Develop milestones for establishing agency procedures for reviewing airport perimeter and access control requirements imposed through security directives.

To better ensure a unified approach among airport security stakeholders for developing, implementing, and assessing actions for securing airport perimeters and access to controlled areas, develop a national strategy for airport security that incorporates key characteristics of effective security strategies, including the following:

Measurable goals, priorities, and performance measures. TSA should also consider using information from other methods, such as covert testing and proxy measures, to gauge progress toward achieving goals.

Program cost information and the sources and types of resources needed. TSA should also identify where those resources would be most effectively applied by exploring ways to develop and implement cost-benefit analysis to identify the most cost-effective alternatives for reducing risk.

Plans for coordinating activities among stakeholders, integrating airport security goals and activities with those of other aviation security priorities, and implementing security activities within the agency.

We provided a draft of our report to DHS and TSA on August 3, 2009, for review and comment. On September 24, 2009, DHS provided written comments, which are reprinted in appendix VIII. In commenting on our report, DHS stated that it concurred with all five recommendations and identified actions planned or under way to implement them.

In its comments on our draft report, DHS stated that the Highlights page of our report includes a statement that is inaccurate. We disagree. Specifically, DHS contends that it is not accurate to state that TSA “has not conducted vulnerability assessments for 87 percent of the nation’s 450 commercial airports” because this statement does not recognize that TSA uses other activities to assess airport vulnerabilities, and that these activities are conducted for every commercial airport. For example, DHS stated that (1) every commercial airport must have a TSA-approved ASP, which is to cover personnel, physical, and operational security measures; (2) each ASP is reviewed on a regular basis by an FSD; and (3) such FSD reviews “include a review of security measures applied at the perimeter.” As we noted in our report, TSA identified JVAs, along with professional judgment, as the agency’s primary mechanism for assessing airport security vulnerabilities in accordance with NIPP requirements.
Moreover, it is not clear to what extent the FSD reviews and other activities TSA cites in its comments address airport perimeter and access control vulnerabilities or to what extent such reviews have been applied consistently on a nationwide basis, since TSA has not provided us with any documentary evidence regarding these or other reviews. Finally, in meeting with TSA, its officials acknowledged that because they have not conducted a joint vulnerability assessment for 87 percent of commercial airports, they do not know how vulnerable these airports are to an intentional breach in security or an attack. Thus, we consider the statement on our Highlights page to be accurate.

TSA also stated that “as provided in our draft report” the foundation of TSA’s national strategy is its individual layers—or actions—of security, which, when combined, generate an exponential increase in deterrence and detection capability. However, we did not evaluate TSA’s layered approach to security or the extent to which this approach provides increased deterrence and detection capabilities.

Regarding our first recommendation that TSA develop a comprehensive risk assessment for airport perimeter and access control security, DHS stated that TSA will develop such an assessment through its ongoing efforts to conduct a comprehensive risk assessment for the transportation sector. TSA intends to provide the results of the assessment to Congress by January 2010. According to DHS, the aviation domain portion of the sector risk assessment is to address, at the national level, nine airport perimeter and access control security scenarios. It also stated that the assessment is to integrate all three elements of risk—threat, vulnerability, and consequence—and will rely on existing assessment activities, including JVAs. In developing this assessment, it will be important that TSA evaluate whether its current approach to conducting JVAs, which it identifies as one element of its risk assessment efforts, appropriately assesses vulnerabilities across the commercial airport system, and whether additional steps are needed. Since TSA has repeatedly stated the need to develop baseline data on airport security vulnerabilities to enable it to conduct systematic analysis of vulnerabilities on a nationwide basis, TSA could also benefit from exploring the feasibility of leveraging existing assessment information from industry stakeholders to inform this assessment.

DHS also agreed with our second recommendation that a well-developed and well-documented evaluation plan should be part of TSA’s efforts to evaluate and implement future airport security pilot programs. In addition, DHS concurred with our third recommendation that TSA develop milestones for meeting statutory requirements for establishing system requirements and performance standards for the use of biometric airport access control systems. DHS noted that while mandatory use of such systems is not required by statute, TSA is still considering whether it will mandate the use of biometric access control systems at airports, and in the meantime it will continue to encourage airport operators to voluntarily utilize biometrics in their access control systems. We agree that mandatory use of biometric access control systems is not required by statute, but establishing milestones would help guide TSA’s continued work with the airport industry to develop and refine existing biometric access control standards.
In regard to our fourth recommendation that TSA develop milestones for establishing agency procedures for reviewing airport security requirements imposed through security directives, DHS concurred that milestones are necessary.

Finally, in regard to our fifth recommendation that TSA develop a national strategy for airport security that incorporates key characteristics of effective security strategies, DHS concurred and stated that TSA will develop a national strategy by updating the TS-SSP. DHS stated that TSA intends to solicit input on the plan from its Sector Coordinating Council, which represents key private sector stakeholders from the transportation sector, before releasing the updated TS-SSP in the summer of 2010. However, given that the TS-SSP is to focus on detailing how the NIPP framework will apply to the entire transportation sector, it may not be the most appropriate vehicle for developing a national strategy that addresses the various management issues specific to airport security that we identified in our report. A more effective approach might be to issue the strategy as a stand-alone plan, in keeping with the format TSA has used for its air cargo, passenger checkpoint screening, and SPOT strategies. A stand-alone strategy might better facilitate key stakeholder involvement, focus attention on airport security needs, and allow TSA to more thoroughly address relevant challenges and goals.

Irrespective of the format, however, it will be important that TSA fully address the key characteristics of an effective strategy, as identified in our report. The intent of a national strategy is to provide a unifying framework that guides and integrates stakeholder activities toward desired results, which may be best achieved when planned efforts are clear, sustainable, and transparent enough to ensure accountability. Thus, it is important that the strategy fully incorporate the following characteristics: (1) measurable goals, priorities, and performance measures; (2) program cost information, including the sources and types of resources needed; and (3) plans for coordinating activities among stakeholders, integrating airport security goals and activities with those of other aviation security priorities, and implementing security activities within the agency.

TSA also provided us with technical comments, which we considered and incorporated in the report where appropriate.

We are sending copies of this report to the Secretary of Homeland Security, the Secretary of Transportation, the Assistant Secretary of the Transportation Security Administration, appropriate congressional committees, and other interested parties. The report also is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report or wish to discuss these matters further, please contact me at (202) 512-4379 or lords@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IX.

This report evaluates to what extent the Transportation Security Administration (TSA) has assessed the risk to airport security consistent with the National Infrastructure Protection Plan’s (NIPP) risk management framework; implemented protective programs to strengthen airport security, and evaluated its worker screening pilot program; and established a national strategy to guide airport security decision making.
To evaluate the extent to which TSA has assessed risks for airport perimeter and access control security efforts, we relied on TSA to identify risk assessment activities for these areas, and we then examined documentation for these activities, such as TSA’s 2008 Civil Aviation Threat Assessment, and interviewed TSA officials responsible for conducting assessment efforts. We examined the extent to which TSA generally conducted activities intended to assess threats, vulnerabilities, and consequences to the nation’s approximately 450 commercial airports. We also reviewed the extent to which TSA’s use of these three types of assessments met the NIPP criteria for completing a comprehensive risk assessment. However, while we assessed the extent to which the individual threat and vulnerability assessment activities that TSA identified addressed the area of airport perimeter and access controls, the scope of our work did not include individual evaluations of these activities to determine whether they were consistent with the NIPP criteria for conducting threat and vulnerability assessments.

In addition, we reviewed and summarized critical infrastructure and aviation security requirements set out by Homeland Security Presidential Directives 7 and 16, the Aviation and Transportation Security Act (ATSA), and other statutes and related materials. We also examined the individual threat and vulnerability assessment activities, and discussed them with senior TSA and program officials, to evaluate how TSA uses this information to set goals and inform its decision making. We compared this information with the NIPP, TSA’s Transportation Security Sector-Specific Plan, and our past guidance and reports on recommended risk management practices.

In addition, we obtained and analyzed data from TSA regarding joint vulnerability assessments, which are conducted with the Federal Bureau of Investigation (FBI), to determine the extent to which TSA has used this information to assess risk to airport perimeter and access control security. We also obtained information on the processes used to schedule and track these activities to determine the reliability with which these data were collected and managed, and we determined that the data were sufficiently reliable for the purposes of this report. We interviewed TSA and FBI officials responsible for conducting joint vulnerability assessments to discuss the number conducted by TSA since 2004, the scope of these assessments, and how they are conducted. In addition, we interviewed selected TSA officials responsible for risk management and security programs related to airport perimeter and access control to clarify the extent to which TSA has assessed risk in these areas. We selected these officials based upon their relevant expertise with TSA’s risk management efforts and its airport perimeter and access control efforts.

We also analyzed TSA data on security breaches by calculating the total number of security breaches from fiscal years 2004 through 2008. To determine whether the data were sufficiently reliable to present contextual information regarding all breaches of secured areas (including airport perimeters), we obtained information on the processes used to collect, tabulate, and assess these data and discussed data quality control procedures with appropriate officials; we found that the data were sufficiently reliable for this purpose.
Because the data include security breaches that occurred within any type of secured area, including passenger-related breaches, they are not specific to perimeter and access control security. In addition, the data have not been adjusted to reflect potential issues that could also influence or skew the number of overall breaches, such as annual increases in the number of passengers or specific incidents occurring within individual airports that account for more breaches than others. Furthermore, because TSA does not require its inspectors to enter a description of the breach when documenting an incident, and general reports on breach data do not show much variation among incidents unless a report includes a description of the breach, we did not ask TSA for descriptive information on breaches that occurred.

To evaluate the extent to which TSA has implemented protective programs to strengthen airport security consistent with the NIPP risk management framework, we asked TSA to identify agency-led activities and programs for strengthening airport security. For the purposes of this report, we categorized TSA’s responses into four main areas of effort: (1) worker screening pilot program, (2) worker security programs, (3) technology, and (4) general airport security.

To determine the extent to which TSA evaluated its worker screening pilot program, we analyzed TSA’s final report on its worker screening pilot program, including conclusions and limitations cited by the contractor—the Homeland Security Institute (HSI)—that TSA hired to assist with the pilot’s design, implementation, and evaluation. We also reviewed standards for internal control in the federal government and our previous work on pilot program development and evaluation to identify accepted practices for ensuring reliable results, including key features of a sound evaluation plan. Further, we analyzed TSA and HSI’s documentation of the worker screening pilot program methodology to determine whether TSA and HSI had documented their plans for conducting the program, whether each pilot was carried out in a consistent manner, and whether participating airports were provided with written requirements or guidance for conducting the pilots.

To evaluate TSA’s efforts for its worker security programs, we assessed and summarized relevant program information, operations directives, and standard operating procedures for the Aviation Direct Access Screening Program (ADASP) and enhanced background checks. We also informed this assessment with recent work by the Department of Homeland Security’s (DHS) Office of Inspector General (OIG) regarding worker screening. We reviewed the DHS OIG’s methodology and analysis to determine whether its findings were reliable for use in our report. We analyzed TSA’s documentation of its background checks to determine whether TSA sufficiently addressed relevant ATSA requirements and recommendations from our 2004 report on airport security. We also interviewed TSA officials responsible for worker background checks to determine the agency’s efforts to develop a plan to meet outstanding ATSA requirements.

With respect to perimeter and access control technology, we reviewed and summarized TSA documentation and evaluations of the Airport Access Control Pilot Program (AACPP), documentation related to the Airport Perimeter Security (APS) pilot program, and the dissemination of information regarding technology to airports.
We interviewed officials with the DHS Directorate for Science and Technology, the National Safe Skies Alliance, and RTCA, Inc., regarding research, development, and testing efforts, as well as the challenges and potential limitations of technologies applicable to airport perimeter and access control security. We selected these entities because of their role in the development of such technology. We also interviewed TSA headquarters officials to obtain views on the nature and scope of technology-related efforts and other relevant considerations, such as how they addressed relevant ATSA requirements and recommendations from our 2004 report, or how they plan to do so.

With regard to TSA’s efforts for general airport security, we examined TSA’s procedures for developing and issuing airport perimeter and access control requirements through security directives and other methods, and analyzed the extent to which TSA disseminated security requirements to airports through security directives. At our request, TSA identified 25 security directives and emergency amendments that imposed requirements related to airport perimeter and access control security, which we examined to identify specific areas of regulation. In addition, we assessed and summarized relevant program information and documentation, such as operations directives, for other programs identified by TSA, such as the Visible Intermodal Prevention and Response (VIPR) program, the Screening of Passengers by Observation Techniques (SPOT) program, and the Law Enforcement Officer Reimbursement Program.

To evaluate the extent to which TSA established a national strategy to guide airport security decision making, we considered previously reported guidance on the characteristics of effective security strategies and planning, Government Performance and Results Act (GPRA) requirements, and generally accepted strategic planning practices for government agencies. To evaluate TSA’s approach to airport security, we reviewed TSA documents to identify major security goals and subordinate objectives for airport perimeter and access control security, and relevant priorities, goals, objectives, and performance measures. We also analyzed relevant program documentation, including budget, cost, and performance information, as well as relevant information TSA developed and maintains for the Office of Management and Budget’s Program Assessment Rating Tool. We compared TSA’s approach with criteria identified in the NIPP, other DHS guidance, GPRA, and other leading practices in strategies and planning. We also interviewed relevant TSA program and budget officials, Federal Aviation Administration (FAA) officials, and selected aviation industry officials regarding the cost of airport perimeter and access control security for fiscal years 2004 through 2008.

To determine the extent to which TSA collaborated with stakeholders on airport security activities, and to obtain their insights on airport security operations, costs, and regulation, we interviewed industry officials from the Airports Council International-North America—whose commercial airport members represent 95 percent of domestic airline passenger and air cargo traffic in North America—and from the American Association of Airport Executives—whose members represent 850 domestic airports. We selected these industry associations based on input from TSA and from industry stakeholders, who identified the two associations representing commercial airport operators.
We also attended aviation association conferences at which industry officials presented information on national aviation security policy and operations, and we conducted a group discussion with 17 officials representing various airport and aircraft operators and aviation associations to obtain their views regarding key issues affecting airport security. While the views expressed by these industry, airport, and aircraft operator officials cannot be generalized to all airport industry associations and operators, these interviews provided us with additional perspectives on airport security and an understanding of the extent to which TSA has worked and collaborated with airport stakeholders.

We also conducted site visits at nine U.S. commercial airports—Orange County John Wayne Airport, Washington-Dulles International Airport, Miami International Airport, Orlando International Airport, John F. Kennedy International Airport, Westchester County Airport, Logan International Airport, Barnstable Municipal Airport, and Salisbury/Wicomico County Regional Airport. During these visits we observed airport security operations and discussed issues related to perimeter and access control security with airport officials and on-site TSA officials, including federal security directors (FSDs). We selected these airports based on several factors, including airport category, size, and geographical dispersion; whether they faced problems with perimeter and access control security; and the types of technological initiatives tested or implemented. Because we selected a nonprobability sample of airports to visit, those results cannot be generalized to other U.S. commercial airports; however, the information gathered provides insight into TSA and airport programs and procedures.

In addition, at Miami International Airport and John F. Kennedy International Airport we conducted separate interviews with airport officials to discuss their ongoing, or anticipated, efforts to implement additional worker screening methods at their respective airports. We also conducted telephone interviews with airport officials and FSDs from four airports that had implemented, or planned to implement, various forms of 100 percent screening of airport workers to discuss their efforts. These were Cincinnati/Northern Kentucky International Airport, Dallas/Fort Worth International Airport, Denver International Airport, and Phoenix Sky Harbor International Airport. While the views of the officials we spoke with regarding additional worker screening methods cannot be generalized to all airport security officials, they provided insight into how airport security programs were chosen and developed.

We also conducted an additional site visit at Logan International Airport to observe TSA’s implementation of various worker screening methods as part of the agency’s worker screening pilot program. While the experiences of this pilot location cannot be generalized to all airports participating in the pilot, we chose this airport based on airport category and the variety of worker screening methods piloted at this location.

We conducted this performance audit from May 2007 through September 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives.
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

TSA has taken steps since 2004 to address some of the requirements related to airport perimeter and access control security prescribed by ATSA. The related ATSA requirements, and TSA’s actions as of May 2009 to address these requirements, are summarized in table 3.

TSA officials told us that they use the results of compliance inspections and covert testing to augment their assessment of potential vulnerabilities in airport security. Compliance inspections examine a regulated entity’s—such as an airport operator’s or air carrier’s—adherence to federal regulations, which TSA officials say they use to determine whether airports adequately address known threats and vulnerabilities. According to TSA, while regulatory compliance is just one dimension of airport security, compliance with federal requirements allows TSA to determine the general level of security within an airport. As a result, according to TSA, compliance with regulations suggests less vulnerability within an airport and, conversely, failure to meet critical compliance rates suggests the likelihood of a larger problem within an airport and helps the agency identify and assess vulnerabilities. TSA allows its inspectors to conduct compliance inspections based on observations of various activities, such as ADASP, VIPR, and local covert testing, and to conduct additional inspections based on vulnerabilities identified through assessments or the results of regular inspections.

Covert tests are tests of security systems, personnel, equipment, and procedures that are designed to obtain a snapshot of the effectiveness of a given security measure; they are used to improve airport performance, safety, and security. TSA officials stated that covert testing assists the agency in identifying airport vulnerabilities because such tests are designed, based on threat assessments and intelligence, to approximate techniques that terrorists may use to exploit gaps in airport security. TSA conducts four types of covert tests for airport access controls:

Access to security identification display areas (SIDA): TSA inspectors not wearing appropriate identification attempt to penetrate SIDA access points, such as boarding gates, employee doors, and other entrances.

Access to air operations areas (AOA): TSA inspectors not wearing appropriate identification attempt to penetrate AOAs via access points from public areas, such as perimeter gates and cargo areas.

Access to aircraft: TSA inspectors not wearing appropriate identification (or not carrying valid boarding passes) attempt to penetrate passenger access points that lead to aircraft from sterile areas, such as boarding gates, employee doors, and jetways.

SIDA challenges: Once inside a SIDA, TSA inspectors attempt to walk around these areas, such as the tarmac and baggage loading areas, without displaying appropriate identification.

TSA also requires FSDs to conduct similar, locally controlled tests of access controls to ensure compliance and identify possible vulnerabilities in airport security. These tests are selected by the FSDs based on locally identified risks and can include challenging procedures in the secure area, piggybacking (following authorized airport workers into secured areas), and attempting to access an aircraft from a sterile area.
According to TSA officials, the agency uses the results of its covert tests to inform decision making for airport security, but officials could not provide examples of how this information has specifically informed past decisions.

Various TSA offices and programs contribute to the overall operations and costs of airport perimeter and access control security. According to TSA officials, the agency does not develop a cost estimate specific to perimeter and access control security because such efforts are often part of broader security activities or related programs—for example, VIPR and SPOT are also used for passenger screening. As a result, it is difficult to identify what percentage of program costs has been expended on airport perimeter and access control security activities. At our request, TSA officials identified the estimated spending related to perimeter and access control security programs from fiscal years 2004 through 2008 (see table 4).

Airports can receive funding for purposes related to perimeter and access control security via grants awarded through FAA’s Airport Improvement Program. TSA officials also told us that the agency generally does not collect or track cost information for airport security efforts funded through the Airport Improvement Program. This program is one of the principal sources of funding for airport capital improvements in the United States, providing approximately $3 billion in grants annually to enhance airport capacity, safety, and environmental protection, as well as perimeter security. According to FAA officials, many factors are considered when awarding grants to airports for perimeter security enhancements, although security projects required by statute or regulation receive the highest priority. Projects that receive funding have included computerized access controls for ramps, infrastructure improvements to house central computers, surveillance systems, and perimeter fencing. According to FAA, more than $365 million in airport perimeter and access control–related grants were provided through the Airport Improvement Program for fiscal years 2004 through 2008.

TSA officials also told us that the agency does not track funds spent by individual airport operators to enhance or maintain perimeter and access control security. In 2009 the Airports Council International-North America—an aviation industry association—surveyed commercial airports regarding the funding needed for airport capital projects from 2009 to 2013. As part of this effort, the association surveyed airport operators on the amount of funds they planned to expend on airport security as a percentage of their overall budgets. The association reported that planned airport operator spending on airport security, as a percentage of total spending, ranged from 3.8 percent (about $2 billion) for large hub airports to 3.9 percent (about $230 million) for small hub airports. The association surveys did not include information on the types of security projects undertaken by airports. However, during our site visits we obtained data from selected airport operators on the costs of perimeter and access control security projects they had recently concluded or estimated costs for projects in progress.
Examples of airport spending on perimeter and access control security include $30 million to install a full biometric access system; $6.5 million to install an over 8,000-foot-long blast/crash-resistant wall along the airport perimeter; $8 million to install over 680 bollards in front of passenger terminals and vehicle access points; and $3 million to develop and install an infrared intrusion detection system. From May through July 2008, TSA implemented worker screening pilots at seven airports in accordance with the Explanatory Statement accompanying the DHS Appropriations Act, 2008 (see table 5 for a summary of the text directing the worker screening pilot program). At three airports, TSA conducted 100 percent worker screening—inspections of all airport workers and vehicles entering secure areas; at four others, TSA randomly screened 20 percent of workers and tested other enhanced security measures. Screening of airport workers was to be done at either the airport perimeter or the passenger screening checkpoints. TSA was directed to collect data on the methods it utilized and to evaluate the benefits, costs, and impacts of 100 percent worker screening to determine the most effective and cost-efficient method of addressing and deterring potential security risks posed by airport workers. The enhanced measures that TSA tested at the four airports not implementing 100 percent screening are summarized below:

Employee training: TSA provided a security awareness training video, which all SIDA badgeholders were required to complete. According to TSA, the training was intended to reduce security breaches by increasing workers’ understanding of their security responsibilities and awareness of threats and abnormal behaviors.

Behavioral recognition training: TSA provided funding to participating airports to teach select law enforcement officers and airport personnel to identify potentially high-risk individuals based on their behavior. A condensed version of the SPOT course, this training was intended to equip personnel with skills to enhance their existing duties, according to TSA officials.

Targeted physical inspections: TSA conducted random inspections of vehicles and individuals entering the secured areas of airports to increase the coverage of ADASP. Inspections consisted of bag, vehicle, and identification checks; scanning of bottled liquids; and random security sweeps of specific airport areas.

Deployment of technology: TSA employed additional technology at selected airports to assist with the screening of employees, such as walk-through and handheld metal detectors, bottled liquid scanners, and explosive detection systems. TSA also tested biometric access control systems at selected airports.

According to TSA, VIPR operations augment existing airport security activities, such as ADASP, and provide a visual deterrent to terrorist or other criminal activity. VIPR was first implemented in 2005, and according to TSA officials, VIPR operations are deployed through a risk-based approach and in response to specific intelligence information or known threats. In a VIPR operation, TSA officials, including transportation security officers and inspectors, behavior detection officers, bomb appraisal officers, and federal air marshals, work with local law enforcement and airport officials to temporarily enhance aviation security.
According to TSA officials, VIPR operations for perimeter and access control security can include random inspections of individuals, property, and vehicles, as well as patrols of secured areas and random checks to ensure that employees have the proper credentials. TSA officials told us that although they do not know how many VIPR deployments have specifically addressed airport perimeter and access control security, from March 2008 through April 2009 TSA performed 1,042 commercial and general aviation airport or cargo VIPR operations. According to TSA officials, the majority of these operations involved the observation and patrolling of secured airport areas and airport perimeters. As of May 2009, TSA officials also said that the agency was in the process of enhancing its VIPR database to more accurately capture and track specific operational objectives, such as enhancing the security of airport perimeters and access controls, and was developing an estimated time frame for completing this effort. Since 2004, TSA has used SPOT—a passenger screening program in which behavior detection officers observe and analyze passenger behavior to identify potentially high-risk individuals—to determine if an individual or individuals may pose a risk to aircraft or airports. Although SPOT was originally designed for passenger screening, TSA officials stated that FSDs can also use behavior detection officers to assess workers’ behavior as the workers pass through the passenger checkpoint, as part of random worker screening operations, or as part of VIPR teams deployed at an airport. However, TSA officials could not determine how often behavior detection officers have participated in random worker screening or VIPR operations, or identify which airports have used behavior detection officers for random worker screening. According to TSA officials, the agency is in the process of redesigning its data collection efforts and anticipates that it will be able to more accurately track this information in the future, though officials did not provide a time frame for doing so. TSA officials also told us that when participating in random worker screening, behavior detection officers observe workers for suspicious behavior as they are being screened and may engage workers in casual conversation to assess potential threats. According to TSA officials, the agency has provided behavior detection training to law enforcement personnel as part of its worker screening pilot program, as well as to selected airport security and operations personnel at more than 20 airports. We currently have ongoing work assessing SPOT and will issue a report on this program at a later date. TSA undertakes efforts to facilitate the deployment of law enforcement personnel authorized to carry firearms at airport security checkpoints; in April 2002, the Law Enforcement Officer Reimbursement Program was established to provide partial reimbursement for enhanced, on-site law enforcement presence in support of the passenger screening checkpoints. Since 2004, the program has expanded to include law enforcement support along the perimeter and assistance with worker screening. According to TSA, the program is implemented through a cooperative agreement process that emphasizes the ability of both parties to identify and agree on how law enforcement officers will support the specific security requirements at an airport.
For example, the FSD, in consultation with the airport operator and local law enforcement, may determine that flexible stationing of law enforcement officers is more appropriate than fixed-post stationing. TSA may also provide training or briefings on an as-needed basis on relevant security topics, including improvised explosive device recognition, federal criminal statutes pertinent to aviation security, and procedures and processes for armed law enforcement officers. Awards made under the reimbursement program are subject to the availability of appropriated funds, among other things, and are to supplement, not supplant, state and local funding. According to TSA officials, however, no applicant has been denied funds based on a lack of appropriated funds. Program evaluation methods exist whereby TSA could attempt to assess whether its activities are meeting intended objectives. These methods center on reducing the risk of both external and internal threats to the security of airport perimeters and access controls, and they seek to use available information and resources to help capture pertinent information. First, recognizing that there are challenges associated with measuring the effectiveness of deterrence-related activities, the NIPP’s Risk Management Framework provides mechanisms for qualitative feedback that, although not considered a metric, could be applied to augment and improve the effectiveness and efficiency of protective programs and activities. For example, working with stakeholders—such as airport operators and other security partners—to identify and share lessons learned and best practices across airports could assist TSA in better tailoring its efforts and resources and continuously improving security. Identifying a range of qualitative program information—such as information gathered through vulnerability assessment activities or compliance inspections—could also allow TSA to determine whether activities are effective. As discussed in appendix III, compliance inspections and covert tests could be used to identify noncompliance with regulations or security breaches within designated secured areas. For example, TSA could use covert tests to determine if transportation security officers are following TSA procedures when screening airport workers or whether certain worker screening procedures detect prohibited items. However, to improve the usefulness of this technique, we previously recommended that TSA develop a systematic process for gathering and analyzing specific causes of all covert testing failures, record information on processes that may not be working properly during covert tests, and identify effective practices used at airports that perform well on covert tests. Second, as TSA has already begun to do with some activities, it could use data it already collects to identify trends and establish baseline data for a future comparison of effectiveness. For example, a cross-sectional analysis of the number of workers caught possessing prohibited items at specific worker screening locations over time, while controlling for variables such as increased law enforcement presence or airport size, could provide insights into what types of security activities help to reduce the possession of prohibited items.
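Analyses like the one just described are straightforward to prototype with standard statistical tooling. The sketch below is purely illustrative: the data values, column names, and model form are assumptions made for this example, not TSA data or an actual TSA analysis. It simply shows how a trend in prohibited-item finds could be estimated while holding law enforcement presence and airport size constant.

```python
# Illustrative sketch only: hypothetical worker screening data with assumed
# column names and a simple linear model; not an actual TSA analysis.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical monthly observations for two screening locations.
data = pd.DataFrame({
    "prohibited_items": [14, 11, 9, 12, 7, 6, 8, 5],  # items found per month
    "month": [1, 2, 3, 4, 1, 2, 3, 4],                # time index
    "leo_patrols": [2, 2, 4, 4, 6, 6, 8, 8],          # law enforcement presence
    "airport_size": [1, 1, 1, 1, 2, 2, 2, 2],         # 1 = small hub, 2 = large hub
})

# Regress items found on time while controlling for law enforcement presence
# and airport size, as the report's example suggests.
model = smf.ols("prohibited_items ~ month + leo_patrols + airport_size",
                data=data).fit()

# A negative coefficient on 'month' would be consistent with a downward trend
# in prohibited items after the control variables are taken into account.
print(model.params)
```

The practical value of such a model would depend on consistent data collection across screening locations over time, which is the kind of baseline information discussed above.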
Similarly, an examination of airport workers apprehended, fired, or referred to law enforcement while on the job could provide insights into the quality of worker background checks and security threat assessments. Essentially, these types of analyses provide a useful context for drawing conclusions about whether certain security practices are reasonable and appropriate given certain conditions and, gradually, with the accumulation of relevant data, should allow TSA to begin identifying cause-and-effect relationships. Third, according to the Office of Management and Budget (OMB), the use of proxy measures may also allow TSA to determine how well its activities are functioning. Proxy measures are indirect measures or indicators that approximate or represent a direct measure. TSA could use proxy measures to address deterrence, the other security goals identified above, or a combination of these. According to OMB, proxy measures are to be correlated with an improved security outcome, and the program should be able to demonstrate—for example, through the use of modeling—how the proxies tie to the eventual outcome. The Department of Transportation has also highlighted the need for proxy measures when assessing maritime security efforts pertaining to deterrence. For example, according to the Department of Transportation, while a direct measure of access to seaports might be the number of unauthorized intruders detected, proxy measures for seaport access may include related information on gates and guards—combined with crime statistics relating to unauthorized entry in the area of the port—to support a broader view of port security. In terms of aviation security, because failure to prevent a worker from placing a bomb on a plane could be catastrophic, proxy measures may include information on access controls, worker background checks, and confiscated items. Proxy measures could also include information on aircraft operators’ efforts to secure the aircraft. If a variety of proxy measures were used, failure on any one of the identified measures could provide an indication of the overall risk to security. Lastly, the use of likelihood, or “what-if,” scenarios, which describe a series of steps leading to an outcome, could allow TSA to assess whether potential activities and efforts would work together to achieve a positive outcome. For example, the development of such scenarios could help TSA to consider whether an activity’s procedures could be modified in response to identified or projected changes in terrorist behaviors, or whether an activity’s ability to reduce or combat a threat is greater when used in combination with other activities. In addition to the contact named above, Steve Morris, Assistant Director, and Barbara Guffy, Analyst-in-Charge, managed this assignment. Scott Behen, Valerie Colaiaco, Dorian Dunbar, Christopher Keisling, Matthew Lee, Sara Margraf, Spencer Tacktill, Fatema Wachob, and Sally Williamson made significant contributions to the work. Chuck Bausell, Jr., provided expertise on risk management and cost-benefit analysis. Virginia Chanley and Michele Fejfar assisted with design, methodology, and data analysis. Thomas Lombardi provided legal support; Elizabeth Curda and Anne Inserra provided expertise on performance measurement; and Pille Anvelt developed the report’s graphics.
Incidents of airport workers using access privileges to smuggle weapons through secured airport areas and onto planes have heightened concerns regarding commercial airport security. The Transportation Security Administration (TSA), along with airports, is responsible for security at TSA-regulated airports. To guide risk assessment and protection of critical infrastructure, including airports, the Department of Homeland Security (DHS) developed the National Infrastructure Protection Plan (NIPP). GAO was asked to examine the extent to which, for airport perimeters and access controls, TSA (1) assessed risk consistent with the NIPP; (2) implemented protective programs and evaluated its worker screening pilots; and (3) established a strategy to guide decision making. GAO examined TSA documents related to risk assessment activities, airport security programs, and worker screening pilots; visited nine airports of varying size; and interviewed TSA, airport, and association officials. Although TSA has implemented activities to assess risks to airport perimeters and access controls, such as a commercial aviation threat assessment, it has not conducted vulnerability assessments for 87 percent of the nation's approximately 450 commercial airports, nor has it conducted any consequence assessments. As a result, TSA has not completed a comprehensive risk assessment combining threat, vulnerability, and consequence assessments as required by the NIPP. While TSA officials said they intend to conduct a consequence assessment and additional vulnerability assessments, TSA could not provide further details, such as milestones for their completion. Conducting a comprehensive risk assessment and establishing milestones for its completion would provide additional assurance that intended actions will be implemented, provide critical information to enhance TSA's understanding of risks to airports, and help ensure resources are allocated to the highest security priorities. Since 2004, TSA has taken steps to strengthen airport security and implement new programs; however, while TSA conducted a pilot program to test worker screening methods, clear conclusions could not be drawn because of significant design limitations and because TSA did not document key aspects of the pilot. TSA has taken steps to enhance airport security by, among other things, expanding its requirements for conducting worker background checks and implementing a worker screening program. In fiscal year 2008, TSA pilot tested various methods to screen airport workers in order to compare the benefits, costs, and impacts of 100 percent worker screening and random worker screening. TSA designed and implemented the pilot in coordination with the Homeland Security Institute (HSI), a federally funded research and development center. However, because of significant limitations in the design and evaluation of the pilot, such as the limited number of participating airports—7 out of about 450—it is unclear which method is more cost-effective. TSA and HSI also did not document key aspects of the pilot's design, methodology, and evaluation, such as a data analysis plan, limiting the usefulness of these efforts. A well-developed and well-documented evaluation plan can help ensure that pilots generate the performance information needed to make effective decisions. While TSA has completed these pilots, developing an evaluation plan for future pilots could help ensure that they are designed and implemented to provide management and Congress with the information necessary for decision making.
TSA's efforts to enhance the security of the nation's airports have not been guided by a unifying national strategy that identifies key elements, such as goals, priorities, performance measures, and required resources. For example, while TSA's various airport security efforts are implemented by federal and local airport officials, TSA officials said that they have not identified or estimated costs to airport operators for implementing security requirements. GAO has found that national strategies that identify these key elements strengthen decision making and accountability; in addition, developing a strategy with these elements could help ensure that TSA prioritizes its activities and uses resources efficiently to achieve intended outcomes.
IRS is responsible for administering our nation’s voluntary tax system in a fair and equitable manner. To do so, IRS has roughly 100,000 employees, many of whom interact directly with taxpayers. In fiscal year 1994, IRS processed over 200 million tax returns, issued about 86 million tax refunds, handled about 39 million calls for tax assistance, conducted about 1.4 million tax audits, and issued about 19 million collection notices for delinquent taxes. These activities resulted in millions of telephone and personal contacts with taxpayers. Many of these interactions have the potential to make taxpayers feel as if they have been mistreated or abused by the IRS employees with whom they have dealt or by the “tax system” in general. IRS has several offices that are involved in handling taxpayers’ concerns about how they have been treated, including those alleging taxpayer abuse, that are not resolved through normal daily operations. IRS’ Inspection Service (Inspection), which includes the Internal Audit and Internal Security Divisions, is to investigate taxpayer allegations involving potential criminal misconduct by IRS employees. Problem Resolution Offices in IRS’ district offices and service centers are to help taxpayers who have been unable to resolve their problems with other IRS staff through normal IRS channels. IRS’ Office of Legislative Affairs is to track responses to congressional inquiries, often made on behalf of constituents, as well as direct correspondence with the Commissioner or other IRS executives involving the tax system or IRS’ administration of it. OIG and DOJ may also get involved with taxpayer abuse allegations. OIG may investigate allegations involving senior IRS officials, those who serve in General Schedule (GS) grade-15 positions or higher, as well as IRS Inspection employees. IRS employees accused of criminal misconduct may be prosecuted by a DOJ U.S. Attorney. IRS employees who are sued by taxpayers for actions taken within the employees’ official duties may be defended by attorneys with the DOJ Tax Division. In our 1994 report on IRS’ controls to protect against taxpayer abuse, we were unable to determine the overall adequacy of IRS’ controls and made several recommendations to improve them. Foremost among our recommendations was that IRS define taxpayer abuse and collect relevant management information to systematically track its nature and extent. At that time, in the absence of an IRS definition, we defined taxpayer abuse to include instances when (1) an IRS employee violated a law, regulation, or the IRS Rules of Conduct; (2) an IRS employee was unnecessarily aggressive in applying discretionary enforcement power; or (3) IRS’ information systems broke down, e.g., when taxpayers repeatedly received tax deficiency notices and payment demands despite continual contacts with IRS to resolve problems with their accounts. Other recommendations in our 1994 report addressed such concerns as unauthorized access to computerized taxpayer information, improper use and processing of taxpayer cash payments, and the need for IRS notification of potential employee liability for trust fund recovery penalties. IRS did not agree with the need to define taxpayer abuse—a term it found objectionable—nor with the need to track its nature and extent, but IRS agreed to take corrective action on many of our other recommendations. To determine the adequacy of IRS’ current controls over taxpayer abuse, we identified and documented actions taken by IRS in response to the recommendations in our 1994 report.
We also identified any additional actions that IRS has initiated since then, relative to how IRS treats taxpayers. Finally, we discussed with IRS officials a recent commitment they made to define and establish a taxpayer complaints tracking system and the current status of this effort. To determine the extent of information available concerning the number and outcomes of abuse allegations received and investigated by IRS, OIG, and DOJ, we interviewed officials from the respective organizations and reviewed documentation relative to their information systems. We were told that the information systems maintained by these organizations do not include specific data elements for alleged taxpayer abuse. However, these officials said they believed that examples of alleged taxpayer abuse may be found within other general data categories in five IRS systems, two DOJ systems, and an OIG system. For example, IRS officials indicated that alleged taxpayer abuse might be found in a system used to track disciplinary actions against employees. This information is captured under the general data categories of “taxpayer charge or complaint” and “misuse of position or authority.” Similar examples were provided by officials from each organization as described in appendix II. We discussed the general objectives and uses of the relevant information systems with officials from the respective agencies. We also reviewed examples of the data produced by these systems under the suggested general data categories to ascertain if it was possible from these examples to determine whether taxpayer abuse may have occurred. We did not attempt to verify the accuracy of the data we received, because to do so would require an extensive, time-consuming review of related case files. This was beyond the scope and time available for this study. To determine OIG’s role in investigating allegations of taxpayer abuse, we obtained and reviewed Treasury orders and directives establishing and delineating the responsibilities of OIG, as well as a 1994 Memorandum of Understanding between OIG and IRS outlining specific procedures to be followed by each staff for reporting and investigating allegations of misconduct and fraud, waste, and abuse. We also obtained statistics from OIG staff concerning the number of allegations they received and investigations they conducted involving IRS employees for fiscal year 1995—the latest year for which data were available. In addition, we discussed OIG’s role and the relationship between OIG and IRS staffs with senior officials from both OIG and IRS. We requested comments on a draft of this report from the Commissioner of Internal Revenue, the Treasury Inspector General, and the Attorney General. On August 9, 1996, we received written comments from IRS, which are summarized on page 15 and are reprinted in appendix III. We also received written comments, which were technical in nature, from both the Treasury’s OIG and DOJ. These comments have been incorporated in the report where appropriate. We performed our audit work in Washington, D.C., between April and July 1996 in accordance with generally accepted government auditing standards. While IRS has made improvements in its controls over the treatment of taxpayers since our 1994 report, we are still unable to reach a conclusion at this time on the overall adequacy of IRS’ controls. 
We cannot determine the adequacy of these controls because IRS officials have not yet established a capability to capture management information, which is needed to ensure that abuse is identified and addressed and to prevent its recurrence. We are, however, encouraged by a recent commitment on the part of IRS’ Deputy Commissioner to establish a tracking system for taxpayer complaints. Such a system has the potential to greatly improve IRS’ controls to protect against taxpayer abuse and better ensure that taxpayers are treated properly. In exploring how IRS could satisfy a mandate included in the recently enacted Taxpayer Bill of Rights 2 to report annually to Congress on employee misconduct and taxpayer complaints, IRS recognized and acknowledged that such a mandate could not be satisfied with its existing information systems and that a definition for “taxpayer complaints” would be necessary, along with sufficient related management information to ensure that complaints are identified, addressed, and analyzed to prevent their recurrence. Although IRS said it still believes the term “taxpayer abuse” is misleading, inaccurate, and inflammatory, IRS decided to use the basic elements of the definition in our 1994 report as a starting point for developing a definition for taxpayer complaints. The basic elements from our report covered instances when (1) an IRS employee violated a law, regulation, or the IRS Rules of Conduct; (2) an IRS employee was unnecessarily aggressive in applying discretionary enforcement power; or (3) IRS’ information systems broke down, e.g., when taxpayers repeatedly received tax deficiency notices and payment demands despite continual contacts with IRS to resolve problems with their accounts. With input from members of IRS’ Executive Committee, an IRS task group decided upon the following definition for taxpayer complaints: an allegation by a taxpayer or taxpayer representative that (1) an IRS employee violated a law, regulation, or the IRS Rules of Conduct; (2) an IRS employee used inappropriate behavior in the treatment of taxpayers while conducting official business, such as rudeness, overzealousness, excessive aggressiveness, discriminatory treatment, intimidation, and the like; or (3) an IRS system failed to function properly or within prescribed time frames. This definition was endorsed by the IRS Deputy Commissioner in a June 17, 1996, memorandum. IRS has decided to use the Problem Resolution Office Management Information System (PROMIS), with modifications, as a platform for compiling information about taxpayer complaints involving inappropriate employee behavior and systemic breakdowns. However, numerous decisions remain concerning how to track and assess the handling of all taxpayer complaints. For example, IRS already has two systems that are designed to capture data relevant to alleged employee misconduct, and PROMIS is currently designed to capture data relevant to possible systemic breakdowns. The two systems capturing misconduct information, however, do not capture data in a manner that is comparable to one another or to PROMIS. IRS officials readily concede that at present, there is no IRS information system designed to capture data relevant to complaints of inappropriate employee behavior.
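To make the data challenge concrete, the sketch below shows one way a uniform complaint record could capture all three elements of the complaints definition under common identifiers. This is a minimal illustration with assumed field names, not IRS' design for its planned tracking system.

```python
# A minimal, hypothetical sketch of a uniform taxpayer complaint record.
# Field names and categories are assumptions for illustration; this is not
# IRS' design for the planned complaints tracking system.
from dataclasses import dataclass
from datetime import date
from enum import Enum
from typing import Optional

class ComplaintCategory(Enum):
    RULES_VIOLATION = 1         # employee violated a law, regulation, or the Rules of Conduct
    INAPPROPRIATE_BEHAVIOR = 2  # rudeness, overzealousness, intimidation, and the like
    SYSTEM_FAILURE = 3          # an IRS system failed to function properly or on time

@dataclass
class TaxpayerComplaint:
    complaint_id: str                  # unique allegation identifier shared across offices
    category: ComplaintCategory
    received: date
    office: str                        # e.g., a district office or service center
    employee_id: Optional[str] = None  # common employee identifier; None for system failures
    resolution: Optional[str] = None   # outcome, once the complaint is addressed

# With unique complaint and employee identifiers and common data elements,
# records entered by different offices could be consolidated and counted
# consistently.
example = TaxpayerComplaint("C-1996-0001", ComplaintCategory.SYSTEM_FAILURE,
                            date(1996, 6, 17), "Problem Resolution Office")
print(example.category.name)
```

Whatever the eventual design, the essential point is the one IRS officials concede above: without a single record structure and shared identifiers, complaints recorded in separate systems cannot be compiled in a comparable and uniform manner.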
They realize that to capture and compile information relevant to all three elements of the taxpayer complaints definition in a comparable and uniform manner will be a considerable challenge, especially for the highly subjective element involving inappropriate employee behavior. However, the officials assured us that they are now committed to rising to that challenge. While we are encouraged by IRS’ commitment, we recognize the formidable challenge IRS faces in capturing complete, consistent, and accurate information under the IRS definition for taxpayer complaints. Rising to the challenge, however, is critical if IRS is to have adequate controls to protect against taxpayer abuse and to satisfy its new requirement to report annually to Congress on employee misconduct and taxpayer complaints. Since our 1994 study, IRS has initiated various actions to implement our recommendations, as described in appendix I. For example, among other actions, IRS has initiated the following:

Regarding unauthorized employee access to computerized taxpayer accounts, IRS (1) issued a 12-point Information Security Policy to all employees in January 1995, stressing the importance of taxpayer privacy and the security of tax data, and (2) has begun development of an Information System Target Security Architecture to include management, operational, and technical controls for incorporation in the Tax System Modernization Program—a long-term effort to modernize IRS’ computer and telecommunications systems.

Regarding the improper use and processing of taxpayer cash payments, IRS (1) included statements in its 1995 forms and instructions encouraging taxpayers to make payments with either a check or money order rather than cash and (2) is instructing its managers to conduct periodic unannounced reconciliations of cash receipts used by the IRS staff who collect taxes from taxpayers.

Regarding the need for IRS to notify employers of the potential liability of their officers and employees for a trust fund recovery penalty when businesses fail to collect or pay withheld income, employment, or excise taxes, IRS has included notices of this liability in both Publication 334, “Tax Guide for Small Businesses,” and Circular E, “Employer’s Tax Guide.”

In addition to these actions, IRS has recently undertaken other initiatives in anticipation of some provisions included in the recently enacted Taxpayer Bill of Rights 2. In January 1996, IRS announced a series of initiatives designed to reduce taxpayer burden and make it easier for taxpayers to understand and exercise their rights. These initiatives included (1) enhanced powers for the Taxpayer Ombudsman, such as explicit authority to issue a refund to a taxpayer to relieve a severe financial hardship; (2) notification of a spouse regarding any collection action taken against a divorced or separated spouse for a joint tax liability; (3) increased computerized record storage and electronic filing options for businesses; (4) expedited appeals procedures for employment tax issues; and (5) a test of an appeals mediation procedure. IRS has also started to use information on taxpayer problems captured in PROMIS. IRS recently used this system to identify the volume of taxpayer problems categorized by various major issues, such as refund inquiries, collection actions, penalties, and the earned income tax credit.
The Ombudsman has requested that IRS’ top executives review the major issues identified for their respective offices or regions in an effort to devise cost-effective ways to reduce these problems. While we did not test the implementation of these various initiatives, they appear to be conceptually sound, and thus we believe that, if effectively implemented, they should help to strengthen IRS’ overall controls and procedures to identify, address, and prevent the recurrence of taxpayer abuse. It is not possible to readily determine the extent to which allegations of taxpayer abuse are received and investigated from the information systems maintained by IRS, OIG, and DOJ. These systems were designed as case tracking and resource management systems intended to serve the management information needs of particular functions, such as IRS’ Internal Security Division. None of these systems include specific data elements for “taxpayer abuse”; however, they contain data elements that encompass broad categories of misconduct, taxpayer problems, or legal actions. Without reviewing specific case files, information contained in these systems related to allegations and investigations of taxpayer abuse is not easily distinguishable from information on allegations and investigations that do not involve taxpayers. Consequently, as currently designed, these systems cannot be used individually or collectively to account for IRS’ handling of all instances of alleged taxpayer abuse. Officials of the respective organizations indicated that several information systems might include information related to taxpayer abuse allegations—five maintained by IRS, two by DOJ, and one by OIG—as described in appendix II. For example:

Two of the IRS systems—the Internal Security Management Information System (ISMIS) and the Automated Labor and Employee Relations Tracking System (ALERTS)—capture information on cases involving employee misconduct, which may in some cases involve taxpayer abuse. ISMIS is used to determine the status and outcome of Internal Security investigations of alleged employee misconduct; ALERTS is used to track disciplinary actions taken against employees. While ISMIS and ALERTS both track aspects of alleged employee misconduct, these systems do not share common data elements or otherwise capture information in a consistent manner.

IRS also has three systems that include information on concerns raised by taxpayers: two maintained by the Office of Legislative Affairs—the Congressional Correspondence Tracking System and the Commissioner’s Mail Tracking System—as well as PROMIS, which we described earlier. The two Legislative Affairs systems basically track taxpayers’ inquiries, including those made through congressional offices, to ensure that responses are provided by appropriate IRS officials. PROMIS tracks similar information to ensure that taxpayers’ problems are resolved and to determine whether the problems are recurring in nature.

OIG has an information system known as the OIG Office of Investigations Management Information System (OIG/OIMIS) that is used to track the status and outcomes of OIG investigations as well as the status and outcomes of actions taken by IRS in response to OIG investigations and referrals. As discussed further in the next section of this report, most OIG investigations do not involve allegations of taxpayer abuse because the IRS employees that OIG typically investigates—primarily senior-level officials—usually do not interact directly with taxpayers.
DOJ has two information systems that include data that may be related to taxpayer abuse allegations and investigations. The Executive Office for the U.S. Attorneys maintains a Centralized Caseload System that is used to consolidate the status and results of civil and criminal prosecutions conducted by offices of the U.S. Attorney throughout the country. Cases involving criminal misconduct by IRS employees would be referred to, and may be prosecuted by, the U.S. Attorney in the particular jurisdiction in which the alleged misconduct occurred. The Tax Division also maintains a Case Management System that is used for case tracking, time reporting, and statistical analysis of litigation cases conducted by the Tax Division. Lawsuits against either IRS or IRS employees are litigated by the Tax Division, with representation provided to IRS employees if the Tax Division determines that the actions taken by the employees were within the scope of employment. The officials familiar with these systems stated that, while the systems include data elements under which potential taxpayer abuse may be captured, they do not include a specific data element for taxpayer abuse that could be used to easily distinguish abuse allegations from others not involving taxpayers. For example, officials from the Executive Office for the U.S. Attorneys stated that the public corruption and tort categories of their Centralized Caseload System may include instances of taxpayer abuse, but the system could not be used to identify such instances without a review of individual case files. From our review of data from these systems, we concluded that none of them, either individually or collectively, have common or comparable data elements that can be used to identify the number or outcomes of taxpayer abuse allegations or related investigations and actions. Rather, each system was developed to provide information for a particular organizational function, usually for case tracking, inventory, or other managerial purposes relative to the mission of that particular function. While each system has data elements that could reflect how taxpayers have been treated, as described in appendix II, the data elements vary and may relate to the same allegation and the same IRS employee. Without common or comparable data elements and unique allegation and employee identifiers, these systems do not collect information in a consistent manner that could be used to accurately account for all allegations of taxpayer abuse. OIG is responsible for investigating allegations of misconduct and waste, fraud, and abuse involving senior IRS officials, GS-15s and above, as well as IRS Inspection employees. OIG also has oversight responsibility for the overall operations of Inspection. Since November 1994, OIG has had increased flexibility for referring allegations involving GS-15s to IRS for investigation or administrative action. This change was due to resource constraints and an increased emphasis by OIG on investigations involving criminal misconduct and procurement fraud across all Treasury bureaus. In fiscal year 1995, OIG conducted 44 investigations—14 percent of the 321 allegations it received—for the most part implicating senior IRS officials. OIG officials stated that these investigations rarely involved allegations of taxpayer abuse because senior IRS officials and Inspection employees usually do not interact directly with taxpayers.
OIG and Inspection have a unique relationship relative to that between OIG and the audit and investigative authorities of other Treasury bureaus. The IRS Chief Inspector, who reports directly to the IRS Commissioner, is responsible for IRS internal audits and investigations as well as for coordinating Inspection activities with OIG. Inspection is to work closely with OIG in planning and performing its duties and is to provide information on its activities and results to OIG for incorporation into OIG’s semiannual report to Congress. Disputes the IRS Chief Inspector may have with the Commissioner can be resolved through OIG and the Secretary of the Treasury, to whom OIG reports. The Department of the Treasury established the Office of the Inspector General (IG) consistent with the authority provided in the “Inspector General Act of 1978,” although Treasury already had internal audit and investigation capabilities for the Department as well as its bureaus. The existing capabilities included Inspection, which was responsible for all audits and investigations of IRS operations. Among OIG’s express authorities were the investigation of allegations implicating senior IRS officials and the oversight of Inspection’s audit and investigative activities. OIG resources to discharge these responsibilities were augmented in fiscal year 1990 by the transfer of 21 staff years from IRS’ appropriations to that of OIG. The IG Act was amended in 1988, with special provisions included to, among other things, ensure the privacy of tax-related information. These provisions did not limit OIG’s authority but required an explicit accounting of OIG’s access to tax-related information in performing audits or investigations of IRS operations. The OIG’s authorities were also articulated in Treasury Order 114-01, signed by the Secretary of the Treasury in May 1989. Specifically related to OIG investigative authorities, in September 1992, the Treasury IG issued Treasury Directive 40-01 summarizing the authority vested in OIG and the reporting responsibilities of various Treasury bureaus. Among the responsibilities of law enforcement bureaus, including IRS, are (1) providing a monthly report to OIG concerning significant internal investigative and audit activities, (2) notifying OIG immediately upon receiving allegations involving senior officials or internal affairs or inspection employees, and (3) submitting written responses to OIG detailing actions taken or planned in response to OIG investigative reports and OIG referrals for agency management action. Under procedures established in a Memorandum of Understanding between OIG and IRS in November 1994, the requirement for immediate referral to OIG of all such misconduct allegations was reiterated and supplemented. OIG has the discretion to refer any allegation to IRS for appropriate action, i.e., either investigation by Inspection or administrative action by IRS management. If IRS officials believe that an allegation referred by OIG warrants OIG attention, they may refer the case back to OIG, requesting that OIG conduct an investigation. OIG officials advised us that under the original 1992 directive, they generally handled most allegations implicating Senior Executive Service (SES) and Inspection employees, while reserving the right of first refusal on GS-15 employees.
Under the procedures adopted in 1994, which were driven in part by resource constraints and OIG’s need to conduct more criminal misconduct and procurement fraud investigations across all Treasury bureaus, OIG officials stated that they have generally referred allegations involving GS-15s and below to IRS for investigation or management action. The same is true for allegations against any employees, including those in the SES, involving administrative matters, and for allegations dealing primarily with tax disputes. OIG officials said that, after a preliminary review of the merits of an allegation, OIG determines whether to investigate it, refer it to IRS to either investigate or take administrative action, or take no action at all. Table 1 summarizes the number and disposition of allegations involving IRS that OIG received in fiscal year 1995. In fiscal year 1995, OIG received 321 allegations, many of which involved senior IRS officials. After a preliminary review, OIG decided no action was warranted on 71 of the allegations, referred 201 to IRS—either for investigation or administrative action—investigated 44, and closed 5 others for various administrative reasons. OIG officials stated that, based on their investigative experience, most allegations of wrongdoing by IRS staff that involve taxpayers do not involve senior-level IRS officials or Inspection employees. Rather, these allegations typically involve the IRS Examination and Collection employees who most often interact directly with taxpayers. OIG officials are to assess the adequacy of IRS’ actions in response to OIG investigations and referrals as follows: (1) IRS is required to make written responses on actions taken within 90 days and 120 days, respectively, on OIG investigative reports of completed investigations and OIG referrals for investigations or management action; (2) OIG investigators are to assess the adequacy of IRS’ responses before closing the OIG case; and (3) the OIG Office of Oversight is to assess the overall effectiveness of IRS Inspection capabilities and systems through periodic operational reviews. In addition to assessing IRS’ responses to OIG investigations and referrals, each quarter the IG, Deputy IG, and Assistant IG for Investigations meet to brief the IRS Commissioner, Deputy Commissioner, and Chief Inspector on the status of allegations involving senior IRS officials, including those being investigated by OIG and those awaiting IRS action. While officials from both agencies agree that the arrangement is working well to ensure that allegations involving senior IRS officials and Inspection employees are being handled properly, OIG officials expressed some concern about the amount of time IRS typically takes to respond with actions on OIG investigations and referrals. IRS officials acknowledged that responses are not always within OIG time frames because, among other reasons, determinations about taking disciplinary actions and imposing such actions may take a considerable amount of time. Also, they said some cases must be returned to OIG for additional development, which may prolong the time for completion. The IRS officials, however, also noted that actions on OIG referrals are closely monitored, as evidenced by their inclusion in discussions during quarterly IG briefings with the Commissioner. While we did not independently test the effectiveness of this OIG/IRS arrangement, we found no evidence to suggest these allegations are not being properly handled.
IRS has taken specific steps in relation to certain recommendations made in our 1994 report and initiated other actions to strengthen its controls over taxpayer abuse by its employees. Even so, at this time, we remain unable to determine the adequacy of IRS’ system of controls to identify, address, and prevent instances of abuse. However, we are encouraged by IRS’ recent decision to develop a taxpayer complaint tracking system that essentially adopts the definition of taxpayer abuse included in our 1994 report as a starting point for defining the elements of taxpayer complaints. We believe this is a critically important commitment that IRS must sustain. If the system is effectively designed and implemented, IRS should have an enhanced ability to identify, address, and protect against the mistreatment of taxpayers by IRS employees or by the tax system in general. While we are encouraged by IRS’ commitment, we also recognize the formidable challenge IRS faces in developing an effective complaints tracking system. IRS needs a more effective complaints tracking system because, while IRS, OIG, and DOJ information systems contain data about the treatment of taxpayers, the data relevant to employee misconduct or taxpayer complaints are not readily distinguishable from data on allegations that do not involve taxpayers. The systems do not have the same employee identifiers or common data elements. Nor are the data captured in a consistent manner that allows for consolidation relative to the number or outcome of taxpayer complaints using the definition IRS is adopting. Given IRS’ recent commitment, the related efforts it has under way to design and implement a taxpayer complaints tracking system, and the recently enacted Taxpayer Bill of Rights 2, we are making no new recommendations at this time. The IRS Chief, Management and Administration, commented on a draft of this report by letter dated August 9, 1996 (see app. III), in which he reiterated IRS’ commitment to preserving and enhancing taxpayers’ rights. The Treasury’s OIG and DOJ also provided technical comments, which we incorporated in this report where appropriate. As agreed with your staff, unless you announce the contents of this report earlier, we plan no further distribution of this report until 15 days from the date of this letter. At that time, we will send copies of this report to the Ranking Minority Member, Senate Committee on Finance; the Chairman and the Ranking Minority Member, Senate Committee on Governmental Affairs; and the Chairman and the Ranking Minority Member, House Committee on Ways and Means. We will also send copies to other interested congressional committees, the Commissioner of Internal Revenue, the Treasury Inspector General, the Attorney General, and other interested parties. We will also make copies available to others upon request. The major contributors to this report are listed in appendix IV. If you have any questions concerning this report, please contact me at (202) 512-9044.

Recommendation: Establish a servicewide definition of taxpayer abuse or mistreatment and identify and gather the management information needed to systematically track its nature and extent.
Action: IRS has recently established a definition for “taxpayer complaints” and is now committed to establishing a complaints tracking process.
Recommendation: Ensure that Tax Systems Modernization provides the capability to minimize unauthorized employee access to taxpayer information in the computer system that eventually replaces the Integrated Data Retrieval System.
Action: Issued a 12-point Information Security Policy to all IRS staff; published “High-Level Security Requirements;” and started development of an Information System Target Security Architecture.
Recommendation: Revise the guidelines for information gathering projects to require that specific criteria be established for selecting taxpayers’ returns to be examined during each project and to require that there is a separation of duties between staff who identify returns with potential for tax changes and staff who select the returns to be examined.
Action: Issued an updated memorandum to field staff regarding the highly sensitive nature of information gathering projects.
Recommendation: Reconcile all outstanding cash receipts more often than once a year and stress in forms, notices, and publications that taxpayers should use checks or money orders whenever possible to pay their tax bills, rather than cash.
Action: IRS is instructing its managers to conduct random unannounced reconciliations of cash receipts used by IRS staff who receive cash payments from taxpayers; revised Publication 594, “Understanding the Collection Process,” Publication 17, “Your Federal Income Tax,” and the 1995 1040 tax package to encourage taxpayers to pay with checks or money orders, rather than cash.
Recommendation: Better inform taxpayers about their responsibility and potential liability for the trust fund recovery penalty by providing taxpayers with special information packets.
Action: Revised Publication 334, “Tax Guide for Small Business,” and Circular E, “Employer’s Tax Guide,” to explain the potential liability for the trust fund recovery penalty if amounts withheld are not remitted to the government; and started including Notice 784, “Could You Be Personally Liable for Certain Unpaid Federal Taxes?” with the first balance due notice for business taxes.
Recommendation: Provide specific guidance for IRS employees on how they should handle White House contacts other than those involving tax checks of potential appointees or routine administrative matters.
Action: No actions taken or planned. Because we did not find instances of improper contacts, IRS is of the opinion that current procedures covering third-party contacts are adequate.
Recommendation: Seek ways to alleviate taxpayers’ frustration in the short term by analyzing the most prevalent kinds of information-handling problems and ensuring that requirements now being developed for Tax Systems Modernization information systems provide for long-term solutions to those problems.
Action: Requested top executives to review major issues the Ombudsman identified via the Problem Resolution Program that have resulted in repeat taxpayer problems.

IRS - Internal Security Management Information System (ISMIS): Internal Security management use this system to track the status of investigations and for operational and workload management.
IRS - Automated Labor and Employee Relations Tracking System (ALERTS): Labor Relations staff use this system to track the status and results of possible disciplinary action relative to IRS employee behavior.
IRS - Problem Resolution Office Management Information System (PROMIS): Problem Resolution Office staff use this system to monitor the status of open taxpayer problems and to generate statistics on the volume of problems received by major categories.
IRS - Commissioner’s Mail Tracking System: Legislative Affairs staff use this system to track correspondence to the Commissioner and other IRS office heads/executives.
IRS - Congressional Correspondence Tracking System: Legislative Affairs staff use this system to track correspondence from congressional sources and from referrals by the Treasury Department and the White House.
OIG - Office of Investigations Management Information System (OIG/OIMIS): OIG management and desk officers use the system to monitor the status of OIG investigations and to monitor whether required responses to OIG investigations and referrals to the Treasury bureaus, such as IRS, have been received.
DOJ EOUSA - Centralized Caseload System: EOUSA management use the system to monitor the status and results of civil and criminal prosecutions and to oversee field office caseloads.
DOJ Tax Division - Case Management System: Tax Division management uses the system to monitor the status and results of civil and criminal cases, manage attorney caseloads, and prepare internal and external reports, such as for the Office of Management and Budget and the Congress.

Rachel DeMarcus, Assistant General Counsel; Shirley A. Jones, Attorney Advisor.
Pursuant to a congressional request, GAO examined the: (1) adequacy of the Internal Revenue Service's (IRS) controls to protect against abuse of taxpayers; (2) extent of information available concerning abuse allegations received and investigated by IRS, the Department of the Treasury Office of the Inspector General (OIG), and the Department of Justice (DOJ); and (3) OIG role in investigating abuse allegations. GAO found that: (1) the adequacy of IRS controls against taxpayer abuse is uncertain because IRS does not have the capability to capture management information on taxpayer abuse; (2) IRS is establishing a tracking system to handle taxpayer complaints and reviewing its management information systems to determine the best way to capture relevant information for the complaint system; (3) the tracking system will enable IRS to better identify instances of taxpayer abuse and ensure that actions are taken to prevent their recurrence; (4) IRS is improving controls over its employees' access to computerized taxpayer accounts, establishing an expedited appeals process for some collection actions, and classifying recurring taxpayer problems by major issues; (5) it is not possible to determine the extent to which allegations of taxpayer abuse are received and investigated, since IRS, OIG, and DOJ information systems do not include specific data elements on taxpayer abuse; (6) OIG has increased the number of investigations involving senior IRS employees' alleged misconduct, fraud, and abuse; (7) OIG refers most of these allegations to IRS for investigation and administrative action; and (8) IRS is taking a considerable amount of time to respond to OIG investigations and referrals regarding senior IRS officials' disciplinary actions.
RTC’s sales centers planned and carried out land sales initiatives. Asset marketing specialists at the National Sales Center and in regional sales centers developed disposition plans, identified the assets to be offered for sale in the initiatives, and obtained approval from RTC management to carry out the sales initiatives. Initiatives offering assets with a combined book value in excess of $250 million required RTC headquarters approval. Field offices were permitted to approve their own sales initiatives when the book value of the assets being offered totaled $250 million or less. In December 1993, RTC issued its business plan. In developing the plan, RTC used a standard methodology to comparatively evaluate the net recoveries from similar asset types sold through different disposition methods. Comparable expense data, though not all expense data, were gathered for relevant transactions, and, according to RTC, standard methodologies were used to evaluate each of these disposition methods: equity partnerships, large sealed bid/portfolio sales, and auctions. However, at the time these evaluations were done, the most significant land sales initiatives using alternative disposition methods, such as the Multiple Investor Fund and equity partnership structures, had not yet closed. Therefore, land was not included in the business plan analysis as a separate asset type. In 1993, RTC decided to test the equity partnership structure for land (Land Fund I). It then became more important for RTC to assess the relative recoveries of distinct disposition methods. Also in December 1993, the RTC Completion Act of 1993 established various requirements for the disposition of real property, including land and nonperforming loans secured by real estate. The act required that, before such assets are offered in a bulk transaction, RTC determine in writing that a bulk transfer would maximize the net recovery to RTC while providing an opportunity for broad participation by qualified bidders, including minority- and women-owned businesses. The required written justifications are to be included in the case submitted to RTC management to obtain approval for each land sales initiative. We reviewed RTC’s land disposition activities to determine how RTC was dealing with its land assets inventory. Our objectives were to determine whether RTC had (1) developed and implemented a strategy for disposing of its land assets and (2) assessed the results of its land sales initiatives to identify the most cost-effective disposition methods and best practices. To accomplish our first objective, we reviewed the November 1991 Land Task Force strategy paper and RTC’s directive implementing the land disposition strategy. We interviewed the head of the task force to discuss the (1) basis for RTC’s strategy, (2) results of the land inventory evaluations done by the task force, and (3) land sales initiatives RTC planned to implement in 1993. We also interviewed RTC headquarters officials in Washington, D.C., and contacted field office officials in Atlanta; Dallas; Denver; Kansas City; Newport Beach, CA; and Valley Forge, PA. We obtained information on the implementation of RTC’s land disposition policy and related policies and procedures, inventories of land and loans secured by land, land sales initiatives and their results, and land sales initiatives in the planning stage. To accomplish our second objective, we reviewed 6 of the 13 land sales initiatives RTC’s National Sales Center planned to implement in 1993.
These initiatives were judgmentally selected to represent a cross-section of the types of land sales strategies used by RTC, the ways RTC pooled assets for land sales initiatives, and the size of the initiatives in terms of the number of assets offered for sale. The selected initiatives included five different sales strategies and five different ways to pool the assets. The size of the initiatives ranged from 35 to 410 assets. (App. I lists the 1993 National Sales Center initiatives and identifies those we reviewed.) We also reviewed an auction—the Pride of Texas—planned by the Dallas field office. We selected this initiative because it provided an example of a field office initiative involving land assets located in a local area with national advertising. For each of the seven land sales initiatives we selected for review, we interviewed the RTC asset marketing specialists in Washington, D.C., and one in Dallas who planned and executed the selected initiatives. These individuals provided documents relating to each initiative, including case approvals and listings of assets reserved for the initiatives. In these interviews, we also discussed the availability and sources of sales expense data for the initiatives we reviewed and obtained copies of all expense data that these asset marketing specialists had in their files. We focused on direct costs associated with the initiatives and not on other costs incurred by RTC, such as indirect overhead and asset management and disposition fees, because RTC would have incurred these costs even if the bulk sales had not been implemented. The costs we attempted to determine are listed in appendix II. We also attempted to obtain cost data that RTC could not provide from the contractors it had hired to carry out the initiatives we reviewed. We contacted 11 RTC contractors providing financial advisory, due diligence, and auctioneer services for the selected land sales initiatives to obtain information about the services they provided and the fees they billed to RTC for these services. We also contacted RTC’s Office of Inspector General to discuss work done on contractor billings for services provided on two of the initiatives we reviewed. Finally, we interviewed RTC headquarters officials from the National Sales Center, Office of Contract Operations, Management Information Division, and Department of Corporate Finance, as well as field office officials in Dallas, to determine whether the results of individual land sale initiatives were evaluated. We also reviewed reports on the results of 1992, 1993, and 1994 program compliance reviews to determine whether reviewing officials were assessing compliance with the land sales initiative policy directive. On February 6, 1995, we met with RTC’s Vice President for Asset Marketing, RTC officials representing the National Sales Center, the Office of Contracts, and the Chief Financial Officer to discuss a draft of this report. Their comments were considered and have been incorporated into the report where appropriate. On March 3, 1995, RTC provided written comments on a draft of this report, which are evaluated in the agency comments section and elsewhere in the report where appropriate. RTC’s written comments are reprinted in appendix IV. We did our work between January 1993 and December 1994 in accordance with generally accepted government auditing standards. Until the summer of 1991, RTC did not place a high priority on the disposition of land assets.
Instead, priority was given to other asset categories that could be disposed of quickly, such as securities and residential mortgages—of which RTC had a large inventory—and commercial and residential real estate that had greater holding costs. The experience RTC gained through the disposition of other types of hard-to-sell assets, such as nonperforming commercial real estate loans, paved the way for the structuring of land sales. Recognizing the challenge posed by land assets, RTC formed a land task force in the summer of 1991 to analyze its land inventory and develop a strategy for disposing of these assets. The task force estimated that, continuing at RTC’s then average annual rate of land sales, it would take RTC over 16 years to dispose of its remaining land assets. Initially, land was offered on a sealed bid or auction basis, and later in various forms of equity partnerships. RTC had not yet tested the market for equity partnerships when the Land Task Force issued its strategy paper. In its November 1991 strategy paper, the task force recommended that RTC use specific types of sales methods to dispose of land assets and select assets for initiatives that were similar in size, type, and location to respond to investor preferences. The specific sales methods recommended by the task force included (1) auctions for land assets with book values under $1 million, (2) local promotional campaigns for land assets with book values ranging from $1 million to $5 million, (3) sealed bid offerings for land assets with book values over $5 million, and (4) solicited proposals from qualified investors for portfolios of large land assets with an aggregate book value in excess of $100 million. In May 1992, RTC issued its land sales directive, Circular 10300.23, entitled Land Sales Strategies and Programs. This directive incorporated the task force’s recommendations into RTC’s guidelines for establishing and implementing land sales strategies. In implementing the task force’s recommendation to solicit proposals from investors, the directive specified two possible initiatives: (1) multiple investor funds for pools of land assets ranging from $1 billion to $2 billion in total book value and (2) competitive solicitations of qualified individual investors for large portfolios of land assets with an aggregate book value of less than $1 billion. The directive required RTC field offices to identify available land assets and develop plans for their disposition. These plans were to include (1) an analysis of available land assets, (2) a list of the land sale initiatives planned or in process and their sales goals, and (3) a separate marketing plan for each individual land asset with a book value of $5 million or more. The directive emphasized the importance of carefully evaluating land assets before including them in a specific sales initiative to ensure that the proper sales method is selected. In choosing a sales method for an initiative, the offices were to select the one that was most appropriate for the types of land assets to be offered for sale. The directive also required that the sales method selected satisfy RTC’s mandate to achieve the highest net recovery on the sale of assets while avoiding disruptions in local real estate markets.
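The task force's recommendations amount to a simple decision rule mapping asset size to sales method. The sketch below encodes that rule as we read it; the thresholds come from the strategy paper and the 1992 directive, but the function, its name, and the fallback for smaller portfolios are our illustration, not an RTC system.

# Illustrative decision rule for selecting a sales method by book value.
# Thresholds are from the November 1991 strategy paper and the May 1992
# directive (Circular 10300.23); the structure and fallback are hypothetical.

def recommended_sales_method(book_value, aggregate_portfolio=False):
    if aggregate_portfolio:
        if book_value >= 1_000_000_000:
            return "multiple investor fund"  # $1 billion to $2 billion pools
        if book_value > 100_000_000:
            return "competitive solicitation of qualified investors"
        return "sealed bid offering"  # assumed fallback for smaller portfolios
    if book_value < 1_000_000:
        return "auction"
    if book_value <= 5_000_000:
        return "local promotional campaign"
    return "sealed bid offering"

print(recommended_sales_method(750_000))     # auction
print(recommended_sales_method(25_000_000))  # sealed bid offering
print(recommended_sales_method(250_000_000, aggregate_portfolio=True))
# competitive solicitation of qualified investors

Framed this way, the directive treated asset size as the primary driver of method selection, with investor preferences addressed through how assets were pooled.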
Finally, the directive required the land task force to (1) review field office initiative plans for consistency and compliance with recommended land policies and sales methods and (2) evaluate the results of land sales initiatives at the completion of each initiative and identify which sales methods are most effective. Using the various sales methods set forth in the land sales strategy directive, RTC disposed of about $16 billion (book value) in land and loans secured by land during 1993 and the first half of 1994. RTC figures showed that it had about $4.6 billion (book value) in land and nonperforming loans secured by land remaining in its asset inventory as of April 30, 1994. By the end of February 1995, RTC indicated that it had reduced its inventory of these types of assets to about $850 million. RTC has until December 31, 1995, to complete any land sales initiatives it undertakes. The RTC Completion Act of 1993 set this date for RTC to cease its operations. Any assets remaining in RTC’s inventory at that time will be transferred to the Federal Deposit Insurance Corporation (FDIC) for disposition. As part of the planning process for transitioning to FDIC, RTC is to identify its best practices, which should be considered for use by FDIC. RTC believes that the recovery analysis it is doing on the various disposition methods it uses will help it accomplish this task. In addition, RTC believes this analysis should help FDIC as it considers alternate disposition methods for its own inventory of land and other assets and for similar assets it inherits from RTC in December 1995. RTC policy required the results of land sales initiatives to be evaluated. However, RTC did not (1) establish a standard methodology for making the required evaluations, (2) perform the evaluations, or (3) take adequate steps to ensure that these evaluations were done. Also, RTC did not develop a formal procedure to capture the expense data needed to calculate the net recoveries on the sale of land assets. As a result, RTC could not assess the relative cost effectiveness of the various sales methods it used. Relative cost effectiveness was a key component to be used in the required evaluations since they were meant to identify the most effective methods. RTC also did not have the data needed to analyze expense variations and thus could not use this information to better manage future land sales initiatives. In its May 1992 directive on land sales strategies, RTC underscored the importance of sales initiative evaluations in satisfying its mandate to maximize net recoveries on the sale of assets under its control. It required that the results of land sales initiatives be evaluated at the completion of each initiative to identify the most effective sales methods. Nevertheless, a standard methodology to evaluate initiative results was not developed by either the land task force, which was to do the required evaluations, or other units within RTC. A standard methodology is necessary to ensure that RTC collects and considers similar data for each initiative to consistently assess the results of the initiatives to identify the most cost-effective sales techniques and best practices. Furthermore, no evaluations were done by RTC staff to comply with the land sales directive requirement. RTC normally uses its program compliance reviews to evaluate the various RTC offices’ compliance with RTC policies and procedural requirements in executing the Corporation’s various business functions. 
One of the purposes of the program compliance review process is to identify procedural deficiencies that hamper or prevent the implementation of policy requirements. However, our review of the program compliance reports showed that these reviews, which have been done at least annually since RTC’s inception, were not used to determine whether the land sales initiative evaluation requirement was being implemented throughout RTC. Expense data are needed, by definition, to compute net recoveries from the land sales initiatives and evaluate the results of these initiatives compared to other disposition methods. While RTC management acknowledged the importance of evaluating sales initiative results, they did not establish adequate policies and procedures to ensure that all essential actual expense data needed to make the net recovery calculations were collected. As a result, RTC did not compute the net recoveries for the land sales initiatives, identify the most cost-effective initiatives and best practices, refine its land disposition strategies, or analyze expense variations to better manage future land sales initiatives. Because RTC procedures did not require them to do so, asset marketing specialists generally did not monitor total sales initiative expenses or use the land sales initiative budgets to control costs and identify costly practices. Also, RTC’s systems could not generate expense reports on the individual land sales initiatives. Except for auctions, RTC’s Financial Management System (FMS) lacked the codes needed to sort expense data by sales initiative. RTC expanded the list of FMS codes in 1994; however, codes were not set up for all sales initiatives planned for 1994 and 1995. The lack of (1) FMS codes and (2) a formal compilation of actual sales initiative expenses prevented RTC and us from obtaining data by sales initiative to evaluate and compare the results among sales initiatives. Two of the five asset marketing specialists who managed the National Sales Center initiatives we reviewed provided partial expense data obtained from a portfolio sales adviser and other contractors hired to help carry out the initiatives. However, the three other specialists were not able to provide similar data. The National Sales Center maintained a system with some expense data, including contracted financial sales advisory and due diligence services and some marketing expenses. However, this system did not capture data for other expenses, such as legal services and advertising. Because RTC did not collect complete data for all the expenses incurred to implement individual land sales initiatives, we attempted to obtain the missing data from other sources for the land sales initiatives we reviewed. We identified, primarily from contractor records, almost $49 million in expenses incurred on the seven land sales initiatives we reviewed. However, we were unable to locate complete data for all of the expenses for each of the initiatives. Mainly, we located data on contracting fees incurred to carry out the initiatives as well as certain other sales initiative expenses incurred for legal services, advertising, and the facilities used to conduct the sales. We included all amounts invoiced by RTC’s contractors that we located. Some of the due diligence fees, totaling millions of dollars, for several initiatives were being disputed by RTC at the time of our review, and we were not certain how these disputes would be resolved.
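The calculation these expense data were meant to support is straightforward to state. As a textbook illustration, not a formula drawn from RTC guidance, the net recovery and recovery rate for an initiative are:

\[
\text{Net recovery} = \text{Gross sales proceeds} - \text{Direct sales expenses},
\qquad
\text{Recovery rate} = \frac{\text{Net recovery}}{\text{Book value of assets sold}}.
\]

Without complete figures for direct expenses, such as due diligence, legal services, advertising, and sale facilities, the first term cannot be computed, and recovery rates cannot be compared across initiatives or disposition methods.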
For the East Coast Land Sale, we were unable to break out invoiced expenses for due diligence services between that initiative and other initiatives commingled in the billing. For three of the seven initiatives, the data we located lacked the detail needed, because of commingled expenses, to identify amounts for several expense categories, such as marketing brochures, asset information packages, the due diligence library, and travel. We identified expense data for most of the expense categories for five of the seven land sale initiatives we reviewed. For these five initiatives, there were large variations in the amounts spent within the various expense categories as well as in the total expenses RTC incurred. Some of the variation within the expense categories and among sales initiatives can be attributed to differences in the numbers, locations, and quality of assets included in the initiatives. However, because RTC did not do comparative analyses, explicit reasons for most of the variations were not determined. In September 1993, we reported on RTC’s lack of adequate evaluation of sales program results and its failure to collect essential cost data needed to measure program effectiveness. We said that if RTC had accurate information on asset characteristics, revenues, expenses, holding periods, gross and net proceeds, and sales methods by asset type, it could more effectively manage its disposition program and evaluate the results of its various sales methods. We concluded that data limitations impaired RTC’s analysis of the sales methods it used and recommended that RTC improve its methods for collecting and summarizing asset sales and financial data to maximize recoveries on its hard-to-sell assets. We also reported, in December 1993, that there were substantial variations in the fees paid for similar loan servicing. We concluded that without information on all the costs under its loan servicing contracts, RTC could not effectively monitor the fees charged by contractors or establish cost-effective fee structures. We recommended that RTC routinely collect the information needed to monitor loan servicing fees and expenses and use this information to develop cost-effective compensation structures in future contracts. RTC has implemented the recommendations we made in that report. It is monitoring its loan servicing fees and expenses and using this information in awarding new contracts. On June 28, 1994, we briefed RTC management on the results of our work on land sales initiatives. In response to this briefing and our September 1993 data limitations report recommendation that RTC improve its methods for collecting and summarizing asset sales and financial data, RTC took actions to address the concerns we raised. RTC acknowledged that although information regarding the amount of gross sales proceeds from past multiasset sales transactions was readily available, the amount of corresponding sales expenses can only be determined after substantial research. It also acknowledged that documentation of estimated and actual sales expenses for each multiasset sale would be useful in determining the effectiveness of different sales methods and for monitoring sales expense data. On August 15, 1994, RTC issued a directive, Circular 10300.39, entitled Multi-Asset Sales Transactions Budgets, to establish procedures for tracking multiasset sales expenses.
This directive applies to all multiasset sales initiatives, regardless of type, developed by RTC or any of its contractors for the disposition of loans, real estate, or other assets. The procedures in the new directive, which became effective for all relevant sales cases approved after July 31, 1994, require (1) a sales budget to be prepared for each multiasset sales initiative that must be submitted with the case memorandum requesting authority to proceed with the initiative and (2) actual sales expenses to be compiled and entered onto a copy of the original budget no later than 90 days after the sale closing (transfer of title). To ensure consistency, a standard multiasset sales transaction budget format (see app. III) was developed that must be used to record budget and expense information. The directive assigns responsibility for ensuring that the sales budget is completed and updated to the individual responsible for managing the initiative. This individual is to coordinate with legal, contracting, and other parties as needed to obtain estimated and final sales-related expense data. RTC has developed a strategy for disposing of its remaining land assets, and during 1993 it implemented a variety of land sales initiatives to dispose of these assets. Although RTC required each land sales initiative to be evaluated, it did not develop a standard methodology for these evaluations, nor were the required evaluations done. Consequently, RTC could not assess the relative cost effectiveness of the various land sales methods it used. Furthermore, RTC did not assess the implementation of the evaluation requirement through its program compliance reviews to ensure that policies and procedural requirements were being executed properly and consistently. Had RTC used these reviews, it likely would have recognized that there were procedural deficiencies that were preventing the implementation of the land sales initiative evaluation requirement. Until August 1994, RTC did not have adequate policies and procedures to collect the essential expense data needed to compute the net recoveries from individual land sales initiatives. As a result, RTC could not identify the most cost-effective initiatives, refine its land disposition strategies based on results, or analyze expense variations to better manage future land sales initiatives. We believe it is important that RTC evaluate its land sales initiatives because such evaluations would provide valuable best practices information that would be of interest to FDIC as it decides which, if any, RTC asset disposition strategies to adopt as RTC’s operations transition into FDIC. In August 1994, RTC issued procedures requiring sales budgets to be prepared and data to be collected on actual sales expenses. These procedures, if properly implemented, should provide the data RTC needs to evaluate the results of its multiasset sales initiatives, including those focused specifically on land and nonperforming loans secured by land. The original sales budget should enable RTC to better determine the appropriate delegated authority approval level for the sales initiatives. The actual sales expense data, along with other relevant information, should enable RTC to evaluate the effectiveness of different sales initiatives and monitor sales-related expenses to identify the most effective marketing and sales techniques.
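To illustrate the mechanics the directive requires, the sketch below tracks budgeted against actual expenses for a single multiasset sales initiative and computes the resulting net recovery. The expense categories and dollar amounts are hypothetical, and the code is our sketch of the concept, not RTC's standard budget format.

# Illustrative budget-versus-actual tracking for one multiasset sales
# initiative, in the spirit of Circular 10300.39. All category names and
# amounts are hypothetical.

BUDGET = {  # estimated expenses filed with the case memorandum
    "financial advisory": 1_200_000,
    "due diligence": 2_500_000,
    "legal services": 400_000,
    "advertising": 300_000,
    "sale facilities": 150_000,
}

def close_out(budget, actuals, gross_proceeds):
    """Compare actual expenses (to be compiled within 90 days of the sale
    closing) to the original budget and compute net recovery."""
    report = {}
    for category, planned in budget.items():
        actual = actuals.get(category, 0)
        report[category] = {"budget": planned, "actual": actual,
                            "variance": actual - planned}
    report["net recovery"] = gross_proceeds - sum(actuals.values())
    return report

actuals = {"financial advisory": 1_150_000, "due diligence": 3_100_000,
           "legal services": 375_000, "advertising": 410_000,
           "sale facilities": 140_000}
result = close_out(BUDGET, actuals, gross_proceeds=95_000_000)
print(result["net recovery"])                # 89825000
print(result["due diligence"]["variance"])   # 600000 over budget

Capturing actuals in the same categories as the original budget is what makes the variance comparison possible, which in turn supports the comparative recovery analyses the directive anticipates.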
However, RTC still needs to develop a standard evaluation methodology to consistently assess the results of land sales initiatives at the completion of each initiative to identify the most cost-effective sales techniques and best practices. We recommend that RTC’s Deputy and Acting Chief Executive Officer direct the Vice President of Asset Sales and Management to develop an appropriate standard methodology for evaluating the results of land sales initiatives and to ensure that the required evaluations are done at the completion of each land sales initiative so as to identify the best sales methods and most effective marketing techniques and promote their use in future land sales initiatives. On February 6, 1995, we met with RTC’s Vice President for Asset Marketing, RTC officials representing the National Sales Center, the Office of Contracts, and the Chief Financial Officer to discuss a draft of this report. In summary, they said that they generally concurred with the findings and conclusions as presented in the report. They offered various suggestions to clarify the discussion of their use of sales initiative budgets and their inability to compile all the actual expense data needed to do the required evaluations of individual land sales initiatives. Their comments were considered and have been incorporated into the report where appropriate. On March 3, 1995, RTC provided written comments (see app. IV) on the draft of this report. In this response, RTC agreed with our recommendations, described the actions being taken to implement them, and offered some general comments on RTC’s land disposition methods. RTC said that it has implemented (1) a standard methodology, which is being updated, for evaluating the results of all major sales initiatives and (2) a system in which the results of sales are being captured for a quarterly formal comparative recovery rate analysis report. If the sales data are not submitted after the sale, RTC said, a follow-up request is sent to the staff conducting the sale. In addition, RTC said that it will evaluate enhancing its internal control review process to test for compliance with the evaluation requirement. We believe that, if effectively implemented, the actions taken and planned by RTC should address the issues discussed in this report. In its general comments, RTC said that while the actual results of equity partnership structures will not be known with accuracy for years, its estimates made after transaction closings suggest that recoveries from equity partnerships generally will exceed those from other disposition methods. We are not in a position to comment on whether, in the long term, using equity partnerships will maximize the recoveries from asset sales. Because RTC was created as a mixed-ownership government corporation, it is not required by 31 U.S.C. 720 to submit a written statement on actions taken on these recommendations to the Senate Committee on Governmental Affairs, the House Committee on Government Operations, and the House and Senate Committees on Appropriations. However, we would appreciate receiving such a statement within 60 days of the date of this letter to assist in our follow-up actions and to enable us to keep the appropriate congressional committees informed of RTC activities. We are sending copies of this report to interested congressional members and committees and the Chairmen of the Thrift Depositor Protection Oversight Board and the Federal Deposit Insurance Corporation. We will also make copies available to others upon request.
Major contributors to this report, including Richard Y. Horiuchi and Peggy A. Stott, are listed in appendix V. If you or your staff have any questions concerning this report, please call me on (202) 736-0479.
GAO reviewed the Resolution Trust Corporation's (RTC) land disposition activities, focusing on whether RTC: (1) developed and implemented a strategy to dispose of its land assets; and (2) assessed the results of its land sales initiatives to identify the most cost-effective disposition methods and best practices. GAO found that RTC: (1) adopted a land disposition strategy in May 1992 and formed a land task force to analyze its land inventory; (2) disposed of about $16 billion in land and loans between January 1993 and June 1994, although it had about $850 million in these assets remaining unsold as of February 1995; (3) was unable to identify the most effective land disposition methods because it failed to develop a formal procedure to collect all the actual expense data related to each land sales initiative or establish a methodology for evaluating the results of each initiative; and (4) issued a directive requiring that the expenses for each multiasset sales initiative be documented, which should allow it to identify the most effective marketing and sales techniques and best practices.
NHTSA’s mission is to prevent motor vehicle crashes and reduce injuries, fatalities, and economic losses associated with these crashes. To carry out this mission, NHTSA conducts a range of safety-related activities, including setting vehicle safety standards; investigating possible safety defects and taking steps to help ensure that products meet safety standards and are not defective (through recalls if necessary); providing guidance and other assistance to states to help address traffic safety issues, such as drunk driving and distracted driving; and collecting and analyzing data on crashes. In fiscal year 2014, NHTSA’s enacted budget was $819 million. NHTSA collects and analyzes crash data for a variety of purposes, such as to determine the extent of a safety problem and what steps NHTSA should take to develop countermeasures. NHTSA collects data both through detailed, in-depth investigations and through systems designed to generate national statistics and nationally representative data, as shown in table 1. As mentioned previously, this report focuses on NASS-CDS. NHTSA collects NASS-CDS data through in-depth investigations of a sample of police-reported motor vehicle crashes that occur in the United States. The data collected through NASS-CDS are detailed and descriptive and allow NHTSA and others to assess the crashworthiness of different types of vehicles, evaluate different vehicle safety systems and designs, and understand the nature of injuries that people sustain during crashes. NHTSA uses the data collected as part of NASS-CDS for statistical analyses in its rulemaking and to estimate the size of the population that might be affected by its rulemaking. NHTSA also uses NASS-CDS data for other purposes, such as to identify existing and potential traffic-safety problems. For example, NHTSA has used NASS-CDS data to investigate patterns of roof intrusion into a vehicle resulting from real-world rollover crashes. According to NHTSA, NASS-CDS data showed that the damage and intrusion that occurred during real-world crashes was greater than the damage and intrusion that occurred during crash tests, pointing to the need to revisit NHTSA’s standards for the strength requirements for a vehicle’s roof. Others, including other federal agencies, universities, research institutions, and the automobile and insurance industries, use NASS-CDS data to understand the nature and consequences of real-world crashes. For example, the National Transportation Safety Board, an independent federal agency, uses NASS-CDS data for research purposes as well as for conducting its accident investigations, whereas automobile manufacturers may use NASS-CDS data to study crash patterns and how those patterns have changed over time in order to prioritize their own research on vehicle designs and safety features. NHTSA collects NASS-CDS data and information using stratified sampling—a statistical method of sampling in which a population is divided into two or more parts (called strata) and a sample is selected from each part (or stratum). NASS-CDS, specifically, is a stratified, three-stage probability sample, as illustrated in figure 1. The first stage of the NASS-CDS sample was the selection of PSUs—the geographic locations where NHTSA collects data. NHTSA defined the PSUs so that their minimum population was approximately 50,000, and each PSU consisted of a central city, a county, a group of counties, or a portion of a large county excluding a central city.
The PSUs were grouped into 12 strata based on geographic region (i.e., Northeast, South, Midwest, and West) and urbanization type (i.e., large central cities, large suburban areas, and all others). The PSUs to be sampled were allocated to each stratum roughly in proportion to the number of crashes in each stratum, and at least two PSUs were then selected from each stratum. As of 2014, a total of 24 PSUs comprised the NASS-CDS sample, as shown in figure 2. The second stage was the selection of police jurisdictions within the sampled PSUs. About 170 police jurisdictions across the United States are part of the NASS-CDS sample, and the number of jurisdictions per PSU varies (e.g., the Seattle, Washington PSU has two police jurisdictions in its sample, whereas the King County, Washington PSU has seven). Each police jurisdiction was assigned a “measure of size” that reflects the number, severity, and type of crashes in each jurisdiction. A sample of police jurisdictions was then selected from each sampled PSU, and those jurisdictions having a larger measure of size were oversampled. The third and final stage is the ongoing selection of the actual police accident reports that are filled out by a police officer at the scene of a motor vehicle crash. Each week, the sampled police jurisdictions are contacted, and all police accident reports that have accumulated since the previous week are reviewed and classified into a stratum based on the types of vehicles involved, the most severe police-reported injury, the disposition of the injured, tow status, and the model year of the vehicles. To be eligible for inclusion in NASS-CDS, a motor vehicle crash must (1) be police-reported, (2) involve a harmful event resulting from the crash (such as property damage or personal injury), and (3) involve at least one passenger car, light truck, van, or sport utility vehicle, with a gross vehicle weight rating of less than 10,000 pounds, in transport on a traffic-way that was towed from the scene due to damage. Crashes are selected so that a larger percentage of higher severity crashes are selected than lower severity crashes, but every motor vehicle crash that occurs within one of the PSUs where NASS-CDS data are collected and that meets these conditions has a chance of being selected for investigation. NHTSA selected the first stage of the current NASS-CDS sample in 1988 and the second stage in 1995 but selects the police accident reports weekly so that the evidence from the motor vehicle crashes that might be investigated is still intact and the memory of the individuals involved is still fresh. NHTSA contracts with two companies that use small teams of crash technicians located across the country to collect NASS-CDS data. These teams typically include a team leader, one or two crash technicians, and an assistant, and each team reports to one of two contractor-led control centers, called zone centers. NHTSA’s crash technicians collect over 600 data elements during their investigations, including information on the damage vehicles sustained, the crash forces involved, injuries to victims, and factors that caused those injuries. Those investigations generally involve inspecting the scene of a crash and the vehicles involved; interviewing the drivers and occupants involved, if possible; reviewing official medical reports detailing any injuries sustained; and reconstructing what happened during the crash, as shown in figure 3.
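In compressed form, the three-stage selection just described can be sketched as follows. Every name, weight, and rate in this sketch is a placeholder, and the selection functions are simplified for illustration; NHTSA's actual stratification and selection procedures are considerably more detailed.

# Simplified sketch of NASS-CDS's three-stage stratified selection.
# All strata, measures of size, and rates below are placeholders.
import random

random.seed(1)

# Stage 1: PSUs grouped into strata by region and urbanization; the sample
# is allocated roughly in proportion to each stratum's crash counts, with
# at least two PSUs per stratum.
psu_strata = {
    ("South", "large central city"): ["PSU-A", "PSU-B", "PSU-C"],
    ("Midwest", "suburban"): ["PSU-D", "PSU-E"],
}
sampled_psus = {s: random.sample(psus, 2) for s, psus in psu_strata.items()}

# Stage 2: within each sampled PSU, jurisdictions are selected with
# probability proportional to a "measure of size" (a mix of crash counts,
# severity, and type). random.choices samples with replacement; a real
# design would sample without replacement and oversample large jurisdictions.
measures = {"J1": 900, "J2": 300, "J3": 150}
total = sum(measures.values())
sampled_jurisdictions = random.choices(
    list(measures), weights=[m / total for m in measures.values()], k=2)

# Stage 3: each week, new police accident reports are classified into
# strata, and higher-severity strata are sampled at higher rates.
def select_reports(weekly_reports, rates):
    """weekly_reports: list of (report_id, stratum); rates: stratum -> rate."""
    return [rid for rid, stratum in weekly_reports
            if random.random() < rates[stratum]]

rates = {"severe tow-away": 0.60, "minor tow-away": 0.10}
week = [("PAR-001", "severe tow-away"), ("PAR-002", "minor tow-away")]
print(sampled_psus, sampled_jurisdictions, select_reports(week, rates))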
The crash technicians coordinate with law enforcement agencies, hospitals, tow yard operators, repair garages, and the drivers and occupants involved in the crashes while performing their work, and the information they collect is subject to review by NHTSA. That information, in turn, is then made available to NHTSA and the public for research purposes. The number of NASS-CDS investigations NHTSA conducts each year varies, as shown in figure 4. Between 1988 and 2013, NHTSA conducted an average of about 4,700 NASS-CDS investigations each year. However, since 2009, the number of NASS-CDS investigations conducted has steadily decreased, and in 2013, only about 3,400 NASS-CDS investigations were conducted. According to NHTSA, factors that have contributed to this decline include the budget for NASS-CDS and rising costs. For example, funding for NASS-CDS has remained flat since 2010, whereas costs—including costs for labor, information technology, leases, and fuel—have risen. NHTSA’s effort to redesign NASS-CDS is part of NHTSA’s larger Data Modernization Project, begun in 2012, which also affects NASS-GES and FARS. Specific to NASS-CDS, NHTSA’s Data Modernization Project involves the following: redesigning the NASS-CDS sample by reviewing the data elements that comprise the sample and the statistical methodology behind selecting the sample; upgrading the equipment and information technology that support NASS-CDS to reduce redundancy, improve data quality, and enhance the experience of NASS users; and implementing a new sample to replace NASS-CDS. We found that the process NHTSA followed to redesign NASS-CDS is consistent with applicable government-wide standards and guidelines issued by OMB that apply to the development of survey concepts, methods, and design. OMB’s standards and guidelines specify the professional principles and practices that federal agencies are required to adhere to and the level of quality and effort expected when initiating a new survey or redesigning an existing survey. In the case of redesigning NASS-CDS, the OMB standards and guidelines that apply include recommended practices for the development of survey concepts, methods, and design. As such, they highlight the importance of consulting with potential users to identify their requirements and expectations, including design elements in a sample to meet stated objectives, and testing a survey’s components prior to full-scale implementation. OMB’s standards and guidelines are not intended to substitute for the extensive existing literature on statistical and survey theory, methods, and operations. Further, these standards and guidelines specify that agencies should engage knowledgeable and experienced survey practitioners to effectively achieve the goals of OMB’s standards. While the process NHTSA has followed is consistent with applicable OMB standards and guidelines, NHTSA has not yet started implementing its new sample design. Accordingly, we were not able to assess its implementation efforts. To redesign NASS-CDS, NHTSA awarded a contract to Westat in May 2012 to assist the agency in redesigning the NASS-CDS sample. Westat provides services relating to survey planning, design, development, administration, and analysis, and Westat researchers are known to be experts in the field of survey sampling. Westat’s tasks included reviewing the data elements NASS-CDS collects as well as the statistical methodology behind NASS-CDS.
NHTSA, in conjunction with Westat, solicited comments from NASS users through the Federal Register and held a public listening session with NASS users—steps that are consistent with OMB’s recommended practice to consult with potential users to identify their requirements and expectations. Through the Federal Register, in June 2012, NHTSA solicited and subsequently received comments from 25 individuals and organizations regarding the redesign and their data needs. NHTSA also held a public listening session with NASS users in July 2013. During this listening session, NHTSA officials provided users with an update on the agency’s progress in redesigning NASS as well as an opportunity to provide additional comments, and eight NASS users provided comments at that listening session. According to NHTSA officials, the comments received indicated that users generally wanted NHTSA to increase the NASS-CDS sample size, collect additional data during NASS-CDS investigations, and improve the quality of the data collected. This includes collecting additional data from event data recorders and on the use of crash-avoidance technologies, as well as more detailed diagrams of the scenes of crashes. Consistent with OMB’s recommended practice to include design elements to meet stated objectives, NHTSA tasked Westat with (1) identifying data elements that are responsive to the current and future needs of both NHTSA and the public and (2) developing recommendations for a new sample design that met users’ data needs in an effective and efficient manner while still maintaining national representativeness. As part of its review, Westat reviewed the comments NASS users submitted and also assembled a team of experts in crash investigation, transportation safety research, and injury control to review NASS’s data elements, identify areas of research that should be better addressed in the future, and make recommendations. Both NHTSA and Westat considered the feasibility of suggestions users made to fundamentally change how NASS-CDS data are collected. For example, some users commented that police officers who fill out accident reports at crash scenes could do more to assist NHTSA’s data collection efforts, such as by photographing the crash scene and the vehicles involved at the time of the crash. However, according to NHTSA officials, such suggestions were deemed not to be practical because they would require resources from NHTSA to provide equipment and training to law enforcement officials to implement. NHTSA officials also noted that police officers on the scene might not be willing to cooperate with additional data collection duties when they are responding to a crash, and police jurisdictions have varying technological capacities to handle the storage and dissemination of photos or other additional data. Westat officials also told us they analyzed the NASS-CDS sample design to identify its limitations. This examination included reviewing the sample size, stratification and sampling allocation, and weighting procedures. Westat and NHTSA considered various alternative design options for the new sample design, and NHTSA chose a probability-based approach to meet its objective of maintaining national representativeness. To assess whether Westat used appropriate statistical survey design principles and methodology to ensure that its objectives would be met, NHTSA had early drafts of Westat’s work reviewed by three independent consultants.
We have previously reported that such reviews can improve the technical quality of a project and enhance the credibility of the decision-making process, and as a result of these independent reviews, NHTSA officials said they felt confident moving forward with Westat’s proposals for the new sample design. Thus, in May 2014, NHTSA announced that it planned to replace NASS-CDS with a new system called the Crash Investigation Sampling System (CISS)—which we discuss in detail in objective 2. Consistent with OMB’s recommended practice to test a survey’s components prior to full-scale implementation, NHTSA plans to implement the new CISS PSUs in phases. As of January 2015, NHTSA’s plans for the new sample call for initially implementing 24 new PSUs as a first phase and up to 73 PSUs in the future, if its budget allows. Prior to implementing all 24 new PSUs that comprise phase 1, NHTSA plans to first implement 5 of the PSUs, which will allow NHTSA, among other things, to test the sample design and the new equipment, such as electronic distance-measuring equipment, that will support its data collection before implementing the remaining PSUs. Figure 5 shows the location of the new CISS PSUs. To pay for the Data Modernization Project, the Congress provided NHTSA with $25 million in 2011 and another $3.5 million through its fiscal year 2014 appropriation. Of this available funding, NHTSA officials said they allocated $2,500,000 (9 percent) to redesign the NASS-CDS and NASS-GES samples; $16,500,000 (58 percent) for information-technology infrastructure upgrades and new equipment, such as electronic distance-measuring equipment; and $9,500,000 (33 percent) to implement the new samples. Because NHTSA has not yet started implementing the new samples or obligated all of the funding for new equipment, about $12 million of the $28.5 million provided was still available as of the time of our review. Table 2 shows the funding Congress provided and NHTSA’s reported obligations, as of December 1, 2014. However, as of the time of our review, NHTSA officials told us their time frames to begin implementing the new CISS PSUs were uncertain due to a government-wide cap on travel spending currently in place. Specifically, in 2012 OMB issued a memorandum, entitled Promoting Efficient Spending to Support Agency Operations, which directed agencies to spend at least 30 percent less on travel expenses than in fiscal year 2010 and to maintain that level of spending through fiscal year 2016. According to NHTSA, this cap on travel spending could delay its plans because implementing the new CISS PSUs requires that NHTSA staff travel to train the new crash technicians as well as to gain the cooperation of police jurisdictions, tow yards, and others. If relief is not provided, NHTSA stated that it would try to mitigate some of this limitation by, for example, training the new crash technicians at a local facility. NHTSA officials said they are currently working to obtain relief from this cap and hope to start implementing the new CISS PSUs beginning in 2015. According to NHTSA officials, failure to obtain relief from this cap could result in delays or additional costs in implementing the new PSUs. While we found NHTSA’s approach to redesigning NASS-CDS has been reasonable, we note that NHTSA was not timely in responding to Congress’ direction to provide information on the size of the NASS-CDS sample.
Specifically, MAP-21 required NHTSA to conduct a comprehensive review of the data elements collected as part of NASS and report on whether there was a benefit to increasing the size of the NASS sample. For example, the act required NHTSA to provide Congress with information on the types of analyses that can be conducted and the conclusions that can be drawn under the current sample size and an expanded sample size, the number of investigations that NHTSA should conduct as part of the sample that would allow for optimal data analysis, NHTSA’s recommendations for improvements, and the resources necessary to implement NHTSA’s recommendations. The act also required that NHTSA obtain input from interested parties, including automobile manufacturers, safety advocates, the medical community, and research organizations. The act required NHTSA to report to Congress on the results of its review, including the benefits of a larger sample size, no later than October 1, 2013. NHTSA missed this deadline and issued its report in January 2015, as we were completing our review. In its report, NHTSA stated that meeting the needs of all NASS users is a challenge and that there is no precise answer to what the optimal sample size for NASS-CDS would be. However, NHTSA also noted that increasing the size of the NASS-CDS sample would help meet the evolving needs of NASS-CDS users. We agree with NHTSA that there is no precise answer to what the optimal sample size for NASS-CDS is, and we discuss this in more detail in objective 2. One means of determining the extent to which the Data Modernization Project redesign will improve the NASS-CDS sample is to assess the potential for CISS to meet a main technical objective of the Data Modernization Project: achieving similar or greater levels of statistical precision for seven important crash and injury estimates. Four of these measures are for crash types (rear-end crashes, head-on crashes, angle crashes, and rollovers), and three are for injury severity (incapacitating, nonincapacitating, and fatal injuries). The statistical precision of an estimate provides a measure of how close the estimate is expected to be to the population value it is attempting to describe. Improving the statistical precision allows for more accurate estimates and, in turn, informs the language NHTSA uses to make projections from the sample that apply to the whole population. Comparing the precision of the estimates NASS-CDS generates to the expected precision of the new CISS estimates is a method of determining whether the sample design has improved. The precision of a sample’s estimates can be increased by selecting a larger sample, using a more efficient sample design, or both. When a more efficient sample design is used, it is possible to generate estimates with similar or greater levels of precision with a smaller sample size. While NASS users indicated that they wanted to see an increase in the size of the sample as part of the redesign, NHTSA officials stated that expanding the sample size would push the cost of an extensive data collection effort like NASS-CDS beyond expected budgetary resources, a point we discuss in more detail later in this report. Decisions made in the process of designing a sample must balance available resources and the ability of the sample to meet the stated objectives within the defined precision requirements.
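In standard survey-sampling terms, a textbook formulation rather than one taken from NHTSA's design documents, the precision of an estimated proportion from a complex sample of \( n \) investigations can be written as

\[
SE(\hat{p}) \approx \sqrt{\, d_{\mathrm{eff}} \cdot \frac{\hat{p}(1-\hat{p})}{n} \,},
\qquad
d_{\mathrm{eff}} \approx 1 + (\bar{m} - 1)\,\rho,
\]

where \( d_{\mathrm{eff}} \) is the design effect, \( \bar{m} \) is the average number of investigations per PSU, and \( \rho \) is the degree to which crashes within a PSU resemble one another. A more efficient design lowers \( d_{\mathrm{eff}} \), so the same standard error can be achieved with fewer investigations; equivalently, the effective sample size is \( n / d_{\mathrm{eff}} \). The same relationship underlies a point made later in this report: spreading investigations across more PSUs (lowering \( \bar{m} \)) generally buys more precision than adding investigations within existing PSUs.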
NHTSA expects the new CISS design to achieve similar or greater levels of precision for NHTSA’s 7 key estimates by using a more efficient sample design, not by substantially increasing the sample size from the historical average for NASS-CDS. Westat developed several proposed sample designs and made design recommendations to NHTSA. NHTSA then modified Westat’s recommended design to produce similar or more precise results for the 7 key estimates using a sample of 24 PSUs, which would result in between 4,000 and 4,500 investigations annually. By way of comparison, in recent years NASS-CDS has produced a sample size of about 3,500 investigations annually; between 1988 and 2013, however, it produced an average of about 4,700 investigations annually. According to NHTSA, even though the expected sample size for CISS is comparable to the historical average for the NASS-CDS sample, the end result is that the new design that NHTSA is pursuing for CISS should be as precise as, if not more precise than, the current NASS-CDS design for the key estimates NHTSA identified. Table 3 below summarizes the differences between NASS-CDS and CISS. There are many ways to design a sample to generate more statistically precise estimates. One way NHTSA improved the expected statistical precision of estimates from CISS was by selecting new PSUs that better represent the current population and the number and types of crashes nationwide. According to statistical literature, in a statistical sample such as NASS-CDS that uses the same PSUs for a number of years, PSUs should be reselected periodically in order to ensure that the sample reflects the total population the sample is attempting to describe. However, the current PSUs for NASS-CDS were selected in 1988. Since NHTSA selected the current CDS PSU sample based on population and crash counts from more than 30 years ago, the CDS PSU sample has gradually become less representative of the population and crashes in the United States, and as a result, CDS estimates have become less statistically precise. By reselecting new PSUs, CISS data are expected to better represent the population and the areas in which the highest number of crashes with serious injuries occur, according to NHTSA. Consequently, the selection of PSUs is expected to allow for more precise estimation of crashes involving serious injuries. The improvement in the representativeness of the selected PSUs contributes to the improvement in statistical precision without increasing the sample size, making the new sample design more statistically efficient. Moreover, NHTSA expects the new sample design for CISS to contain more crashes with serious injuries and crashes involving newer vehicles than NASS-CDS currently contains, which also should make estimates of these crashes more statistically precise. For example, NHTSA designed CISS so that 10 percent of the police accident reports selected for CISS investigations will contain a newer vehicle and an incapacitating injury, up from 6.9 percent in NASS-CDS. The higher sampling rate for newer vehicles and serious injury crashes is expected to increase the number of these crashes selected. This step will improve the precision of estimates and address some users’ need to have more of these types of crashes in the sample. NASS users who provided NHTSA with comments about the NASS redesign indicated they wanted both of these changes. NHTSA’s determination of the sample size was dependent upon available resources, and NHTSA emphasized this in its January 2015 report to Congress.
According to both NHTSA and Westat officials, budget constraints were the key factor driving both the new sample design and the decision not to increase the sample size. The budget for NASS-CDS has remained at $12,500,000 per year since 2010, and NHTSA officials also told us that the future budget for CISS remains uncertain. Because of the budget constraints, Westat recommended a design for the new sample with the fewest number of PSUs, police jurisdictions, and police accident reports that would meet NHTSA’s precision requirements and that NHTSA could realistically afford given its budget. In addition, there are limits to how many investigations NHTSA’s crash technicians can conduct. Specifically, according to NHTSA, crash technicians currently can conduct only about 3 investigations every 2 weeks, to ensure that their investigations are high quality and thorough. The design that NHTSA is pursuing is expected to cost about $13.5 million annually. According to NHTSA officials, the expected cost for CISS is higher than the current NASS budget. NHTSA can afford to implement 24 PSUs at this time because, according to NHTSA officials, the amount appropriated for Highway Safety Research and Operations in fiscal year 2014 included a $5 million increase supporting the operating budget for crash data collection that can be used to supplement the NASS-CDS budget. NHTSA officials said that because this funding was added to NHTSA’s base budget, they expect the funding will be available in future years. However, according to NHTSA, this funding would need to keep pace with inflation to help offset expected increases in operational costs. Another improvement as a result of the Data Modernization Project is the flexibility of the new sample design. Although NHTSA did not pursue a larger sample compared to historic levels due to budget constraints, the new sample design will allow NHTSA to add or subtract PSUs or police jurisdictions to increase or decrease the sample size in the future if its budget changes. Adding PSUs to increase the sample size is statistically more efficient than simply adding more investigations within the selected 24 PSUs. According to statistical literature, greater increases in precision are achieved by increasing the number of PSUs rather than the number of investigations that are conducted per PSU. Additionally, according to NHTSA, because of limits in the number of crashes involving serious injuries or crashes involving newer vehicles within a particular geographic area, adding a PSU provides a new pool of crashes to sample from. NHTSA built this flexibility into the sample design to address the uncertainty of the future budget and to allow for sample size expansion if future budgets allow. To build in this flexibility, NHTSA identified 73 PSUs, which represent NHTSA’s preferred sample size for CISS, that it can bring online one at a time as resources become available. However, although adding PSUs is statistically more efficient, it is also more expensive than adding more investigations within the selected 24 PSUs, as described above. The sample size of 24 PSUs can also be reduced if budgets are further constrained, but that could jeopardize the gains in statistical precision achieved with the new sample design. NHTSA developed projections to illustrate what size of a sample the agency could potentially implement given future CISS budgets.
For example, according to NHTSA, if the budget for CISS was reduced to $11 million, even with a higher caseload than crash technicians currently conduct, the most investigations NHTSA could conduct annually is just under 2,600. If the budget for CISS was increased to $20 million, NHTSA could conduct over 5,000 investigations a year. The smaller sample for $11 million would be expected to produce less precise estimates than the larger sample for $20 million. Even though NHTSA expects its new design to meet its precision requirements for the seven key crash and injury estimates it identified, NHTSA officials said the design may not meet precision requirements for other estimates or include a sufficient number of specific crash types that occur infrequently (rare crash populations). According to NHTSA, the optimal sample size for CISS is impossible to determine without first defining which estimates should meet precision requirements or which rare crash populations are required to meet other analytic needs. For example, NHTSA officials said that 73 PSUs selecting about 15,000 investigations annually would be a reasonable sample size not only for attempting to meet precision requirements for additional estimates but also for obtaining estimates for rare populations. A rare population can be a crash type such as side impact crashes involving children, which, despite resulting in the death or injury of about 6,500 children under age 15, accounted for only about 0.1 percent of all crashes and 0.5 percent of serious injury crashes in 2011. Given such a small percentage of the total crash population, CISS as designed with 24 PSUs can be expected to select around 4 of these crashes per year for investigation. Increasing the sample size to 73 PSUs and 15,000 investigations could increase the number of selected serious injury side impact crashes involving an injured child to about 20 per year, according to NHTSA analysis. A sample of this size would allow NHTSA and external CISS users to better study these relatively rare crash populations and generalize their findings for these crashes to the population of all side impact crashes involving injured children. However, operating those 73 PSUs could cost at least three times the $13.5 million currently planned for CISS, or approximately $41 million annually, according to NHTSA officials. A smaller increase in sample size would also increase the number of selected side impact crashes, but to a lesser extent. For example, according to NHTSA, a sample of about 7,500 investigations could increase the number of selected serious injury side impact crashes involving an injured child to about 10 per year. According to NHTSA, this would require operating about 40 PSUs and cost about twice what is currently planned for CISS. This smaller increase would allow users to better study these populations but would require more time to accumulate enough cases to generalize their findings. However, NHTSA also noted that it prefers not to combine more than 5 years of crash data to shed light on a problem that depends on the ever-changing crash environment, and that 5 years of data should produce between 15 and 20 cases for even very rare crash populations (those accounting for 0.1 percent of serious injury crashes in a given year). Three users we interviewed estimated that a sample somewhere around 10,000 investigations per year would make the data considerably more useful to them.
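The rare-population arithmetic above reduces to multiplying annual investigations by the share of the sample expected to fall in the rare category. The short sketch below approximately reproduces the report's figures; the shares are our assumptions chosen for illustration (they rise with design size because the larger designs would oversample serious injury crashes more heavily) and are not NHTSA parameters.

# Back-of-the-envelope expected annual counts of a rare crash type (e.g.,
# serious injury side impact crashes involving a child) under different
# CISS designs. The sample shares are assumed values, not NHTSA parameters.
designs = {
    "24 PSUs (~4,250 investigations/yr)": (4_250, 0.0009),
    "40 PSUs (~7,500 investigations/yr)": (7_500, 0.0013),
    "73 PSUs (~15,000 investigations/yr)": (15_000, 0.0013),
}

for name, (n, share) in designs.items():
    print(f"{name}: about {n * share:.0f} expected cases per year")
# about 4, about 10, and about 20 cases per year, respectively

Run in reverse, the same arithmetic anticipates the make and model problem discussed below: at an effective selection share of roughly 0.3 percent, accumulating 20 cases of one crash type for a single vehicle model would require on the order of 20 / 0.003, or about 6,500, such crashes nationwide in a year.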
While the larger sample size would allow NHTSA to produce more estimates that meet precision requirements and more investigations of rare populations, it is not possible to quantify the benefit of this increase in precision. It is also not possible to determine the sample size that would result in the highest value for society in terms of reducing the human life and economic costs of motor vehicle crashes, because the causal link between the data collected and the potential benefits, if any, cannot be established. NHTSA could implement a sample even larger than 73 PSUs if resources allowed, but the sample sizes required to produce estimates for certain subgroups that some NASS users had requested, such as estimates at the make and model level, would be impractical. According to NHTSA officials, such an estimate would require a sample size that is not possible to determine for all vehicle types and that would exceed any reasonable expectation of resources available for CISS. There are many types of vehicles, some of which are more common (such as the Ford F-150 pickup truck) than others (such as the Tesla Model S). For any make and model vehicle, only a small percentage are involved in a crash that would be eligible for inclusion in the CISS sample. Whether a sample is large enough to yield an adequate number of crash types involving a particular make and model vehicle depends in large part on the number of crashes involving that vehicle type. According to GAO analysis, and similar to the above example of side-impact crashes resulting in the death or injury of a child, identifying 20 side-impact crashes resulting in death or injury and involving a particular make and model vehicle would require about 6,500 such crashes nationwide involving that make and model in one year, which is highly unlikely. As the crash type of interest approaches very small percentages of the total number of crashes, it becomes less probable that the sample will adequately capture these crashes. Since NHTSA only investigates several thousand crashes each year, the chance of even one of these rare crashes being selected for a NASS-CDS investigation is very small. Officials noted that NHTSA’s Special Crash Investigations (SCI) program conducts investigations into issues that arise from specific agency special needs, and those investigations could include make- and model-level defects and other issues. NHTSA currently has three SCI teams that travel to investigate crashes according to agency priorities and recalls, separate from current NASS-CDS sampled investigations.
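GAO’s make-and-model arithmetic is the same expectation run in reverse: given an effective sampling rate, one can solve for the national crash count needed to expect a target number of sampled cases. In the sketch below, the rate is inferred from the child side-impact figures in the text (about 20 selected cases per 6,500 qualifying crashes nationwide); it is not a published NHTSA parameter.

```python
# Inverse of the expectation above: the national count of a given crash type
# needed to expect `target_cases` sampled cases, at an effective sampling
# rate inferred from the child side-impact example (about 20 of 6,500).

SAMPLING_RATE = 20 / 6_500  # inferred, ~0.31 percent; not a published figure

def required_national_crashes(target_cases: int, rate: float = SAMPLING_RATE) -> int:
    return round(target_cases / rate)

print(required_national_crashes(20))  # 6500: matches the make-and-model figure
print(required_national_crashes(1))   # 325: even one expected case needs hundreds
```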
In addition to the ability to scale the sample size up or down, the new sample design also allows NHTSA to substitute a PSU, police jurisdiction, or police accident report, according to NHTSA officials. A PSU or police jurisdiction can be replaced if there are cooperation or information-sharing challenges. Substituting a PSU or police jurisdiction would be less challenging during implementation than after CISS is fully implemented, because substituting a PSU after implementation would require hiring and training new crash technicians. A police accident report can be replaced if it is incomplete or cannot be thoroughly researched, but replacing a police accident report was purposefully made difficult to avoid the potential for bias from crash technicians who choose to replace a police accident report for their own reasons. For example, if a police accident report included a vehicle that was impossible to locate for inspection, the technician could allow the sampling algorithm to select a replacement police accident report. Finally, the new design allows NHTSA to implement separate modules to study crashes involving large trucks, motorcyclists, bicyclists, and pedestrians—as NASS users had requested. Westat developed initial plans for each of these subsets as additional modules that could be conducted as separate studies utilizing the CISS sites.

As part of the Data Modernization Project, NHTSA also plans to equip its crash technicians with new technology to help improve the efficiency and accuracy of the data they collect. Improving the accuracy of NASS-CDS data with more electronic data collection methods was one aspect of NASS that users indicated they hoped NHTSA would address as part of the redesign. For example, in the comments NHTSA received in response to its Federal Register notice, NASS users indicated they wanted scalable diagrams of crash scenes. Currently, crash technicians collect NASS-CDS data using paper forms and have to enter the roadway to manually measure a crash scene using measuring wheels and tape measures. Afterward, they have to manually enter their measurements into a computer program, which creates an electronic image of the crash scene that in turn is made available to NASS users. However, according to NHTSA, those drawings are not ideal when attempting to conduct detailed research of a crash scene because the diagrams provided are not scalable. The new equipment NHTSA plans to provide its crash technicians includes tablet computers, which will allow crash technicians to electronically collect and transmit data remotely from the field; new accident reconstruction software, which will automatically create scalable diagrams of crash scenes; and new electronic distance measuring equipment, which is expected to improve the accuracy and efficiency of scene and vehicle inspections while also allowing crash technicians to take their scene measurements safely from the roadside. Figure 6 shows NHTSA crash technicians collecting crash scene data using tape measures in the street and using new technology from the side of the road.

While NHTSA expects new equipment will help its crash technicians collect more accurate data, it does not expect the new equipment will considerably reduce the time it takes to conduct an investigation or allow its crash technicians to conduct more investigations. NHTSA officials said the new equipment they plan to provide should help reduce the time it currently takes to conduct scene and vehicle inspections. However, scene and vehicle inspections represent only a portion of the time NHTSA’s crash technicians spend each week on investigations. Further, as part of the NASS redesign NHTSA is also increasing the amount of information that its crash technicians collect, which will require more time. This includes data on the use of crash-avoidance technologies in newer vehicles as well as additional data for older vehicles, as some users had requested. Thus, the new equipment will not substantially decrease the amount of time technicians spend overall collecting data for crash investigations or the cost of collecting these data. Collecting data this detailed is expensive and time-consuming.
For example, according to a study NHTSA conducted in 2012, an average NASS-CDS investigation takes about 25 hours to perform, and NHTSA’s crash technicians spend, on average, about 10 percent of that time inspecting crash scenes and about 25 percent of their time inspecting vehicles. In contrast, NHTSA’s crash technicians spend 13 percent of their time sampling police accident reports for investigations. Figure 7 below provides information on the average percentage of hours per week that NHTSA’s crash technicians spend performing various aspects of NASS-CDS investigations. Because NHTSA has not yet started collecting data for CISS, it is not possible to determine the number of hours per week crash technicians will spend on various aspects of the sampling and data collection for CISS crash investigations.

NASS-CDS provides NHTSA and others with an important source of data to understand the real-world nature and consequences of motor-vehicle traffic crashes. In redesigning NASS-CDS, NHTSA has followed a process that is consistent with applicable government-wide standards and guidance for redesigning statistical surveys. NHTSA has also taken steps to improve upon the original design for NASS-CDS in developing CISS—the system that will replace NASS-CDS—such as by making the sample more precise as well as by making the sample design more flexible to adapt to future budgets. While the proposed sample size will be sufficient to meet NHTSA’s requirements for the program, NHTSA does not plan to substantially increase the sample size. By increasing the size of the CISS sample, NHTSA and others could likely do more to study motor-vehicle traffic crashes in an effort to save lives and reduce the economic costs of crashes. Sampling sufficient cases to conduct analyses of rare populations requires a significantly larger sample. However, NHTSA’s ability to increase the size of this sample is dependent on its available resources, and according to NHTSA, increasing the size of the sample to such an extent would require a budget several times its current size. The specific benefits of the larger sample are impossible to determine, leaving the Congress with less information than would be desirable to help determine the appropriate level of funding for this program. However, should the Congress decide that it would be appropriate to enable NHTSA and other users to conduct additional analyses of crashes that, while they may occur rarely, can still result in significant loss of life and economic cost, this report provides information on the potential for different sample sizes to meet that need.

We provided a draft of this report to the Department of Transportation for review and comment. The Department of Transportation provided technical comments, which we incorporated as appropriate.

We are sending copies of this report to the appropriate congressional committees, the Secretary of Transportation, and the Acting Chairman of the National Transportation Safety Board. This report will also be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or flemings@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II.
This report assesses (1) the process the National Highway Traffic Safety Administration (NHTSA) used to redesign the National Automotive Sampling System Crashworthiness Data System (NASS-CDS) and (2) the potential for this redesign to improve the NASS-CDS sample. We limited our scope to assessing NHTSA’s redesign of the NASS-CDS component of the National Automotive Sampling System (NASS). As a result, this report does not discuss the National Automotive Sampling System General Estimates System (NASS-GES) or other NHTSA data collection programs. To assess the process NHTSA used to redesign NASS-CDS, we reviewed pertinent documents related to the NASS redesign and interviewed knowledgeable NHTSA officials from the National Center for Statistics and Analysis—a NHTSA component that oversees the agency’s data collection efforts, including NASS—and representatives of Westat, the contractor selected to redesign the NASS-CDS sample. We also interviewed 21 NASS users or other interested parties, including automobile manufacturers, suppliers, safety advocates, members of the medical community, and representatives from research organizations, to understand how they use NASS-CDS and the improvements they would like to see NHTSA make to NASS-CDS as part of the redesign. We selected these 21 NASS users by first contacting those that submitted comments to NHTSA on the redesign and then asking these initial contacts whom else we should interview. Specific NASS users we interviewed or received comments from are listed in table 4. The results of our discussions with NASS users are not generalizable to all NASS users but provide insights into aspects of NASS-CDS that some users indicated they would like to see improved. In addition, we visited two of the geographic locations, called primary sampling units (PSUs), where NHTSA collects NASS-CDS data, to observe NHTSA’s crash technicians conduct their work, and spoke with NHTSA crash technicians at two others. The PSUs we visited were Seattle, Washington, and King County, Washington; the PSUs we contacted were Allegheny County, Pennsylvania, and Muskegon County, Michigan. We selected these locations to ensure we included each type of PSU (i.e., urban, county, or group of counties) and to ensure that we included at least one PSU from each of the two contractors that NHTSA uses to implement the program. The results of our discussions with PSUs are not generalizable to all PSUs but provide insights into aspects of the work crash technicians do. We assessed NHTSA’s efforts to redesign NASS based on government-wide standards and guidelines issued by the Office of Management and Budget (OMB) that apply to the development and implementation of statistical surveys such as NASS. OMB’s standards and guidelines provide a framework for the development of survey concepts, methods, and design; collecting data; processing data; producing estimates; analyzing data; reviewing procedures; and disseminating the results. These OMB documents also specify the professional principles and practices that federal agencies should follow and the level of quality and effort expected when initiating a new survey or redesigning an existing survey such as NASS-CDS. Because NHTSA was in the process of redesigning NASS at the time of our review, we focused our assessment on reviewing NHTSA’s processes as they relate to the development of survey concepts, methods, and design.
To assess the potential for the new sample design to improve NASS-CDS data, increase precision of estimates, and increase the sample size, a team that included GAO social science analysts with statistical survey expertise reviewed the sampling methodology for the current NASS-CDS sample and the design proposed for the new Crash Investigation Sampling System (CISS) sample. As a part of this review, we analyzed the proposed changes to the sample design, the number of PSUs chosen, the overall sample size recommended, and NHTSA’s budgetary constraints for the new sample. We compared the proposed redesign, including the sample selection process and sample size, with literature on efficient statistical sample design to assess the reasonableness of the redesign. We also assessed the extent to which NHTSA’s proposed design was responsive to user needs, according to what we learned from our NASS user interviews. Finally, we interviewed NHTSA and Westat officials on the new sample design. We conducted this performance audit from April 2014 through March 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, the following individuals made important contributions to this report: Andrew Von Ah, Assistant Director; James Ashley; Lorraine Ettaro; David Hooper; Wesley A. Johnson; Sarah Jones; Joshua Ormond; and Amy Rosewarne.
In 2010, motor vehicle crashes in the United States cost almost 33,000 lives, injured 2.2 million people, and resulted in almost $900 billion in economic costs. As part of its mission to reduce these losses, NHTSA collects and analyzes data on motor vehicle crashes. One NHTSA program that collects crash data is NASS-CDS—a nationally representative sample of police-reported motor-vehicle traffic crashes; however, the NASS-CDS sample was designed in 1988, and subsequent shifts in the population and a declining sample size have necessitated an update of this sample. In 2012, NHTSA started taking steps to redesign NASS-CDS. Congress mandated that GAO review NHTSA's progress in redesigning NASS-CDS. This report assesses (1) the process NHTSA used to redesign NASS-CDS and (2) the potential for this redesign to improve the NASS-CDS sample. To conduct this review, GAO reviewed relevant information regarding the NASS-CDS redesign and interviewed officials from NHTSA and Westat, the contractor selected to assist NHTSA in redesigning NASS-CDS. Based on comments the public submitted to NHTSA in response to a notice in the Federal Register, GAO also interviewed 21 users of this data and other interested parties regarding the improvements they would like made to NASS-CDS. The Department of Transportation reviewed a draft of this report and provided technical comments, which were incorporated as appropriate. The National Highway Traffic Safety Administration (NHTSA) followed a reasonable process for redesigning the National Automotive Sampling System Crashworthiness Data System (NASS-CDS), which is a nationally representative sample of police-reported motor-vehicle traffic crashes. The Office of Management and Budget (OMB) has standards and guidelines that specify the professional principles and practices that agencies should follow and the level of quality and effort expected when redesigning an existing survey, such as NASS-CDS. NHTSA followed a process consistent with applicable OMB standards and guidelines. For example, NHTSA consulted with NASS-CDS users to identify their requirements and expectations in redesigning NASS-CDS and tasked the contractor, Westat, with developing proposals for a new sample design to meet users' data needs in an effective and efficient manner. As of January 2015, NHTSA planned to replace NASS-CDS with a new sample, called the Crash Investigation Sampling System (CISS). However, NHTSA did not meet a congressional deadline to report on the benefits of increasing the size of the NASS-CDS sample. Specifically, the Moving Ahead for Progress in the 21st Century Act required NHTSA to report, by October 1, 2013, on whether there would be a benefit to increasing the size of the NASS sample as well as to report on the resources necessary to implement NHTSA's recommended sample size, among other things. NHTSA issued its required report in January 2015 as GAO was completing its review. In its report, NHTSA noted that increasing the size of the NASS-CDS sample would help meet the evolving needs of NASS users, but stated there was no precise answer to what an optimal sample size for NASS-CDS would be. NHTSA expects the new sample it plans to implement as part of this redesign to generate greater statistical precision for key crash-type and injury-severity estimates than that of NASS-CDS using a similarly sized sample. One way NHTSA was able to generate more precise estimates was by selecting new sites at which to collect data.
These sites, or “primary sampling units,” better represent the current population and distribution of motor vehicle crashes nationwide, representation that allows NHTSA and others to generate more precise estimates using the data. NHTSA also expects CISS to sample more crashes involving serious injuries and newer vehicles than NASS-CDS currently allows, as users had requested. NHTSA conducted about 4,700 NASS-CDS investigations annually between 1988 and 2013, and while there is no clear optimal sample size, a larger sample size could allow NHTSA to generate estimates that are even more precise or generate estimates for types of crashes that occur infrequently, estimates that could contribute to research that can affect vehicle safety. However, NHTSA's ability to increase the new CISS sample size is limited by its current and expected budget. Additional planned improvements to NASS-CDS include new technologies that allow for safer and more accurate measurements of accident scenes and vehicles involved in crashes. While NHTSA expects these new technologies to also result in some time savings, NHTSA does not expect them to allow for more investigations due to the time-intensive nature of the CISS data-collection effort.
The Army has taken a number of steps since June 2010 at different levels to provide for more effective management and oversight of contracts supporting Arlington, including improving visibility of contracts, establishing new support relationships, formalizing policies and procedures, and increasing the use of dedicated contracting staff to manage and improve acquisition processes. While significant progress has been made, we have recommended that the Army take further action in these areas to ensure continued improvement and institutionalize progress made to date. These recommendations and the agency’s response are discussed later in this statement.

Arlington does not have its own contracting authority and, as such, relies on other contracting offices to award and manage contracts on its behalf. ANCP receives contracting support in one of two main ways, either by (1) working directly with contracting offices to define requirements, ensure the appropriate contract vehicle, and provide contract oversight, or (2) partnering with another program office to leverage expertise and get help with defining requirements and providing contract oversight. Those program offices, in turn, use other contracting arrangements to obtain services and perform work for Arlington. Using data from multiple sources, we identified 56 contracts and task orders that were active during fiscal year 2010 and the first three quarters of fiscal year 2011 under which these contracting offices obligated roughly $35.2 million on Arlington’s behalf. These contracts and task orders supported cemetery operations, such as landscaping, custodial, and guard services; construction and facility maintenance; and new efforts to enhance information-technology systems for the automation of burial operations. Figure 1 identifies the contracting relationships, along with the number of contracts and dollars obligated by contracting office, for the contracts and task orders we reviewed.

At the time of our review, we found that ANCP did not maintain complete data on contracts supporting its operations. We have previously reported that the effective acquisition of services requires reliable data to enable informed management decisions. Without complete data, ANCP leadership may be without sufficient information to identify, track, and ensure the effective management and oversight of its contracts. While we obtained information on Arlington contracts from various sources, limitations associated with each of these sources make identifying and tracking Arlington’s contracts as a whole difficult. For example:

Internal ANCP data. A contract specialist detailed to ANCP in September 2010 developed and maintained a spreadsheet to identify and track data for specific contracts covering daily cemetery operations and maintenance services. Likewise, ANCP resource management staff maintain a separate spreadsheet that tracks purchase requests and some associated contracts, as well as the amount of funding provided to other organizations through the use of military interdepartmental purchase requests. Neither of these spreadsheets identifies the specific contracts and obligations associated with Arlington’s current information-technology and construction requirements.

Existing contract and financial systems. The Federal Procurement Data System-Next Generation (FPDS-NG) is the primary system used to track governmentwide contract data, including those for the Department of Defense (DOD) and the Army.
The Arlington funding office identification number, a unique code that is intended to identify transactions specific to Arlington, is not consistently used in this system and, in fact, was used for only 34 of the 56 contracts in our review. In October 2010, consistent with a broader Army initiative, ANCP implemented the General Fund Enterprise Business System (GFEBS) to enhance financial management and oversight and to improve its capability to track expenditures. We found that data in this system did not identify the specific information-technology contracts supported by the Army Communications-Electronics Command, Army Geospatial Center, Naval Supply Systems Command Weapon Systems Support office, and others. Officials at ANCP and at the MICC-Fort Belvoir stated that they were exploring the use of additional data resources to assist in tracking Arlington contracts, including the Virtual Contracting Enterprise, an electronic tool intended to help provide visibility into and analysis of elements of the contracting process.

Contracting support organizations. We also found that Army contracting offices had difficulty in readily providing complete and accurate data to us on Arlington contracts. For example, the National Capital Region Contracting Center could not provide a complete list of active contracts supporting Arlington during fiscal years 2010 and 2011 and in some cases did not provide accurate dollar amounts associated with the contracts it identified. USACE also had difficulty providing a complete list of active Arlington contracts for this time frame. The MICC-Fort Belvoir contracting office was able to provide a complete list of the recently awarded contracts supporting Arlington with accurate dollar amounts for this time frame, and those data were supported by similar information from Arlington.

The Army has also taken a number of steps to better align ANCP contract support with the expertise of its partners. However, some of the agreements governing these relationships do not yet fully define roles and responsibilities for contracting support. We have previously reported that a key factor in improving DOD’s service acquisition outcomes—that is, obtaining the right service, at the right price, in the right manner—is having defined responsibilities and associated support structures. Going forward, sustained attention on the part of ANCP and its partners will be important to ensure that contracts of all types and risk levels are managed effectively. The following summarizes ongoing efforts in this area:

ANCP established a new contracting support agreement with the Army Contracting Command in August 2010. The agreement states that the command will assign appropriate contracting offices to provide support, in coordination with ANCP, and will conduct joint periodic reviews of new and ongoing contract requirements. In April 2011, ANCP also signed a separate agreement with the MICC, part of the Army Contracting Command, which outlines additional responsibilities for providing contracting support to ANCP. While this agreement states that the MICC, through the Fort Belvoir contracting office, will provide the full range of contracting support, it does not specify the types of requirements that will be supported, nor does it specify that other offices within the command may also do so.
ANCP signed an updated support agreement with USACE in December 2010, which states that these organizations will coordinate to assign appropriate offices to provide contracting support and that USACE will provide periodic joint reviews of ongoing and upcoming requirements. At the time of our review, USACE officials noted that they were in the process of finalizing an overarching program management plan with ANCP, which, if implemented, would provide additional detail about the structure of and roles and responsibilities for support. USACE and ANCP have also established a Senior Executive Review Group, which updates the senior leadership at both organizations on the status of ongoing efforts.

ANCP has also put agreements in place with the Army Information Technology Agency (ITA) and the Army Analytics Group, which provide program support for managing information-technology infrastructure and enhancing operational capabilities. Officials at ANCP decided to leverage this existing Army expertise, rather than attempting to develop such capabilities independently, as was the case under the previous Arlington management. For example, the agreement in place with ITA identifies the services that will be provided to Arlington, performance metrics against which ITA will be measured, as well as Arlington’s responsibilities. These organizations are also responsible for managing the use of contracts in support of their efforts; however, the agreement with ANCP does not specifically address roles and responsibilities associated with the use and management of these contracts supporting Arlington requirements. Although officials from these organizations told us that they currently understand their responsibilities, roles and responsibilities that are not clearly defined in the existing agreements may become less clear in the future when personnel change.

ANCP has developed new internal policies and procedures and improved training for staff serving as contracting officer’s representatives, and has dedicated additional staff resources to improve contract management. Many of these efforts were in process at the time of our review, including decisions on contracting staff needs, and their success will depend on continued management attention. The following summarizes our findings in this area:

Arlington has taken several steps to more formally define its own internal policies and procedures for contract management. In July 2010, the Executive Director of ANCP issued guidance stating that the Army Contracting Command and USACE are the only authorized contracting centers for Arlington. Further, ANCP is continuing efforts to (1) develop standard operating procedures associated with purchase requests; (2) develop memorandums for all ANCP employees that outline principles of the procurement process, as well as training requirements for contracting officer’s representatives; and (3) create a common location for reference materials and information associated with Arlington contracts. In May 2011, the Executive Director issued guidance requiring contracting officer’s representative training for all personnel assigned to perform that role, and at the time of our review, all of the individuals serving as contracting officer’s representatives had received training for that position. ANCP, in coordination with the MICC-Fort Belvoir contracting office, is evaluating staffing requirements to determine the appropriate number, skill level, and location of contracting personnel.
In July 2010, the Army completed a study that assessed Arlington’s manpower requirements and identified the need for three full-time contract specialist positions. While these positions have not been filled to date, ANCP’s needs have instead been met through the use of staff provided by the MICC. At the time of our review, the MICC-Fort Belvoir was providing a total of 10 contracting staff positions in support of Arlington, 5 of which are funded by ANCP, with the other 5 funded by the MICC-Fort Belvoir to help ensure adequate support for Arlington requirements. ANCP officials have identified the need for a more senior contracting specialist and stated that they intend to request an update to their staffing allowance for fiscal year 2013 to fill this new position.

Earlier reviews of Arlington identified numerous issues with contracts in place before the arrival of the new leadership at ANCP. While our review of similar contracts found common concerns, we also found that contracts and task orders awarded since June 2010 reflect improvements in acquisition practices. Our previous contracting-related work has identified the need to have well-defined requirements, sound business arrangements (i.e., contracts in place), and the right oversight mechanisms to ensure positive outcomes. We found examples of improved documentation, better definition and consolidation of existing requirements for services supporting daily cemetery operations, and more specific requirements for contractor performance. At the time of our review, many of these efforts were still under way, so while initial steps taken reflect improvement, their ultimate success is not yet certain.

The Army has also taken positive steps and implemented improvements to address other management deficiencies and to provide information and assistance to families. It has implemented improvements across a broad range of areas at Arlington, including developing procedures for ensuring accountability over remains, taking actions to improve information assurance, and improving its capability to respond to the public and to families’ inquiries. For example, Arlington officials have updated and documented the cemetery’s chain-of-custody procedures for remains, including multiple verification steps by staff members and the tracking of decedent information through a daily schedule, electronic databases, and tags affixed to urns and caskets entering Arlington. Nevertheless, we identified several areas where challenges remain:

Managing information-technology investments. Since June 2010, ANCP has invested in information-technology improvements to correct existing problems at Arlington and has begun projects to further enhance the cemetery’s information-technology capabilities. However, these investments and planned improvements are not yet guided by an enterprise architecture—or modernization blueprint. Our experience has shown that developing this type of architecture can help minimize the risk of developing systems that are duplicative, poorly integrated, and unnecessarily costly to maintain. ANCP is working to develop an enterprise architecture, and officials told us in January that they expect the architecture will be finalized in September 2012.
Until the architecture is in place and ANCP’s ongoing and planned information-technology investments are assessed against that architecture, ANCP lacks assurance that these investments will be aligned with its future operational environment, increasing the risk that modernization efforts will not adequately meet the organization’s needs.

Updating workforce plans. The Army took a number of positive steps to address deficiencies in its workforce plans, including completing an initial assessment of its organizational structure in July 2010 after the Army IG found that Arlington was significantly understaffed. However, ANCP’s staffing requirements and business processes have continued to evolve, and these changes have made that initial workforce assessment outdated. Since the July 2010 assessment, officials have identified the need for a number of new positions, including positions in ANCP’s public-affairs office and a new security and emergency-response group. Additionally, Arlington has revised a number of its business processes, which could result in a change in staffing needs. Although ANCP has adjusted its staffing levels to address emerging requirements, its staffing needs have not been formally reassessed. Our prior work has demonstrated that this kind of assessment can improve workforce planning, which can enable an organization to remain aware of and be prepared for its current and future needs. ANCP officials have periodically updated Arlington’s organizational structure as they identify new requirements, and officials told us in January that they plan to completely reassess staffing within ANCP in the summer of 2012 to ensure that it has the staff needed to achieve its goals and objectives. Until this reassessment is completed and documented, ANCP lacks assurance that it has the correct number and types of staff needed to achieve its goals and objectives.

Developing an organizational assessment program. Since 2009, ANCP has been the subject of a number of audits and assessments by external organizations that have reviewed many aspects of its management and operations, but it has not yet developed its own assessment program for evaluating and improving cemetery performance on a continuous basis. Both the Army IG and VA have noted the importance of assessment programs in identifying and enabling improvements of cemetery operations to ensure that cemetery standards are met. Further, the Army has emphasized the importance of maintaining an inspection program that includes a management tool to identify, prevent, or eliminate problem areas. At the time of our review, ANCP officials told us they were in the process of developing an assessment program and were adapting VA’s program to meet the needs of the Army’s national cemeteries. ANCP officials estimated in January that they will be ready to perform their first self-assessment in late 2012. Until ANCP institutes an assessment program that includes an ability to complete a self-assessment of operations and an external assessment by cemetery subject-matter experts, it is limited in its ability to evaluate and improve aspects of cemetery performance.

Coordinating with key partners. While ANCP has improved its coordination with other Army organizations, we found that it has encountered challenges in coordinating with key operational partners, such as the Military District of Washington, the military service honor guards, and Joint Base Myer-Henderson Hall.
Officials from these organizations told us that communication and collaboration with Arlington have improved, but they have encountered challenges and there are opportunities for continued improvement. For example, officials from the Military District of Washington and the military service honor guards indicated that at times they have experienced difficulties working with Arlington’s Interment Scheduling Branch and provided records showing that from June 24, 2010, through December 15, 2010, there were at least 27 instances in which scheduling conflicts took place. These challenges are due in part to a lack of written agreements that fully define how these operational partners will support and interact with Arlington. Our prior work has found that agencies can derive benefits from enhancing and sustaining their collaborative efforts by institutionalizing these efforts with agreements that define common outcomes, establish agreed-upon roles and responsibilities, identify mechanisms used to monitor and evaluate collaborative efforts, and enable the organizations to leverage their resources. ANCP has a written agreement in place with Joint Base Myer-Henderson Hall, but this agreement does not address the full scope of how these organizations work together. Additionally, ANCP has drafted, but has not yet signed, a memorandum of agreement with the Military District of Washington. ANCP has not drafted memorandums of agreement with the military service honor guards, even though each honor guard has its own scheduling procedure that it implements directly with Arlington and each service works with Arlington to address operational challenges. By developing memorandums of agreement with its key operational partners, ANCP will be better positioned to ensure effective collaboration with these organizations and to help minimize future communication and coordination challenges.

Developing a strategic plan. Although ANCP officials have been taking steps to address challenges at Arlington, at the time of our review they had not adopted a strategic plan aimed at achieving the cemetery’s longer-term goals. An effective strategic plan can help managers to prioritize goals; identify actions, milestones, and resource requirements for achieving those goals; and establish measures for assessing progress and outcomes. Our prior work has shown that leading organizations prepare strategic plans that define a clear mission statement, a set of outcome-related goals, and a description of how the organization intends to achieve those goals. Without a strategic plan, ANCP is not well positioned to ensure that cemetery improvements are in line with the organizational mission and achieve desired outcomes. ANCP officials told us during our review that they were at a point where the immediate crisis at the cemetery had subsided and they could focus their efforts on implementing their longer-term goals and priorities. In January, ANCP officials showed us a newly developed campaign plan. While we have not evaluated this plan, our preliminary review found that it contains elements of an effective strategic plan, including expected outcomes and objectives for the cemetery and related performance metrics and milestones.

Developing written guidance for providing assistance to families. After the Army IG issued its findings in June 2010, numerous families called Arlington to verify the burial locations of their loved ones. ANCP developed a protocol for investigating these cases and responding to the families.
Our review found that ANCP implemented this protocol, and we reviewed file documentation for a sample of these cases. In reviewing the assistance provided by ANCP when a burial error occurred, we found that ANCP’s Executive Director or Chief of Staff contacted the affected families. ANCP’s Executive Director—in consultation with cemetery officials and affected families—made decisions on a case-by-case basis about the assistance that was provided to each family. For instance, some families who lived outside of the Washington, D.C., area were reimbursed for hotel and travel costs. However, the factors that were considered when making these decisions were not documented in a written policy. In its June 2010 report, the Army IG noted in general that the absence of written policies left Arlington at risk of developing knowledge gaps as employees leave the cemetery. By developing written guidance that addresses the cemetery’s interactions with families affected by burial errors, ANCP could identify pertinent DOD and Army regulations and other guidance that should be considered when making such decisions. Also, with written guidance the program staff could identify the types of assistance that can be provided to families. In January, ANCP provided us with a revised protocol for both agency-identified and family member-initiated gravesite inquiries. The revised protocol provides guidance on the cemetery’s interactions with the next of kin and emphasizes the importance of maintaining transparency and open communication with affected families.

A transfer of jurisdiction for the Army’s two national cemeteries to VA is feasible based on historical precedent for the national cemeteries and examples of other reorganization efforts in the federal government. However, we identified several factors that may affect the advisability of making such a change, including the potential costs and benefits, potential transition challenges, and the potential effect on Arlington’s unique characteristics. In addition, given that the Army has taken steps to address deficiencies at Arlington and has improved its management, it may be premature to move forward with a change in jurisdiction, particularly if other options for improvement exist that entail less disruption. During our review, we identified opportunities for enhancing collaboration between the Army and VA that could leverage their strengths and potentially lead to improvements at all national cemeteries.

Transferring cemetery jurisdiction could have both benefits and costs. Our prior work suggests that government reorganization can provide an opportunity for greater effectiveness in program management and result in improved efficiency over the long term, and can also result in short-term operational costs. At the time of our review, Army and VA officials told us they were not aware of relevant studies that may provide insight into the potential benefits and costs of making a change in cemetery jurisdiction. However, our review identified areas where VA’s and the Army’s national cemeteries have similar, but not identical, needs and have developed independent capabilities to meet those needs. For example, each agency has its own staff, processes, and systems for determining burial eligibility and scheduling and managing burials. While consolidating these capabilities may result in long-term efficiencies, there could also be challenges and short-term costs.

Potential transition challenges may arise in transferring cemetery jurisdiction.
Army and VA cemeteries have similar operational requirements to provide burial services for service members, veterans, and veterans’ family members; however, officials identified areas where the organizations differ and stated that there could be transition challenges if VA were to manage Arlington, including challenges pertaining to the regulatory framework, appropriations structure, and contracts. For example, Arlington has more restrictive eligibility criteria for in-ground burials, which limits the number of individuals eligible for burial at the cemetery. If Arlington were subject to the same eligibility criteria as VA’s cemeteries, eligibility for in-ground burials at Arlington would be greatly expanded. (Burial eligibility at VA’s national cemeteries is governed by 38 U.S.C. § 2402 and 38 C.F.R. § 38.620; burial eligibility at Arlington is governed by 38 U.S.C. § 2410 and 32 C.F.R. § 553.15.) Additionally, the Army’s national cemeteries are funded through a different appropriations structure than VA’s national cemeteries. If the Army’s national cemeteries were transferred to VA, Congress would have to choose whether to alter the funding structure currently in place for Arlington.

Mission and vision statements. The Army and VA have developed their own mission and vision statements for their national cemeteries that differ in several ways. Specifically, VA seeks to be a model of excellence for burials and memorials, while Arlington seeks to be the nation’s premier military cemetery.

Military honors provided to veterans. The Army and VA have varying approaches to providing military funeral honors. VA is not responsible for providing honors to veterans, and VA cemeteries generally are not involved in helping families obtain military honors from DOD. In contrast, Arlington provides a range of burial honors depending on whether an individual is a service member killed in action, a veteran, or an officer.

Ceremonies and special events. Arlington hosts a large number of ceremonies and special events in a given year, some of which may involve the President of the United States as well as visiting heads of state. From June 10, 2010, through October 1, 2011, Arlington hosted more than 3,200 wreath-laying ceremonies, over 70 memorial ceremonies, and 19 state visits, in addition to Veterans Day and Memorial Day ceremonies and special honors for Corporal Frank Buckles, the last American servicemember from World War I. VA officials told us that their cemeteries do not support a similar volume of ceremonies, and as a result they have less experience in this area than the Army.

During our review, we found that there are opportunities to expand collaboration between the Army and VA that could improve the efficiency and effectiveness of these organizations’ cemetery operations. Our prior work has shown that achieving results for the nation increasingly requires that federal agencies work together, and when considering the nation’s long-range fiscal challenges, the federal government must identify ways to deliver results more efficiently and in a way that is consistent with its limited resources. Since the Army IG issued its findings in June 2010, the Army and VA have taken steps to partner more effectively. The Army’s hiring of several senior VA employees to help manage Arlington has helped to foster collaboration, and the two agencies signed a memorandum of understanding that allows ANCP employees to attend classes at VA’s National Training Center.
However, the Army and VA may have opportunities to collaborate and avoid duplication in other areas that could benefit the operations of either or both cemetery organizations. For example, the Army and VA are upgrading or redesigning some of their core information-technology systems supporting cemetery operations. By continuing to collaborate in this area, the agencies can better ensure that their information-technology systems are able to communicate, thereby helping to prevent operational challenges stemming from a lack of compatibility between these systems in the future. In addition, each agency may have specialized capabilities that it could share with the other. VA, for example, has staff dedicated to determining burial eligibility, and the Army has an agency that provides geographic-information-system and global-positioning-system capabilities—technologies that VA officials said they are examining for use at VA’s national cemeteries. While the Army and VA have taken steps to improve collaboration, at the time of our review the agencies had not established a formal mechanism to identify and analyze issues of shared interest, such as process improvements, lessons learned, areas for reducing duplication, and solutions to common problems. VA officials indicated that they planned to meet with ANCP officials in the second quarter of fiscal year 2012, with the aim of enhancing collaboration between the two agencies. Unless the Army and VA collaborate to identify areas where the agencies can assist each other, they could miss opportunities to take advantage of each other’s strengths—thereby missing chances to improve the efficiency and effectiveness of cemetery operations—and are at risk of investing in duplicative capabilities.

The success of the Army’s efforts to improve contracting and management at Arlington will depend on continued focus in various areas. Accordingly, we made a number of recommendations in our December 2011 reports. In the area of contracting, we recommended that the Army implement a method to track complete and accurate contract data, ensure that support agreements clearly identify roles and responsibilities for contracting, and determine the number and skills necessary for contracting staff. In its written comments, DOD partially concurred with these recommendations, agreeing that there is a need to take actions to address the issues we raised, but indicating that our recommendations did not adequately capture Army efforts currently underway. We believe our report reflects the significant progress made by Arlington and that implementation of our recommendations will help to institutionalize the positive steps taken to date. With regard to our recommendation to identify and implement a method to track complete and accurate contract data, DOD noted that Arlington intends to implement, by April 2012, a methodology based on an electronic tool that is expected to collect and reconcile information from a number of existing data systems. Should this methodology address the shortcomings within these data systems identified in our report, we believe it would satisfy our recommendation. DOD noted planned actions, expected to be completed by March 2012, that, if implemented, would satisfy the intent of our other two recommendations.
With regard to other management challenges at Arlington, we recommended that the Army implement its enterprise architecture and reassess ongoing and planned information-technology investments; update its assessment of ANCP’s workforce needs; develop and implement a program for assessing and improving cemetery operations; develop memorandums of understanding with Arlington’s key operational partners; develop a strategic plan; and develop written guidance to help determine the types of assistance that will be provided to families affected by burial errors. DOD fully agreed with our recommendations that the Army update its assessment of ANCP’s workforce needs and implement a program for assessing and improving cemetery operations. DOD partially agreed with our other recommendations. In January, ANCP officials provided us with updates on ANCP’s plans to take corrective actions, as discussed in this statement. With regard to implementing an enterprise architecture, DOD stated that investments made to date in information technology have been modest and necessary to address critical deficiencies. We recognize that some vulnerabilities must be expeditiously addressed. Nevertheless, our prior work shows that organizations increase the risk that their information-technology investments will not align with their future operational environment if these investments are not guided by an approved enterprise architecture. Regarding its work with key operational partners, DOD stated that it recognizes the value of establishing memorandums of agreement and noted the progress that the Army has made in developing memorandums of agreement with some of its operational partners. We believe that the Army should continue to pursue and finalize agreements with key operational partners that cover the full range of areas where these organizations must work effectively together. With regard to a strategic plan, DOD stated that it was in the process of developing such a plan. As discussed previously, ANCP officials in January showed us a newly developed campaign plan that, based on our preliminary review, contains elements of an effective strategic plan. Regarding written guidance on the factors that the Executive Director will consider when determining the types of assistance provided to families affected by burial errors, DOD stated that such guidance would limit the Executive Director’s ability to exercise leadership and judgment to make an appropriate determination. We disagree with this view. Our recommendation does not limit the Executive Director’s discretion, which we consider to be an essential part of ensuring that families receive the assistance they require in these difficult situations. Our recommendation, if implemented, would improve visibility into the factors that guide decision making in these cases.

Finally, we recommended that the Army and VA implement a joint working group or other such mechanism to enable ANCP and VA’s National Cemetery Administration to collaborate more closely in the future. Both DOD and VA concurred with this recommendation. As noted, VA stated that a meeting to enhance collaboration is planned for the second quarter of fiscal year 2012.

Chairmen Wilson and Wittman, Ranking Members Davis and Cooper, and Members of the Subcommittees, this completes our prepared statement. We would be pleased to respond to any questions that you may have at this time.
For questions about this statement, please contact Belva Martin, Director, Acquisition and Sourcing Management, on (202) 512-4841 or martinb@gao.gov or Brian Lepore, Director, Defense Capabilities and Management, on (202) 512-4523 or leporeb@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. Individuals who made key contributions to this testimony include Brian Mullins, Assistant Director; Tom Gosling, Assistant Director; Kyler Arnold; Russell Bryan; George M. Duncan; Kathryn Edelman; Julie Hadley; Kristine Hassinger; Lina Khan; and Alex Winograd. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Arlington National Cemetery (Arlington) is the final resting place for many of our nation’s military servicemembers, their family members, and others. In June 2010, the Army Inspector General identified problems at the cemetery, including deficiencies in contracting and management, burial errors, and a failure to notify next of kin of errors. In response, the Secretary of the Army issued guidance creating the position of the Executive Director of the Army National Cemeteries Program (ANCP) to manage Arlington and requiring changes to address the deficiencies and improve cemetery operations. In response to Public Law 111-339, GAO assessed several areas, including (1) actions taken to improve contract management and oversight, (2) the Army’s efforts to address identified management deficiencies and provide information and assistance to families regarding efforts to detect and correct burial errors, and (3) factors affecting the feasibility and advisability of transferring jurisdiction for the Army’s national cemeteries to the Department of Veterans Affairs (VA). The information in this testimony summarizes GAO’s recent reports on Arlington contracting (GAO-12-99) and management (GAO-12-105). These reports are based on, among other things, analysis of guidance, policies, plans, contract files, and other documentation from the Army, Arlington, and other organizations, as well as interviews with Army and VA officials. GAO identified 56 contracts and task orders that were active during fiscal year 2010 and the first three quarters of fiscal year 2011 under which contracting offices obligated roughly $35.2 million on Arlington’s behalf. These contracts supported cemetery operations, construction and facility maintenance, and new efforts to enhance information-technology systems for the automation of burial operations. The Army has taken a number of steps since June 2010 at different levels to provide for more effective management and oversight of contracts, including establishing new support relationships, formalizing policies and procedures, and increasing the use of dedicated contracting staff to manage and improve its acquisition processes. However, GAO found that ANCP does not maintain complete data on its contracts, responsibilities for contracting support are not yet fully defined, and dedicated contract staffing arrangements still need to be determined. The success of Arlington’s acquisition outcomes will depend on continued management focus from ANCP and its contracting partners to ensure sustained attention to contract management and institutionalize progress made to date. GAO made three recommendations to continue improvements in contract management. The Department of Defense (DOD) partially concurred and noted actions in progress to address these areas. The Army has taken positive steps and implemented improvements to address other management deficiencies and to provide information and assistance to families. It has implemented improvements across a broad range of areas at Arlington, including developing procedures for ensuring accountability over remains and improving its capability to respond to the public and to families’ inquiries. Nevertheless, the Army has remaining management challenges in several areas—managing information-technology investments, updating workforce plans, developing an organizational assessment program, coordinating with key partners, developing a strategic plan, and developing guidance for providing assistance to families. GAO made six recommendations to help address these areas.
DOD concurred or partially concurred and has begun to take some corrective actions.

A transfer of jurisdiction for the Army's two national cemeteries to VA is feasible, based on historical precedent involving the national cemeteries and on examples of other reorganization efforts in the federal government. However, several factors may affect the advisability of making such a change, including the potential costs and benefits, potential transition challenges, and the potential effect on Arlington's unique characteristics. In addition, given that the Army has taken steps to address deficiencies at Arlington and has improved its management, it may be premature to move forward with a change in jurisdiction, particularly if other options for improvement exist that entail less disruption. GAO identified opportunities for enhancing collaboration between the Army and VA that could leverage their strengths and potentially lead to improvements at all national cemeteries. GAO recommended that the Army and VA develop a mechanism to formalize collaboration between their organizations. DOD and VA concurred with this recommendation. In the reports, GAO made several recommendations to help Arlington sustain the progress made to date.
Manufactured homes differ from site-built homes in how they are constructed, classified, financed, and appraised, with many differences resulting from the home's status as either real or personal property. Manufactured home parks have a variety of ownership models, ranging from sole to corporate ownership and including cooperative and nonprofit ownership as well. FHA's Title I program dates to 1969; since then, it has served primarily low-income individuals, and the majority of the lending has been geographically concentrated. The National Manufactured Housing Construction and Safety Standards Act of 1974 set a national building code for the construction of manufactured homes, known as the HUD Code, which became effective on June 15, 1976. For the purposes of this report, we define manufactured homes as factory-built housing units designed to meet the HUD Code. Manufactured homes can be single-wide, double-wide, or multi-wide (see fig. 1). The federal standards regulate manufactured housing design and construction, strength and durability, transportability, fire safety, and energy efficiency. Units constructed and completed prior to June 15, 1976, are not considered HUD-approved and generally are considered mobile homes. Every home built to the HUD Code is identified with a red metal tag, known as the HUD certification label. This label distinguishes manufactured homes from modular homes. Both types of homes are factory-built, but modular home "modules" are assembled on a site. Unlike manufactured homes, which are federally regulated under a national building code, modular homes must meet the state, local, or regional building codes in effect where the home is to be sited. Finally, site-built housing is constructed on a lot and must meet local building codes (see table 1). Unlike site-built homes, which are titled as real property and usually financed through a mortgage, a manufactured home may be financed as personal property or as real property. When a homebuyer purchases a manufactured home without tying the purchase to land and does not title the home as real property, the home is generally considered personal property, or chattel, which denotes property that is movable and personal, such as an automobile or furniture. Private sources—such as national consumer-finance companies and manufactured home lending specialists, who work directly with manufactured home dealers, as well as FHA Title I approved lenders—provide home-only or personal property financing, which is more akin to a consumer loan such as an automobile loan than to a mortgage. Typically, these loans have higher interest rates than mortgages because of factors such as quick credit approval and their availability to those with marginal credit histories. To begin the process, a customer submits a credit application to the manufactured home lending specialist, who may or may not be affiliated with the dealership. The credit application also may be sent to a local bank. The lender reviews the applicant's credit and decides whether to approve a loan. Manufactured homes not considered real property do not undergo market-based appraisals. Instead, they undergo a loan-to-invoice appraisal, in which the manufacturer's certified invoice, in effect, substitutes for an appraisal.
In contrast, when a manufactured home is attached to the underlying land by a permanent foundation and the home and the land are treated as a single real estate package under state law, the home is generally considered real property, and borrowers can obtain real estate mortgages, including conventional and government-assisted mortgage financing, through traditional mortgage lenders. Home and land financing for manufactured homes is similar to conventional mortgage lending for site-built housing. Manufactured homes that are financed using a conventional real estate mortgage undergo an appraisal that factors the location into the appraised value and incorporates the prices of comparable manufactured homes. Manufactured homes can be placed either on private property, where the homeowner typically owns the land, or in a manufactured home park. In a manufactured home park, also known as a mobile home park or a land-lease community, owners of manufactured homes pay rent for the land underneath the homes in addition to the loan payments they make for the units (the homes). The park owner typically provides sewer, water, and electrical systems and landscaping and maintains the roads and other common areas. Manufactured home parks have a variety of ownership models. Investors, ranging in size from small family operations to large conglomerates that own several properties across the country, own most of the manufactured home parks. Tenants of these parks may or may not have a lease and have no control over rent increases. According to officials we interviewed, in states such as Florida, California, and New Hampshire, resident-owned communities are more prevalent; that is, park tenants have collectively purchased their communities by forming either for-profit or nonprofit cooperative corporations. Cooperative ownership allows residents to control the land by buying memberships or shares in the corporation and gives them more control over increases in membership dues. Another ownership model involves a land trust, typically run through a nonprofit organization, in which the nonprofit owns the land and protects it against the possibility of sale or foreclosure. FHA first insured loans for manufactured housing in 1969, under a program that came to be known as the Title I Manufactured Home Loan Program. The program was created to reduce the risk to lenders through insurance or a guarantee and to encourage lenders to finance manufactured homes, which had traditionally been financed as personal property through comparatively high-interest, short-term consumer installment loans. Under Title I, FHA can guarantee loans for manufactured homes, for manufactured homes and the property on which they are located, or for the purchase of a manufactured home lot. FHA insures Title I manufactured home loans under the General Insurance Fund, which is supported by lenders' insurance premiums (currently an annual premium of 1 percent, based on the initial loan amount). Since 1998, three lenders have originated the majority of Title I loans. Almost all Title I loans are home-only loans rather than home-and-land or land-only loans. In 2005, FHA Title I manufactured home lending accounted for only 2.8 percent of the personal property loan market; conventional lending accounted for the remainder. According to data from FHA, from 2004 to mid-2007, 66 percent of FHA Title I borrowers were 34 years or younger, compared with 2.7 percent who were 65 years or older.
From 2004 to mid-2007, the majority (73 percent) of the borrowers had a monthly income from $1,000 to $3,000 (or approximately $12,000-$36,000 annually). From 1990 to 2005, the majority of FHA Title I lending was in southern states. Twenty states, primarily in the South, Southwest, and Midwest, received more than 85 percent of the FHA Title I loans (see fig. 2). FHA's Insurance Operations Division administers the Title I program, as well as a property improvement program. The majority of the staff and budget allocations are for the property improvement program. In fiscal year 2006, the division had a staff of nine and a total budget of $1.1 million, approximately $350,000 of which supported the manufactured home loan program. Available data on the geographic and demographic characteristics of manufactured homes and their owners indicate that most manufactured homes were located in rural areas of the South and were occupied by lower-income earners who owned, rather than rented, the homes. The market for new manufactured homes declined significantly from 1996 to 2005, but the homes that were purchased were larger and more often placed on private property. Although limited data were available on the number of manufactured home parks, regulatory, industry, and consumer officials from seven of the eight states in which we conducted interviews told us that manufactured home parks were closing because rising land values were driving redevelopment. Housing costs for manufactured homes were lower than costs for other housing types; however, the costs of moving manufactured homes were relatively high, and options for placing homes in new locations were few, which affected owners' mobility. Manufactured homes were located in every state but were most often found in rural areas. According to 2000 Census data, manufactured homes were more concentrated in rural areas, particularly in the South and desert Southwest, as a share of total housing units (see fig. 3). In 2005, according to data from the American Housing Survey, approximately 6 percent of occupied homes in the United States were manufactured homes. The majority of occupied manufactured homes (68.5 percent) were located in rural areas, while 31.5 percent were found in suburban areas and central cities. State, industry, and consumer officials in more than half of the states we reviewed also told us that manufactured homes were more likely to be located in either rural or suburban parts of their states. Compared regionally, manufactured homes represented a larger share of occupied homes in the South than in other areas of the nation. For instance, 10 percent of occupied housing in the South consisted of manufactured homes, compared with 6 percent in the West, 5 percent in the Midwest, and 2 percent in the Northeast (see fig. 4). Overall, in 2005, 57 percent of occupied manufactured homes were located in the South, 19 percent in the West, 17 percent in the Midwest, and 7 percent in the Northeast. Our analysis of 2005 American Housing Survey data showed that more occupants of manufactured homes were owners than renters (see fig. 5). A majority (79.5 percent) of those living in manufactured homes owned their homes, compared with 17.4 percent who rented. Although those who lived in manufactured housing were more likely to own their homes, they tended to have lower annual incomes (see fig. 6).
More owners of single-wide and double-wide homes earned $49,999 or less, compared with owners of site-built homes, who were more likely to earn $50,000 or more. For example, in 2005, of all owners of single-wide homes, 15.1 percent earned $10,000 or less annually and 23.6 percent earned from $10,000 to $19,999. In comparison, 6 percent of owners of site-built homes earned $10,000 or less and 8.3 percent earned from $10,000 to $19,999. Almost half of all owners of manufactured homes earned less than $30,000 in 2005 (see fig. 7). More specifically, 49.4 percent of owners of manufactured homes earned less than this amount, compared with 23.4 percent of owners of site-built homes. Officials we interviewed from six states told us that owners of manufactured homes were more likely to be low-income individuals. Apartment renters also were proportionally lower-income than owners of site-built homes, with 56.8 percent earning less than $30,000. The total number of new manufactured homes sold decreased from 1996 to 2005. According to Census data from the Manufactured Housing Survey, 332,000 new manufactured homes were sold in 1996, compared with 118,000 sold in 2005, a decrease of 64.5 percent. California and Florida had the highest number of new manufactured home units sold in 2005, a change from 1996, when North Carolina and Texas reported the highest numbers sold. According to officials we interviewed, several factors may have contributed to the decrease in manufactured home sales, such as lower interest rates available for site-built homes, a decrease in available financing for manufactured homes due to consolidation in the industry, and a large number of repossessions that increased the supply of manufactured homes on the market. For example, industry officials explained that, as a result of the decrease in financing options for manufactured homes, manufacturers lowered production of manufactured homes and instead built more modular homes, for which more financing options were available. Modular homes can often be built in the same factory as manufactured housing but are not required to meet the HUD Code. Although consumers purchased fewer new manufactured homes in 2005 than in 1996, according to Census data from the Manufactured Housing Survey, they bought more double-wide or multisection homes. In 2005, 76 percent of the manufactured homes purchased were double-wides or larger, compared with 51 percent in 1996 (see fig. 8). However, FHA data show that 82 percent of the loans originated through FHA's Title I Manufactured Home Loan program in fiscal years 2005 and 2006 were for the purchase of single-wide homes. Officials we interviewed attributed this trend to FHA loan limits that were too low to enable borrowers to purchase larger, multisection homes using guaranteed loans. Manufactured homes were more likely to be placed on private property. From 1996 to 2005, more new manufactured homes were placed on private property than in manufactured home parks, even though placements overall (both in parks and on private property) decreased over that period. According to data from the Manufactured Housing Survey, in 1996, 229,790 new manufactured homes were placed on private property, compared with 88,420 placed inside manufactured home parks. In 2005, 80,757 manufactured homes were placed on private property and 28,850 were placed inside manufactured home parks (see fig. 9).
Because FHA does not collect placement data, it is unclear where manufactured homes purchased with FHA Title I loans were located—on owned or leased land. However, FHA officials told us that, based on their review of lender insurance claims, most Title I loans are for manufactured homes on leased land. Similarly, officials we interviewed from five states reported that more placements were occurring on private property than in manufactured home parks. The officials cited a variety of reasons why new manufactured homes were more likely to be placed on private property. First, the lack of financing available for manufactured homes to be placed on leased land decreased the likelihood of units being placed in a manufactured home park. For example, one official stated that the lack of manufactured home financing resulted in more manufactured homes being placed on private land because financing was more readily available for homes considered real property. Second, the increase in the size of manufactured homes to double-wide or multisection units could prevent the homes from fitting into park spaces designed for smaller units. Third, both industry and consumer officials suggested that the quality and style of new manufactured homes had improved, allowing them to blend in with site-built homes on private property. Developers have created affordable housing opportunities by using manufactured homes on infill lots in urban areas or subdivisions. For example, in Seattle, a community development corporation used manufactured homes to create affordable single-family and town homes in a development called Noji Gardens. In Kentucky, Frontier Housing, a nonprofit affordable housing developer, built affordable housing communities using a combination of manufactured, modular, and site-built homes (see fig. 10). Data were not available on the number of manufactured home parks because states define and license them differently (see fig. 11). For example, New Hampshire defines a manufactured home park as a parcel of land that accommodates two or more homes, while in Florida certain provisions apply only to manufactured home parks with 10 or more homes. Moreover, most states do not require manufactured home parks to be licensed; this typically is done at the local level. As a result, data on the number of manufactured home parks in each state and at the national level are limited. Anecdotally, several officials we interviewed suggested that the creation of new manufactured home parks was uncommon, with few parks having been developed since the early 1980s. The officials suggested that local zoning restrictions prevented manufactured home parks from being built and that localities often preferred to promote other land uses to attract development with greater potential to raise the tax base. Officials from most of the states we reviewed told us that most manufactured home park closings were caused by rising land prices and subsequent pressure to redevelop the sites. Although anecdotal data indicate that a number of manufactured home parks have closed, the extent to which closures have occurred is unknown. Through a database search of national and local newspapers, we found that closures had occurred in 18 states between May 2005 and May 2007. In some cases, other types of housing (such as condominiums, town homes, and single-family homes) were built on the former park sites, while in other cases the parks were converted to commercial use.
A few parks also were converted from investor ownership to resident ownership. In some instances, local municipalities tried to curb the number of closures by placing moratoriums on the sale of parks to developers. Manufactured homes can be more affordable than other housing types. According to 2005 American Housing Survey data, monthly housing costs for manufactured homes generally were lower than for site-built homes (see fig. 12). More than half of the owners of manufactured homes (54.7 percent) had monthly housing costs of $100 to $499. In comparison, a little more than a quarter (27.4 percent) of the owners of site-built homes had monthly housing costs in this range. The costs of moving manufactured homes can be high, and, according to state, industry, and consumer officials we interviewed, this expense was one reason why owners moved their homes infrequently. Officials explained that the price of a move could range from $3,000 to $25,000. According to officials, a variety of factors influence moving costs, including the distance of the move and the size of the home. In addition, moves involve set-up and dismantling costs, such as utility and other work to prepare the land. Several officials suggested that homeowners, particularly those on fixed incomes, simply lacked the financial means to move their manufactured homes. As discussed later, in cases of park closures, some states have relocation funds, and sometimes property owners or developers might provide funds for displaced residents to move their manufactured homes, assuming the displaced residents can find a place to move them. Borrowers with loans for real property are generally entitled to a broader set of protections under a federal law governing the loan settlement process than borrowers with personal property loans. For instance, borrowers taking out loans for real property receive uniform settlement statements, as well as escrow statements. Additionally, although state laws governing foreclosure (real property) and repossession (personal property) vary, consumer protections are generally broader for foreclosure than for repossession. Finally, tenant protections—involving issues such as the length of leases for land, requirements for notice and frequency of rent increases, notice of eviction, and park closures—vary across the eight states we reviewed, as does state aid for displaced residents of parks that closed. Generally, borrowers with personal property loans are entitled to fewer consumer protections under federal law than borrowers with real property loans. Under the Truth in Lending Act (TILA), borrowers (including Title I borrowers) who purchase homes using personal property loans receive certain disclosures. For instance, creditors generally are required to disclose the amount financed; the finance charge, including the finance charge expressed as an annual percentage rate; the number, amounts, and due dates or periods of payments; and the provisions for new payment, late payment, or prepayment. The disclosures are intended to make borrowers aware of the cost of the loan and the policies for repaying it, so that lenders cannot charge arbitrary rates or implement policies that have not been disclosed to the borrower.
Borrowers who take out loans for the purchase of real property are entitled to additional protections under the Real Estate Settlement Procedures Act (RESPA), which is intended to ensure that consumers receive information on the nature and costs of the real estate settlement process and are protected from unnecessarily high settlement charges caused by certain abusive practices. RESPA also protects Title I borrowers or other buyers of manufactured homes if their federally related mortgage loans are secured by land on which a manufactured home sits or on which one will be placed within 2 years. Borrowers are entitled to receive a good faith estimate of settlement costs within 3 days of submitting a loan application. At settlement, RESPA requires a uniform settlement statement that shows all charges in connection with the settlement, both before and at the time of settlement. RESPA also requires an initial escrow statement that itemizes the estimated taxes, insurance premiums, and other charges expected to be paid from the escrow account in the first year. RESPA generally prohibits kickbacks and unearned fees for settlement services and charges for the preparation of certain documents. Additional disclosure requirements—an annual escrow statement that summarizes deposits and payments and a servicing transfer statement if the loan is transferred to a different lender—apply after the loan is settled. State law generally provides more consumer protections in connection with foreclosures of real property than in connection with repossessions of personal property; however, borrowers in certain federally insured loan programs receive additional protections. Depending on state law and the mortgage contract, the two most common methods of foreclosure are judicial foreclosure and nonjudicial foreclosure by power of sale. The level of protection afforded the homeowner in a foreclosure varies by state. All states let the homeowner redeem the mortgage by paying off the total outstanding debt before the sale. However, only some states let a homeowner cure a default by paying the installments due and the costs to reinstate the loan prior to the resale of the home. Some states also may allow the homeowner time to redeem the property from the purchaser, which often is the lender, after the foreclosure sale by paying the purchase price for the home plus related costs and interest. For example, in North Carolina, a homeowner has 10 days to redeem the property after the foreclosure sale. Personal property loans typically are subject to repossession rather than foreclosure. As with real property, the procedures can be judicial or nonjudicial. Generally, creditors use judicial procedures to repossess manufactured homes. The Uniform Commercial Code, a model code adopted by states in various forms, also authorizes a secured party, upon default, to take possession of the collateral without judicial process—self-help repossession—if that can be done without breach of the peace. Because it may be difficult to avoid breaching the peace when repossessing a manufactured home, this process is not likely to be used often. Time frames and notice requirements for repossession can be less stringent than the corresponding requirements for foreclosure. For example, the Uniform Commercial Code does not prevent a creditor from immediately accelerating the note and repossessing the collateral; however, some states do impose restrictions on acceleration and repossession.
Of the eight states we reviewed, five have provisions that permit acceleration or repossession in certain transactions only when the borrower is in default or in breach of the agreement, or when contract terms permit it under certain conditions. In addition, some state statutes provide a right to cure a default prior to the acceleration or repossession of a manufactured home, and in certain cases for other consumer transactions. However, Title I Manufactured Home Loan borrowers are entitled to additional protections under FHA regulations. For instance, lenders may not begin repossession or foreclosure proceedings on a property securing a Title I loan in default unless the loan has been serviced in a timely manner and with diligence, and reasonable and prudent measures have been taken to get the borrower to bring the loan account current. Title I borrowers, like borrowers in certain other federally insured loan programs, are entitled to receive written notice of their default. For Title I borrowers, this notice includes a description of the lender's security interest; a statement of the nature of the default and the amount due; a demand upon the borrower to either cure the default or agree to a modification agreement or a repayment plan; and a statement that if the borrower fails to either cure the default or agree to a modification or a repayment plan within 30 days of the notice, the maturity of the loan is accelerated and full payment is required. Further, for home loans guaranteed by HUD, the Department of Veterans Affairs, or the Rural Housing Service, a lender cannot start foreclosure proceedings for a default in payment until at least three full monthly installments are past due. Tenant protection issues affecting owners of manufactured homes include the length of leases for land, rent increases, requirements for eviction, and park closures. We analyzed state laws in eight states and found varying written lease requirements (see fig. 13). For instance, five of the eight states have provisions for written lease requirements. The lease terms range from any amount of time agreed upon by the landlord and tenant to a minimum of 2 years. However, officials with whom we spoke in some states suggested that enforcing this requirement was difficult. Notice periods for rent increases range from 60 to 90 days; however, some states—Georgia, Missouri, North Carolina, and Texas—have no notice requirements for rent increases. States also qualify the rent increase provisions in varied ways. Arizona provides that rents generally can increase only upon renewal or expiration of the lease and that the owner must give 90 days' notice. New Hampshire requires 60 days' notice to raise rents but is silent on the number of times rent can increase in a given year. Industry and consumer officials suggested that this inability to control monthly payments created additional risk for both lenders and borrowers. Unlike owners of site-built homes, owners of manufactured homes living on leased land can be subject to eviction for nonpayment of rent or noncompliance with the terms of lease agreements. Additionally, nonpayment of rent can be a signal that the homeowner is behind on loan payments as well. All of the states we reviewed require good cause for eviction; however, the amount of time that the affected party has to cure the cause for the eviction (that is, to bring late rent payments current) ranges from 7 to 30 days from receipt of notice.
An owner of a manufactured home on leased land who fails to cure could be required to move the home from the manufactured home park. However, as mentioned earlier, such a move may be cost-prohibitive. Homeowners also can be forced to move because parks close. Notice requirements for residents who had to move for this reason vary from 120 to 545 days in the states we reviewed (see fig. 14). But the states we reviewed also have a range of tools to aid displaced owners of manufactured homes, such as offering park residents the right of first refusal (the first opportunity to bid on the purchase of the park) and offering relocation funds or tax credits for displaced residents. For example, one of the eight states we reviewed offers residents the right of first refusal. Although Arizona, New Hampshire, and Oregon do not have right of first refusal laws, these states do have laws that provide notice of a park sale and time in which to prepare a bid. In New Hampshire, state law requires that both the park tenants and the state financing agency receive notice when a manufactured home park is sold. The New Hampshire Community Loan Fund then works with the park tenants to form a nonprofit cooperative through which the tenants would own both the land and their homes. Three of the states we reviewed—Arizona, Florida, and Oregon—have a relocation fund or tax credit for displaced residents. Some interviewees suggested that in some park closures, especially those with substantial publicity, the developer or buyer of the land would partially compensate the displaced residents. Although a few states offer relocation funds for displaced manufactured home residents, officials from all of the states we reviewed cited potential barriers to finding a place to relocate the homes, such as a lack of vacancies in nearby parks, age requirements that park owners or municipalities place on units, and the costs associated with moving and relocating homes. For instance, many parks will not allow homes built before 1976, and localities in some states may have laws prohibiting the placement of homes that are more than 5 or 10 years old. Further, in states such as Florida, wind zone requirements for certain areas may prevent the relocation of a home not rated (certified) to withstand winds of certain speeds. In addition to costs, officials also cited potential damage to the home as a barrier to moving it. Legislative proposals to change the Title I program would increase loan limits, insure each loan made, incorporate stricter underwriting requirements, and establish up-front premiums and adjust annual premiums; however, the potential effects of these changes on the program and the insurance fund are unclear. According to some FHA and industry officials, the potential benefits for borrowers include larger loans with lower interest rates to buy larger homes. Borrowers also could gain increased access to financing, since more lenders would be likely to participate in the program if individual loans could be insured. Industry officials also identified several factors unique to manufactured home lending that can increase its risks, such as the decreased ability of borrowers to build equity, the location of the home (on owned or leased land), and the cost of recovery to the lender after defaults.
To illustrate the effects of the proposed changes, we developed an approach that used variations of the risk factors unique to manufactured home lending, as well as commonly used predictors of loan performance, such as credit scores, to construct default scenarios. Our analysis suggests that loans for homes on leased land and loans to borrowers with poor credit have greater risk of default. And, in all instances where borrowers had moderate or high default risk, our scenarios show the fund experiencing a loss. However, FHA has not yet assessed the risks associated with the proposals or detailed changes to its underwriting requirements. The agency also has not yet collected the data needed to help assess risks, such as credit scores and land type. FHA officials explained that the agency had not done so because the Title I program was low-volume and because they were unsure whether the legislation would pass. FHA officials said that they chose to devote their resources to changing the much larger Title II program. As a result, the effects of the proposed changes to the Title I program are unclear. Several bills introduced in Congress from 2005 to 2007 detailed proposed changes to the Title I Manufactured Home Loan program, and the majority of the bills contained similar provisions. For example, all would increase the loan limits of the program and index them annually. In the latest bill, which passed the House in May 2007, the loan limit for a home-only loan would increase from $48,600 to $69,678. For a land-only loan, the loan limit would increase from $16,200 to $23,226, and for a combined home-and-land loan, from $64,800 to $92,904. All but one of the bills would change the mechanism that FHA uses to account for its insurance risk. Currently, FHA accounts for its insurance risk by insuring only a portion (10 percent) of a lender's Title I manufactured home loan portfolio. For example, if a lender's portfolio in a given year totaled $1,000,000, FHA's guarantee to the lender would not exceed $100,000. The proposed legislation would remove the portfolio cap and insure each loan on an individual basis. However, the current risk-sharing arrangement on individual loans between FHA and lenders (under which FHA covers 90 percent of the loss if there is a claim on a defaulted loan and the lender absorbs the remaining 10 percent) would not change. Moreover, FHA would be required to establish specific underwriting criteria to ensure the financial soundness of the program within 6 months of the legislation's passage. Currently, Title I regulations require a lender to exercise prudence and diligence in underwriting a loan to determine whether the borrower is an acceptable credit risk, such as requiring lenders to conduct a credit investigation and obtain a credit report. But the Title I regulations do not contain provisions that address other factors specific to manufactured homes, such as whether the home is placed on owned or leased land. For Title I, FHA reviews the lender's underwriting only when a default occurs within the first 2 years of the loan and the lender submits a claim for insurance. FHA then has 2 years to deny a claim even after it has certified the claim for payment. The proposed legislation also would require FHA to provide incontestable insurance endorsements, meaning that no claim could be denied because of underwriting issues, absent fraud or misrepresentation.
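To make the proposed coverage change concrete, the following sketch contrasts the current portfolio cap with per-loan insurance, using the 10 percent cap and the 90/10 loss split described above; the portfolio size and per-loan losses are hypothetical values chosen for illustration.

```python
# Sketch of FHA Title I coverage mechanics (hypothetical amounts).
# Current law: FHA's total payout to a lender is capped at 10 percent of the
# lender's Title I portfolio. Proposed: each loan is insured individually,
# with the existing 90/10 loss split between FHA and the lender unchanged.

PORTFOLIO_CAP_RATE = 0.10  # cap on total claims as a share of the portfolio
FHA_LOSS_SHARE = 0.90      # FHA covers 90 percent of each claim's loss

def fha_payouts(losses, portfolio_total, capped=True):
    """Total FHA payout for a sequence of per-loan losses."""
    cap = PORTFOLIO_CAP_RATE * portfolio_total if capped else float("inf")
    paid = 0.0
    for loss in losses:
        claim = FHA_LOSS_SHARE * loss    # lender absorbs the other 10 percent
        claim = min(claim, cap - paid)   # nothing more is paid once the cap is hit
        paid += max(claim, 0.0)
    return paid

# Hypothetical $1,000,000 portfolio in which five loans each lose $60,000.
losses = [60_000] * 5
print(fha_payouts(losses, 1_000_000, capped=True))   # 100000.0: payouts stop at the cap
print(fha_payouts(losses, 1_000_000, capped=False))  # 270000.0: every claim paid at 90 percent
```

The difference between the two totals on this hypothetical portfolio illustrates the additional liability the government would assume, the concern that FHA and CBO raise later in this report.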
All but one of the bills would establish up-front mortgage insurance premiums, not to exceed 2.25 percent of the loan amount, and would set annual insurance premiums of up to 1 percent of the remaining unpaid principal balance rather than of the original loan amount, as stipulated in current law. The remaining bill would give the agency flexibility to establish premiums through risk-based pricing. If such a provision became law, FHA officials told us, they would provide a range of premiums based on historical analysis of FHA loan data. Furthermore, all but one of the bills would require the program to operate at a negative subsidy—that is, without cost to the government. Currently, the Title I program operates at a positive subsidy, meaning that the present value of estimated cash outflows (such as claims) from FHA's General Insurance Fund exceeds the present value of the estimated cash inflows (such as borrower premiums). According to the Federal Credit Supplement, FHA's Title I Manufactured Home Loan program is expected to require a $487,000 subsidy in fiscal year 2007 and a $76,000 subsidy in 2008. FHA officials stated that it is unlikely the program could generate negative subsidies, because of the proposed premium structure and the potential for depreciation of the assets underlying the loans (the manufactured homes). Some of the bills also would require that claims and property disposition for the Title I program be handled similarly to the Title II program, under which FHA disposes of the used homes once the lender receives insurance benefits. FHA opposes this change and proposes to continue having the lenders dispose of the property. As discussed later, recovery costs for manufactured housing are higher than for other types of housing, and lenders require strong recovery practices, such as a network for selling homes in place, to recoup more than half the loan balance after a default. FHA and lending industry officials with whom we spoke cited benefits that could accrue to borrowers, the industry, and the Title I program if the proposed legislation were enacted. These officials suggested that increasing the loan limits would allow more borrowers to buy manufactured homes, including larger homes, at lower interest rates. As noted earlier, in recent years buyers have shown a strong preference for double-wide or multisection units. FHA, Ginnie Mae, and lending industry officials also suggested that increasing the limits and eliminating the portfolio cap would increase lender participation in and demand for Title I loans, which in turn could increase competition and decrease borrower interest rates. In particular, Ginnie Mae officials stressed that eliminating the portfolio cap would be central to their decision to expand their participation in the secondary market for manufactured home loans. This, in turn, could provide more liquidity to lenders and greater access to credit for borrowers. Ginnie Mae was the main guarantor of securities backed by FHA Title I loans on the secondary market until 1989, when it placed a moratorium on new manufactured housing issuers because of the high risks associated with the product. Currently, Ginnie Mae has four lenders in its manufactured home program, only one of which is active. According to Ginnie Mae officials, the agency imposed the moratorium because structural features of the Title I program, such as the portfolio cap and the nonspecific underwriting requirements, exposed it to greater risk and losses.
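As a rough illustration of the premium-basis change proposed in these bills (annual premiums assessed on the declining unpaid balance rather than on the original loan amount), the sketch below compares total premium income under the two bases for a single level-payment loan. The loan amount, interest rate, and term are assumptions chosen for demonstration, not FHA figures.

```python
# Sketch comparing the two annual-premium bases (illustrative loan terms).

def remaining_balance(principal, annual_rate, term_years, years_elapsed):
    """Unpaid balance of a level-payment loan after a given number of years."""
    r = annual_rate / 12
    n = term_years * 12
    k = years_elapsed * 12
    # Standard amortization identity for the outstanding balance.
    return principal * ((1 + r) ** n - (1 + r) ** k) / ((1 + r) ** n - 1)

principal, rate, term = 60_000, 0.09, 20  # assumed home-only loan

bases = [
    ("original amount (current law)", lambda y: principal),
    ("declining balance (proposed)",
     lambda y: remaining_balance(principal, rate, term, y)),
]
for label, basis in bases:
    total = sum(0.01 * basis(y) for y in range(term))  # 1 percent charged each year
    print(f"{label}: total premiums ${total:,.0f}")
```

Because the balance of a level-payment loan declines slowly at first, the proposed basis reduces premium income mostly in the later years of the loan.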
According to Ginnie Mae, once claim amounts were reached on troubled portfolios, lenders had little incentive to continue servicing the portfolios and making payments to security holders. Ginnie Mae then sustained substantial losses when it assumed the portfolios of lenders that had reached FHA coverage limits. In addition, one lending official suggested that more stringent underwriting requirements would benefit the industry, which is still recovering from the defaults and repossessions of the early 2000s. Industry officials suggested that federal agencies, such as FHA, and the government-sponsored enterprises could help facilitate changes in the industry, such as improving underwriting requirements. However, according to FHA and the Congressional Budget Office (CBO), eliminating the portfolio cap could significantly increase the amount of claims paid and expand the government's liability under the program, since each loan would be insured on an individual basis. FHA officials also said that they believed risk-based pricing would help compensate for FHA's insurance risk. The extent of any risk reduction, and which borrowers would be excluded, would depend on underwriting requirements, such as the ranges of credit scores allowed. Industry officials identified several risk factors unique to manufactured home lending, such as the decreased ability of borrowers to build equity, the lack of consistency and transparency in appraising and pricing homes, the location of the home (on owned or leased land), the cost of recovery to the lender after defaults, and issues related to the installation of the home. Based on our review of literature and interviews with lending industry officials, owners of manufactured homes generally have less ability to build equity than owners of site-built homes. As assets, manufactured homes can depreciate in value after purchase, much as automobiles do. For example, officials explained that manufactured homes bought with personal property loans generally depreciated in value if not attached to land. The officials emphasized that, even after years of making payments, a borrower could choose to default on a loan if the home was worth less than the loan balance. In general, manufactured homes are appraised differently depending on whether they are considered real or personal property. When a home is placed on real property, its value is determined based on comparable homes in the vicinity. When a home is considered personal property, its value is based on the price that the manufactured home dealer has set for the unit. However, lending officials with whom we spoke suggested that prices varied by dealer and that the pricing of manufactured homes was not transparent because dealers are not required to display a manufacturer's suggested retail price. In addition, states record little or no sales data. Further, the officials suggested that this lack of transparency resulted in some consumers' overpaying for manufactured homes, particularly when dealers presented prices in terms of monthly payments. One lender identified California as a model state because it requires all manufactured home purchases, whether real or personal property, to go through escrow, which helps to monitor sales prices. This lender's loans performed significantly better in California than in other states, and the lender suggested that California's transparent pricing was one of the main reasons.
Furthermore, industry officials suggested that the location of the home on owned or leased land is a predictor of loan performance. According to our review of literature and interviews with industry officials, loans for manufactured homes placed on owned land (titled as real property) tend to perform better than loans for homes on leased land (titled as personal property), in part because homes on owned land tend to appreciate more. Some officials suggested that appreciation could occur on leased land but would be dependent on the location and the amenities available (such as pools, club houses, or golf courses). Many lending industry officials suggested that, in contrast to other housing types, the cost of recovery for lenders when a loan defaulted was greater for manufactured homes. For instance, for manufactured homes, the costs to the lender of a foreclosure or repossession (which may involve moving the home) are proportionately higher relative to the loan amount than for more expensive site-built housing. Some states have lien holder statutes that may help lenders protect their collateral in cases of borrower default by requiring that lenders be notified in cases of abandonment or eviction. Of the states we reviewed, Arizona, New Hampshire, Oregon, and Texas have such statutes. Some lending industry officials suggested that losses and high recovery costs could be mitigated by selling the home in place. They suggested that lease agreements between lenders and community owners should ensure that manufactured homes located on leased land could be sold in place if borrowers defaulted. According to some of the lending officials we interviewed, the size of the home also was a predictor of performance. Loans for larger manufactured homes (double-wides or multisection units) tend to perform better than loans for single-wides. The officials with whom we spoke suggested that these loans performed better because those borrowers tended to have higher incomes. However, the majority of Title I loans have been for single-wides, which, according to FHA and industry officials, reflects the current loan limits. In addition, many industry officials suggested that the type and quality of a home's installation affect its value and that, in theory, states with stronger inspection programs help maintain the home's value for the consumer. The Manufactured Housing Improvement Act of 2000 set standards for installation inspections across the country, but states continue to differ in how they monitor the installation of homes. Until recently, many states did not have a program to inspect the installation of manufactured homes. In our review of eight state installation programs, we found that the level of inspections varied by state (see fig. 15). For example, five of the eight states require 100 percent inspection (Arizona, Florida, New Hampshire, North Carolina, and Oregon). All of these states had installation programs in place prior to the implementation of the Manufactured Housing Improvement Act, except for New Hampshire, whose requirement went into effect in July 2006. The remaining states relied on state officials inspecting homes from at least one manufactured home installer (Georgia) or from 10 to 35 percent of manufactured homes (Missouri and Texas), although Georgia and Missouri changed their installation programs after the passage of the act. Prior to these changes, the two states inspected installations on a consumer-complaint basis.
Further, state programs differ in how they conduct installation inspections. For instance, Florida, New Hampshire, and North Carolina rely on local jurisdictions to conduct the inspections; Arizona and Oregon use a combination of state and local officials; and Georgia, Missouri, and Texas use only state officials. In the absence of available data on the creditworthiness of FHA borrowers and the location of the homes (on owned or leased land), we developed scenarios using assumptions based on various risk factors, such as the default risk of borrowers and the ability of lenders to recover losses. In addition, we considered the experience of FHA's Title I program since 1990 and of non-FHA personal property manufactured housing loans. For example, from 1990 to 2002, FHA's cumulative defaults, expressed as a percentage of originated loans, did not drop below 10 percent and exceeded 25 percent in 8 of the 13 years (see fig. 16). However, loans from 2003 to 2006 may not reflect the full default experience because they are recent loans, and lending industry officials explained that the peak default period for these types of loans generally occurs from the third to the fifth year. Non-FHA manufactured housing loans also had high cumulative losses, typically above 15 percent for loans originated between 1997 and 2001, although lower than FHA's cumulative losses. Our scenarios incorporate assumptions about factors such as annual default rates for different yearly intervals, loan interest rates, and loan terms. Once we established these parameters, we included additional factors, such as variations in the lenders' ability to recover their losses in cases of default and in the borrowers' insurance premium schedules (based on the premiums suggested in the proposed legislation). Our assumptions about default rates reflect an important characteristic of home-only manufactured housing loans: even after years of loan payments, a borrower may not have enough equity in the home to avoid a default in the face of adverse financial conditions or may choose not to pay off a loan if the home is worth less than the loan balance. Based on discussions with lending industry officials and our review of available manufactured home lending data, we assumed three variations of default: a low default experience, a moderate default experience, and a high default experience. In general, the low default experience would reflect conditions in which borrowers possessed good credit quality (credit score), lenders used high-quality underwriting requirements, and lenders' security interests in the collateral were well protected in terms of the factors associated with the preservation of value, such as placement of the home (owned versus leased land) and installation. The high default experience would reflect conditions in which borrowers had poorer credit quality, and collateral values and lenders' security interests also were lower. The following assumptions in our analysis were based on discussions with lending industry officials about possible recovery outcomes and possible legislative changes to FHA's up-front and annual premiums:
- Recovery rates: if lenders had a strong recovery program (which may include a good network of dealers who resell manufactured homes), they would achieve a net recovery of 50 percent per claim. Alternatively, we assumed a moderate recovery rate of 33 percent of the claim and a low recovery rate of 25 percent.
- Up-front premiums: a high up-front premium of 2.25 percent of the original loan amount and a low up-front premium of 1 percent of the original loan amount.
- Annual premiums: a high annual premium of 1 percent of the declining loan balance and a low annual premium of 0.5 percent of the declining loan balance.

To determine the potential impact on FHA's General Insurance Fund, we used the above assumptions to calculate the relationship between the amount and timing of both expected claims and premiums to FHA. Similar to a subsidy calculation, we estimated the present value of estimated cash outflows (such as claims) net of the present value of the estimated cash inflows (such as premiums) to FHA's General Insurance Fund. The results of our analysis show that in all instances where borrowers had moderate or high default risk, the fund experienced a loss—that is, the present value of estimated cash outflows exceeded the present value of cash inflows (see fig. 17). The size of the loss was determined by the lender's ability to recover its losses and by the premiums the borrower paid. For instance, in cases where the borrower paid high up-front and annual premiums (2.25 percent and 1 percent, respectively) and posed a moderate default risk, and the lender had a high net recovery rate (50 percent), the loss to the fund was less than 1 percent. However, when we varied the scenario to lower the lender's recovery rate (25 percent), the potential loss to the fund was 4.4 percent. Similarly, when the borrower paid low up-front and annual premiums (1 percent and 0.5 percent) and posed a moderate default risk, the losses ranged from 4.4 percent if the lender had a high net recovery rate to 8.5 percent if the lender had a low recovery rate. The fund had the potential to experience gains in instances where the borrower posed a low default risk, premiums were higher, and lenders had a higher probability of recovering losses. Our analysis also showed the potential for FHA's General Insurance Fund to experience a wide variation in the level of losses but little potential for gains. The results suggested that the greater risk of loss comes from borrowers who pose either moderate or high default risk. Typically, these are loans where the borrower does not have a high credit score and the property is located on leased land—in which case the lender's security interest may be uncertain because of the variability associated with rent increases, lease terms, and the potential for the manufactured home park to be sold. In addition, the amount of the loss was influenced by the amount of premiums paid. For instance, where borrowers paid the highest up-front and annual premiums, the loss was 11 percent in cases where the borrower also posed a high default risk and the lender had low recovery, compared with a 15 percent loss in instances where the borrower paid low up-front and annual premiums. However, because FHA does not currently collect data on credit scores or on where the property is located (owned or leased land), it is unclear how these scenarios would actually affect the General Insurance Fund. See appendix II for a more detailed discussion of our scenario analysis methodology.
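The sketch below outlines the mechanics of this kind of calculation: it discounts assumed annual claim outflows and premium inflows and reports their net present value as a share of the original loan amount, with positive values indicating a loss to the fund. The default-timing pattern, straight-line balance paydown, discount rate, and loan terms here are simplified assumptions for illustration only; the parameters we actually used are described in appendix II.

```python
# Minimal sketch of the subsidy-style net present value calculation described
# above (simplified, illustrative parameters; see app. II for the methodology).

DISCOUNT_RATE = 0.05  # assumed discount rate

def pv(cash_flows):
    """Present value of year-indexed cash flows (year 0 = origination)."""
    return sum(cf / (1 + DISCOUNT_RATE) ** t for t, cf in enumerate(cash_flows))

def net_cost(loan, upfront, annual, default_by_year, recovery, term=20):
    """PV of claims minus PV of premiums, as a share of the loan amount."""
    surviving = 1.0                      # share of loans still active and current
    premiums = [upfront * loan]          # up-front premium collected at origination
    claims = [0.0]
    for t in range(1, term + 1):
        defaulted = default_by_year.get(t, 0.0) * surviving
        surviving -= defaulted
        balance = loan * max(1 - t / term, 0)  # crude straight-line paydown
        premiums.append(annual * balance * surviving)
        # FHA pays 90 percent of the loss remaining after the lender's recovery.
        claims.append(defaulted * balance * 0.90 * (1 - recovery))
    return (pv(claims) - pv(premiums)) / loan

# Assumed default pattern peaking in years 3 to 5, per industry officials.
defaults = {1: 0.02, 2: 0.04, 3: 0.05, 4: 0.05, 5: 0.04, 6: 0.02, 7: 0.01}
share = net_cost(100_000, upfront=0.0225, annual=0.01,
                 default_by_year=defaults, recovery=0.50)
print(f"net cost share under these assumptions: {share:.1%}")
```

Varying the default, recovery, and premium inputs in a calculation of this form yields a grid of outcomes analogous to the scenario results discussed above.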
FHA has not yet assessed the effects of the proposed changes to the Title I program. More specifically, it has not developed criteria or models to assess the potential effects of the proposed premiums or risk-based pricing, nor has it developed specific underwriting requirements. Such assessments and requirements are central to the effective operation and oversight of a revised Title I Manufactured Home Loan program. Our internal control standards for federal agencies state that effective management involves comprehensively identifying risks as part of short- and long-term planning. Such planning would encompass the identification of risks posed by new legislation or regulations. The results of our scenario analysis also suggest that FHA could use modeling to illustrate, in a general way, potential gains and losses to its General Insurance Fund and that premium structures play a key role in determining these outcomes. Although the purpose of the Title I program is to serve low- to moderate-income families, it is unclear which borrowers a revised program would serve because FHA has not yet specified how it plans to compensate for risk, including how the premiums would be set. According to FHA officials, they do not plan to develop criteria for assessing proposed premiums or risk-based pricing until Congress approves the program. FHA officials told us that they have begun to analyze a range of up-front premiums and a maximum premium amount based on a historical analysis of receipts and claims but that they had not yet reached any conclusions. In addition, FHA has not analyzed the conditions under which the program could operate at a negative subsidy if the proposed changes were enacted. As mentioned earlier, FHA's Title I Manufactured Home Loan program is expected to require a $487,000 subsidy in fiscal year 2007 and a $76,000 subsidy in 2008. According to HUD officials, they expect to calculate the new subsidy rate for the 2009 budget based on projected defaults, interest and fees, and loan characteristics (such as loan maturity, default and recovery rates, and up-front and annual fees). CBO estimated that, if the legislation were enacted, FHA could achieve a near-zero subsidy for the Title I program, assuming default rates of 9.5 percent or lower. CBO also acknowledged that, because of the uniqueness of FHA's program and the lack of comparable programs in the private market, the potential costs of the program are uncertain. The results of our analysis suggest that, in almost all situations, there is potential for loss except when borrowers pose a low default risk (based on credit scores and other information). While credit score is one of the key factors used to determine default risk, FHA does not collect this information (discussed further below). FHA officials also stated that they have not yet developed specific underwriting requirements for a revised program. Although industry and FHA officials with whom we spoke discussed the unique risks of manufactured home loans, the information FHA provided us about changes to its underwriting criteria did not address the specific characteristics of manufactured housing. FHA officials did explain that they would like to establish review procedures for when a loan is submitted for insurance, similar to the procedures in FHA's Title II loan program. Under Title II, FHA conducts post-endorsement reviews of 10 percent of its loans, with FHA staff reviewing the lenders' underwriting decisions and calculations. In explaining the agency's limited assessments, FHA officials noted that the agency is focusing its resources on assessing the impact of proposed changes to the much larger Title II Mortgage Insurance program.
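FHA's Title II proposal, described next, suggests what risk-based pricing involves mechanically: a premium lookup over risk categories defined by borrower and loan characteristics. The sketch below illustrates the idea with a grid of credit-score and loan-to-value buckets; the bucket boundaries and premium rates are hypothetical placeholders, not FHA's actual categories or rates.

```python
# Hypothetical risk-based premium lookup by credit score and loan-to-value
# (LTV) ratio. Bucket boundaries and rates are placeholders, not FHA's.

from bisect import bisect_left, bisect_right

SCORE_CUTS = [580, 680]  # three credit tiers: below 580, 580-679, 680 and up
LTV_CUTS = [0.90]        # two LTV tiers: at or below 90 percent, above it
# Rows: credit tier (riskiest first). Columns: LTV tier (lowest first).
ANNUAL_PREMIUM = [
    [0.0225, 0.0250],
    [0.0150, 0.0175],
    [0.0050, 0.0075],
]

def premium_rate(credit_score, ltv):
    """Look up the annual premium rate for a borrower's risk category."""
    row = bisect_right(SCORE_CUTS, credit_score)  # higher score -> safer tier
    col = bisect_left(LTV_CUTS, ltv)              # higher LTV -> riskier tier
    return ANNUAL_PREMIUM[row][col]

print(premium_rate(700, 0.85))  # 0.005: strongest credit, lower LTV
print(premium_rate(560, 0.95))  # 0.025: weakest credit, higher LTV
```

A 3-by-2 grid of this kind yields six categories, the same count FHA proposed for Title II, though actual boundaries and rates would come from FHA's actuarial analysis.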
As of May 2007, FHA’s risk-based pricing proposal for the Title II program established six different risk categories, each with a different premium rate, for purchase and refinance loans. FHA used data from its most recent actuarial review to establish six risk categories and corresponding premiums based on the relative performance of loans with various combinations of loan-to-value ratio (the ratio of the amount of the mortgage loan to the value of the home) and credit score. Further, since the current volume of the Title I program is low, FHA officials did not anticipate large losses for the fund. However, the programmatic changes in the proposed legislation are designed to increase the demand for Title I manufactured home loans. FHA officials told us that once the legislation is passed, it would take up to a year to implement changes to the program and to work on developing the risk-based pricing strategy; however, they were unsure if they would implement the program in stages or all at once. As a result of FHA not conducting risk assessments or determining underwriting requirements, potential effects of changes to the Title I program remain unclear. Without such risk identification, FHA’s planning may be adversely affected. In particular, the agency may lack timely indications of whether the program could generate positive or negative subsidies, which in turn would affect decisions about pricing premiums. Currently, FHA does not collect information on the credit scores of borrowers or the type of land on which manufactured homes are placed. Our internal control standards for federal agencies state that an agency must have relevant, timely, and reliable information to run and control its operations. Of the factors identified as risks affecting manufactured home lending, FHA maintains data only on the size and condition (that is, new or existing) of the manufactured home. In 2004, FHA started to collect information on borrower demographics, such as gender, address, birth date, and monthly income. And, because FHA currently monitors a lender only when a claim is filed for insurance and not before the loan is originated, the information collected is not as thorough as would be generated if the program required review prior to the endorsement of the loan. FHA officials told us it would like lenders to electronically capture more information about borrowers during the underwriting process, but that the current information system for the Title I program would need to be updated to accommodate expanded data fields. FHA officials also told us that they plan to collect more detailed borrower, property, and loan-level data to improve tracking and performance measurement, but did not have specific details as of July 2007. However, our interviews with lending officials and the results of our scenario analysis both suggest that credit score and the location of the home (on owned or leased land) are important predictors of loan performance. Without more comprehensive data on its borrowers and lenders, FHA may not be able to successfully estimate default risks in its portfolio, mitigate risks to the insurance fund, and, thus, effectively manage the program. Manufactured homes are an affordable housing option, but they differ from site-built homes in the way they are financed, sold, and the consumer protections available. These differences create additional risks for both the borrowers and lenders of manufactured homes. 
For example, a homeowner's ability to build equity is constrained if the property is located on leased land, and land ownership also affects the lender's ability to recover its losses relative to other types of lending. These risks are reflected in the performance of the Title I program, which has a history of high default rates, as does the manufactured home lending industry. However, the Title I program also provides a unique product as the only active federal program offering insurance for home-only (personal property) loans. According to recent FHA data, the majority of its borrowers are younger and lower-income, suggesting that Title I helps them achieve homeownership.

But changing and expanding a lending program can introduce new risks and increase existing risks. FHA insured only slightly more than 1,400 loans in 2006. Changes to the Title I program are expected to increase loan volume, which could generate the desirable outcome of providing more lower-priced loans to lower-income individuals desiring to purchase a home. Yet both FHA and CBO suggest that the proposed changes could increase FHA's insurance risk and expand the government's liability. The extent of gains or losses to FHA's General Insurance Fund will depend on a variety of factors, such as the borrower's default risk, the lender's ability to recover losses, and the amount of premiums paid. However, FHA has not articulated which borrowers would be served, how the loans would be priced under a risk-based structure, the expected increase in risk to the General Insurance Fund, how the loans would be underwritten, or the additional data it plans to collect to manage the program. Thus, the agency lacks vital information for implementing any changes to the program. If FHA were to conduct such risk identification, it could better plan for changes to the program, target new borrower populations, and more effectively manage existing loan portfolios. In particular, with indications of whether the program could generate positive or negative subsidies, the agency could make appropriate and well-informed decisions about pricing premiums. For example, an analysis similar to the one we performed would provide at least an indication of which scenarios would produce the highest risks of losses to the fund. Finally, more comprehensive data on its borrowers and lenders could allow FHA to mitigate the risks inherent in the manufactured home product.

In light of the growth that a revised Title I program could spur and the manufactured home loan industry's previous experience with high numbers of defaults and repossessions, we recommend that, prior to implementing a revised program, the Secretary of Housing and Urban Development direct the Assistant Secretary for Housing—Federal Housing Commissioner to assess the effects of the proposed changes.
At a minimum, this action should

- articulate which borrowers would be served if the program were expanded, including the financial conditions and creditworthiness of the served borrowers;
- develop criteria or economic models to assess the potential effect of the proposed changes, including risk-based pricing; that is, determine what circumstances or pricing structures would most likely result in a positive or negative subsidy if the proposed changes were enacted; and
- develop detailed proposed changes to its underwriting requirements that account for the unique attributes of manufactured housing and the characteristics of FHA's targeted borrower population.

We also recommend that the Secretary of Housing and Urban Development direct the Assistant Secretary for Housing—Federal Housing Commissioner to develop an approach for collecting the information needed to manage the program, including the credit scores of borrowers and whether the manufactured homes are on owned or leased land.

We provided HUD with a draft of this report for review and comment. HUD provided comments in a letter from the Assistant Secretary for Housing—Federal Housing Commissioner (see app. III). HUD agreed with the recommendations in our report and described plans for implementing these recommendations. More specifically, HUD agreed with our recommendation to assess the effects of the proposed changes prior to the implementation of a revised program. FHA noted that it recently initiated a review of the credit subsidy calculation for the Title I Manufactured Home Loan program and that the results of the study will be used to develop models to test underwriting and premium pricing options. As we noted in our report, this type of analysis, or an analysis similar to the one we performed, could provide an indication of the risks of losses to FHA's General Insurance Fund. HUD also agreed with our recommendation to develop an approach for collecting the information needed to manage the program. As we mentioned in our report, HUD stated that it began collecting additional data, such as borrower information on age and income, in 2004. HUD stated that it did not collect information on the location of the homes (owned or leased land) because the program requirements for both types of homes were essentially the same; however, HUD plans to collect these data under a revised program to track loan characteristics. HUD also agreed to collect appropriate credit and application variables, such as credit scores. Finally, the agency noted that it intended the procedures for originating and underwriting Title I loans to mimic those of FHA's real estate financing programs.

As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Ranking Member, Senate Committee on Banking, Housing, and Urban Affairs; the Ranking Member, Subcommittee on Housing, Transportation, and Community Development, Senate Committee on Banking, Housing, and Urban Affairs; the Chairman and Ranking Member, House Committee on Financial Services; and the Chairman and Ranking Member, Subcommittee on Housing and Community Opportunity, House Committee on Financial Services. We will also send copies to the Secretary of Housing and Urban Development and will make copies available to other interested parties upon request. In addition, the report will be made available at no charge on the GAO Web site at http://www.gao.gov.
If you or your staff have any questions about this report, please contact me at (202) 512-8678 or shearw@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV.

The Chairmen of the Senate Committee on Banking, Housing, and Urban Affairs and its Subcommittee on Housing, Transportation, and Community Development and Senator Jack Reed requested that we evaluate the Federal Housing Administration's (FHA) Title I Manufactured Home Loan program. Specifically, the objectives of this report were to (1) describe selected characteristics of manufactured housing and the demographics of the owners, (2) compare federal and state consumer and tenant protections for owners of manufactured homes, and (3) describe the proposed changes to FHA's Title I Manufactured Home Loan program and assess potential benefits and costs to borrowers and the federal government.

In summary, to address our first objective, we analyzed Census data from the Manufactured Housing Survey from 1996 to 2005 and the 2005 American Housing Survey. To address our second objective, we researched relevant federal laws and laws in eight states (Arizona, Florida, Georgia, Missouri, New Hampshire, North Carolina, Oregon, and Texas) and conducted semistructured phone interviews with state, industry, and consumer group officials in those eight states. We also used the information gathered in the interviews to inform our discussion of the first and third objectives. For our third objective, we interviewed FHA officials and lending officials from programs that provide financing for manufactured homes. To learn about risk-mitigation practices, we also reviewed policies and procedures from the programs mentioned above. Finally, we conducted an analysis using different scenarios that incorporated assumptions of risk for manufactured housing lending to illustrate potential costs of the proposed legislation. We conducted our work in Washington, D.C., Atlanta, and Chicago, from October 2006 through June 2007 in accordance with generally accepted government auditing standards.

To determine selected characteristics of manufactured housing, we analyzed Census data from the Manufactured Housing Survey. Census conducts the Manufactured Housing Survey on a monthly basis and samples approximately 350 manufactured home dealers, or about 1 in 40 of the dealers that sell the manufactured homes shipped each month. The sample of manufactured home dealers surveyed fluctuates based on the total number of manufactured homes shipped. Specifically, we used Manufactured Housing Survey data from 1996 through 2005 to examine trends in the manufactured housing industry, such as the number of homes sold, average sales price, where the homes were placed (owned or leased land), and the size of these homes (single-wide versus double-wide units).

To determine demographic characteristics of manufactured home owners, we relied on the 2005 American Housing Survey. Census conducts the American Housing Survey every 2 years, sampling approximately 55,000 housing units to gather data on apartments; single-family homes; manufactured or mobile homes; vacant housing units; the age, sex, race, and income of householders; housing and neighborhood quality; housing costs; equipment and fuels; and the size of the housing units. We chose to use 2005 American Housing Survey data since they were the latest available.
We did not provide information on trends in earlier years because the sample of manufactured housing used in previous surveys (through 2003) changed, making it difficult to compare 2005 data with previous data. Data on land ownership for manufactured homes (that is, owned or leased land) were limited in the Manufactured Housing and American Housing Surveys; as a result, we could not report differences in the data based on where the manufactured home was placed.

We assessed the reliability of the Manufactured Housing and American Housing Surveys by reviewing information about the data, performing electronic data testing to detect errors in completeness and reasonableness, and interviewing knowledgeable officials regarding the quality of the data. We determined that the data were sufficiently reliable for the purposes of this report.

Because Census data used in our American Housing Survey analyses are estimated based on a probability sample, each estimate is based on just one of a large number of samples that could have been drawn. Since each sample could have produced different estimates, we express our confidence in the precision of our particular sample's results as a confidence interval. For example, the estimated percentage of occupied manufactured homes located in the South was 56.7 percent, and the confidence interval for this estimate ranges from 56.6 percent to 56.8 percent, a margin of error of plus or minus 0.1 percentage points. This is the interval that would contain the actual population value for 95 percent of the samples that could have been drawn. As a result, we are 95 percent (or more) confident that each of the confidence intervals in this report will include the true values in the study population. All variables from the American Housing Survey that are included in this report have 95 percent confidence intervals of plus or minus 5 percentage points or less.

We conducted a literature review and examined relevant studies on manufactured housing. We also conducted a review of newspaper articles from May 2005 to May 2007 to identify where manufactured home park closures occurred in the United States. Because states collect different types of information on manufactured home parks and even define them differently, the resulting variability of the state data makes determining the number of manufactured home parks extremely difficult. Thus, we relied on a database search of national and local newspapers to provide anecdotal information on park closures. We used several different search parameters and keyword searches and identified park closures in 18 states; however, it is possible that other closures occurred in other states during the period we reviewed but were not identified in our searches.

To compare federal and state consumer and tenant protections for owners of manufactured homes, we reviewed federal laws relevant to manufactured housing, such as the Real Estate Settlement Procedures Act and the Truth in Lending Act. We reviewed prior work on state laws for manufactured housing conducted by the National Consumer Law Center and the American Association of Retired Persons (AARP) and also interviewed officials from these organizations. We then selected eight states and reviewed statutes related to the consumer protections provided for foreclosure and repossession and the tenant protections applicable to contracts or acts, such as written lease requirements, rent increases, evictions, and park closures.
The eight states were selected based on a combination of factors, including the volume of FHA Title I loans in the state from 1990 through the first quarter of 2007; the concentration of manufactured housing as a percentage of housing units in the state; information from our interviews of industry and consumer officials; and previous studies conducted on manufactured housing. The table below indicates the characteristics of the states we reviewed.

We also conducted semistructured interviews with regulatory, industry, and consumer officials in each state. We pretested our interview questions on-site in Georgia and conducted the remaining interviews by telephone. We used interview responses to check our interpretation of the state statutes containing consumer and tenant protections applicable to manufactured home owners. In each of the states, we interviewed officials who represented (1) the state regulator for manufactured housing, (2) the state affiliate of the national manufactured housing industry group, the Manufactured Housing Institute, and (3) a consumer advocacy group, such as the state manufactured homeowners' association. In total, we conducted 25 interviews across the eight states.

To synthesize interview data, we compiled the responses by interview question into a document for each state (state summary), which we reviewed for accuracy and completeness. Next, we identified themes among the interviews and created categories within a response, noting the state and type of official interviewed. For example, in our question on placement options for owners of manufactured homes when the park in which they live closes, we identified categories such as (1) a neighboring park, (2) private land, and (3) lack of space to move. We then identified the states that provided a response fitting each category and totaled the number of states in each category. We also used this method to compare installation programs across the eight states based on our interviews with state regulators.

To describe the proposed changes to the Title I Manufactured Home Loan program, we reviewed current and proposed FHA regulations and legislation. Our review of proposed legislation included Senate Bills 2123 (109th Congress, 2005) and 3535 (109th Congress, 2006); House of Representatives Bills 2803 (109th Congress, 2005) and 4804 (109th Congress, 2006); and House of Representatives Bill 2139 and Senate Bill 1741 from the 110th Congress in 2007.

To assess the potential costs and benefits of the proposed changes to the Title I program, we interviewed FHA officials, FHA lenders, Ginnie Mae officials, and officials from federal and other lending programs, such as Fannie Mae and Freddie Mac, the U.S. Department of Agriculture Rural Housing Service, and the Department of Veterans Affairs, as well as community banks, industry and consumer groups, and a rating service. In addition, we interviewed officials from HUD's Office of Inspector General. To learn about risk-mitigation practices, we also reviewed policies and procedures from programs that provide financing for manufactured homes at the above agencies and reviewed relevant literature. A few industry officials also provided information on loan performance for their manufactured home loan portfolios. We also conducted an analysis using different scenarios that incorporated assumptions of risk for manufactured housing lending to illustrate the potential benefits and costs of the proposed legislation.
We incorporated various risk factors unique to manufactured home lending (such as site location and loss mitigation practices of lenders), as well as other commonly used predictors of loan performance, such as credit scores, into a model to illustrate ways in which these key factors might affect the performance of manufactured housing loans and, thus, how variation in these key factors might affect potential gains and losses to FHA's General Insurance Fund. Our estimates relied on assumptions concerning a few key inputs, such as the level of default risk, the net recovery rate of lenders, and insurance premiums. See appendix II for a more detailed description of our scenario analysis methodology.

We also analyzed FHA data, housed in the F-72 database, on the manufactured home loan program. We used these data to review loan performance from 1990 to 2005, the size of the units purchased, and the states in which the loans were originated. We also used the data to generate demographic information on FHA Title I borrowers. However, FHA only began collecting demographic data in 2004, so our analysis was limited to the period from June 2004 through April 2007. In addition, we could not assess where the manufactured homes were placed or the credit scores of the borrowers because FHA did not collect these data. We assessed the reliability of the F-72 database by reviewing information about the data, performing electronic data testing to detect errors in completeness and reasonableness, and interviewing knowledgeable officials regarding the quality of the data. We determined that the data were sufficiently reliable for the purposes of this report. Finally, we reported information provided by HUD, using 2005 Home Mortgage Disclosure Act data, on manufactured housing and the number of personal property loans originated by the FHA Title I program compared with the rest of the market. We also assessed the data reliability of this output and the computer program used to extract the information and determined the data were sufficiently reliable for our purposes.

To gain an understanding of the effects of the proposed changes to the Federal Housing Administration's (FHA) Title I Manufactured Home Loan program, we developed an approach that could illustrate potential effects of the changes on the program. Our model of different scenarios used assumptions to illustrate the importance of various risk factors unique to manufactured home lending (such as site location and loss mitigation practices of lenders), as well as other commonly used predictors of loan performance, such as credit scores. For instance, the ability of the owner of a manufactured home to build equity may be limited when the land is leased, which also often increases the risks associated with the loan. If a borrower with a home on leased land were to default, lenders could face higher costs and lower recoveries (relative to site-built homes) in trying to repossess, move, and resell the personal property. We developed a model to illustrate some of the ways in which these key factors may affect the performance of home-only manufactured housing loans and, thus, how variation in these key factors may affect potential gains and losses to FHA's General Insurance Fund, which is supported by insurance premiums and used for several FHA insurance programs, including the Title I program.
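In general terms, the quantity at stake in each scenario is the present value of premium income less the present value of FHA's share of claim losses. The expression below is a simplified statement of that calculation; the notation is ours, introduced only for exposition, and is not drawn from FHA or GAO model documentation:

\[
\text{Net position} \;=\; U \;+\; \sum_{t=1}^{T} \frac{S_t\,B_t\,a}{(1+i)^t} \;-\; \sum_{t=1}^{T} \frac{D_t\,B_t\,(1-r)}{(1+i)^t},
\]

where \(U\) is the up-front premium, \(a\) is the annual premium rate applied to the declining balance \(B_t\) of the surviving share of loans \(S_t\), \(D_t\) is the share of loans defaulting in year \(t\), \(r\) is the lender's net recovery rate, \(i\) is the discount rate, and \(T\) is the loan term. A positive value corresponds to a gain to the General Insurance Fund, and a negative value to a loss.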
Based on our examination of loan performance data from manufactured home lenders and on discussions with officials with substantial manufactured housing lending experience, we identified some important characteristics of the performance of home-only manufactured housing loans. Our estimates rely on assumptions concerning a few key inputs: annual prepayment rates, annual default rates (which vary over different time intervals), and the net recovery rate (which measures the portion of the loan balance recovered by the lender in cases of default). Further, because FHA has not yet developed its risk-based pricing criteria for the proposed legislative changes, we made different assumptions about the level of up-front mortgage insurance premiums and periodic insurance premium payments based on the amounts discussed in the proposed legislation. By varying the default rate, loss recovery, and premium rate assumptions, we were able to generate a variety of loan performance and recovery scenarios and illustrate, in a very general way, the potential for gains and losses to FHA's General Insurance Fund that characterizes each scenario.

In the absence of available data on the credit scores of FHA borrowers and the location of the homes (owned or leased land), we attempted to benchmark these scenarios based on the experience of FHA's Title I program since 1990 and of non-FHA personal property manufactured housing loans. In terms of FHA Title I experience since 1990, while the number of loans originated dropped significantly from the early to mid-1990s, cumulative defaults expressed as a percentage of originated loans did not fall below 10 percent from 1990 to 2002 and exceeded 25 percent in 8 of those 13 years (see fig. 18). However, loans from 2003 to 2006 may not yet reflect the eventual default experience because they are recent loans, and lending industry officials explained that the peak default period for these types of loans generally occurs from the third to the fifth year. In terms of non-FHA loan performance, cumulative losses typically have been above 15 percent for loans originated between 1997 and 2001.

The scenarios incorporate assumptions based on factors such as annual default rates for different yearly intervals, loan interest rates, and loan terms. Once we established these parameters, we factored in additional assumptions and variations for the net recovery rate of the lender and an insurance premium schedule for the borrower, based on discussions with lending industry officials about possible default scenarios, recovery outcomes, and possible legislative changes regarding FHA's up-front and annual premiums. The discussion below provides more detailed information on our assumptions.

Assumptions on Annual Default Rates. We characterized the peak period of default as years 3 through 5, and we described the default experience in years after this peak period in terms of a percentage of the default rate assumed to hold during the peak period. In general, and based on our discussions with lenders and others, we assumed that default rates in years after the peak period would be 75 percent of what they were during the peak period. In the high-loss scenario, we assumed that the peak-period default rate also held in years 6 through 9 before dropping to 75 percent of the peak value.
Our assumptions about default rates reflect an important characteristic of home-only manufactured housing loans: even after years of loan amortization, a borrower may not have enough equity in the home to avoid a default in the face of adverse financial conditions. We present three variations of default: a low default experience, a moderate default experience, and a high default experience. In general, the low default experience would reflect conditions in which borrowers possessed good credit quality, lenders used high-quality underwriting requirements, and lenders' security interests were well protected in terms of those factors that are associated with the preservation of value, such as the placement of the home (owned land versus leased land) and installation. The high default experience would reflect conditions in which borrowers are of poorer credit quality, and collateral values and lenders' security interests are also poorer (see fig. 19).

Assumptions on Annual Prepayment Rates. We assumed that prepayments were constant at 4 percent per year. Modest changes in this level did not lead to much difference in our results. Based on our discussions with lenders and others, we believe manufactured home-only loan borrowers were not as likely as other homeowners to prepay in the face of favorable refinancing opportunities. As a result, some of these loans default in later years, but they also continue to generate annual insurance premiums.

Additional Scenario Assumptions. Using the prepayment rates and default rates that we selected, we calculated the value of claims in a given year as the (unpaid) principal balance due in that year, based on an amortization schedule relating the selected interest rate and loan term. Based on assumed prepayment and default patterns, we calculated cumulative defaults, losses (expressed as a percentage of the original loan balance), and insurance premiums paid by year. We also calculated the present value of FHA's share of losses and the present value of annual insurance premiums.

Assumptions on the Net Recovery Rate of Lenders. To provide variations in our analysis, we made different assumptions about lenders' ability to recover losses when a loan defaults. Based on discussions with industry officials, we assumed that lenders with a strong recovery program (which may include a good network of dealers who resell manufactured homes) may have a net recovery of 50 percent per claim. Lenders with a moderate net recovery are assumed to receive 33 percent of the claim, and lenders with a low net recovery may receive 25 percent of the claim.

Assumptions on the Insurance Premiums. Insurance premiums may include an up-front payment and an annual payment. FHA has not yet developed its proposed risk-based pricing for potential FHA Title I Manufactured Home Loan borrowers. However, several bills introduced in Congress suggest that the up-front insurance premium would not exceed 2.25 percent of the loan amount and that the annual insurance premium would be 1 percent of the annual unpaid principal balance of the loan. For our analysis, we assumed two different potential up-front premium amounts: the highest up-front premium was 2.25 percent of the original loan amount, and the lowest was 1 percent of the original loan amount. We also assumed two different annual premiums: the highest annual premium was defined as 1 percent of the declining loan balance, and the lowest was defined as 0.5 percent of the declining loan balance.
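To make the mechanics of these assumptions concrete, the sketch below shows in simplified form how a scenario calculation of this kind can be assembled; it instantiates the net-position expression given earlier in this appendix. It is a minimal illustration, not GAO's actual model: the amortization, prepayment, recovery, and premium logic follow the assumptions stated above, but the peak default levels, the pre-peak ramp-up, the 7.5 percent note rate, the 20-year loan term, and the 5 percent discount rate are hypothetical placeholders chosen only to make the example self-contained.

```python
# Illustrative sketch of the scenario calculation described in this appendix.
# This is NOT GAO's actual model; parameter values other than those stated in
# the appendix (recovery rates, premiums, peak-period timing) are placeholders.

def amortized_balance(principal, annual_rate, term_years, year):
    """Unpaid principal balance at the end of `year` for a level-payment loan."""
    r, n = annual_rate, term_years
    if r == 0:
        return principal * (1 - year / n)
    # Standard amortization: B_t = P * ((1+r)^n - (1+r)^t) / ((1+r)^n - 1)
    return principal * ((1 + r) ** n - (1 + r) ** year) / ((1 + r) ** n - 1)

def default_path(peak, term=20, high_loss=False):
    """Annual default rates: peak in years 3-5 (extended through year 9 in the
    high-loss case), 75 percent of peak thereafter; the pre-peak ramp is assumed."""
    rates = []
    for year in range(1, term + 1):
        if year < 3:
            rates.append(0.5 * peak)          # assumed ramp-up before the peak
        elif year <= 5 or (high_loss and year <= 9):
            rates.append(peak)
        else:
            rates.append(0.75 * peak)
    return rates

def net_position(default_rates, recovery_rate, upfront_premium, annual_premium,
                 prepay_rate=0.04, note_rate=0.075, term=20, discount=0.05):
    """Present value of premiums less FHA's share of losses, per $1 insured."""
    surviving = 1.0                  # share of original loans still active
    pv = upfront_premium             # up-front premium collected at origination
    for year in range(1, term + 1):
        balance = amortized_balance(1.0, note_rate, term, year)
        defaults = surviving * default_rates[year - 1]
        # Claim equals the unpaid balance; FHA absorbs what the lender
        # cannot recover.
        pv -= defaults * balance * (1 - recovery_rate) / (1 + discount) ** year
        # Annual premium accrues on the declining balance of surviving loans.
        pv += surviving * balance * annual_premium / (1 + discount) ** year
        surviving *= 1 - default_rates[year - 1] - prepay_rate
    return pv

# The peak default levels for the three default experiences are placeholders.
for label, peak in [("low", 0.02), ("moderate", 0.05), ("high", 0.08)]:
    result = net_position(default_path(peak, high_loss=(label == "high")),
                          recovery_rate=0.33,
                          upfront_premium=0.0225,
                          annual_premium=0.01)
    print(f"{label:>8} default experience: net PV per $1 insured = {result:+.3f}")
```

Rerunning such a sketch with the three net recovery rates (50, 33, and 25 percent) and the two up-front and two annual premium levels described above would reproduce, in stylized form, the grid of gain and loss scenarios discussed in this appendix.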
In addition to the contact named above, Andy Finkel (Assistant Director), Steve Brown, Tania Calhoun, Nadine Garrick, Phil Herr, Alison Martin, John Mingus Jr., Marc Molino, Tina Paek, and Barbara Roesmann made key contributions to this report.
Pending legislation affecting the Federal Housing Administration's (FHA) Title I Manufactured Home Loan program would increase loan limits, insure each loan, incorporate stricter underwriting requirements, and set up-front premiums. GAO was asked to review (1) selected characteristics of manufactured housing and the demographics of the owners; (2) federal and state consumer protections for owners of manufactured homes; and (3) the potential benefits and costs of the proposed changes for borrowers and the federal government. In addressing these objectives, GAO analyzed select Census data; researched federal laws and laws in eight states; interviewed local, state, and federal officials; and analyzed various scenarios that might affect Title I program costs.

According to 2005 American Housing Survey data, most manufactured homes (factory-built housing designed to meet the national building code) were located in rural areas in southern states, and most were occupied by lower-income owners rather than renters. Although the market for new manufactured homes declined substantially from 1996 to 2005, buyers increasingly bought larger homes and placed them on private property rather than in manufactured home parks. In addition, some states are experiencing park closures, with the properties being converted to other uses. Overall, manufactured homes can be an affordable housing option, with monthly housing costs lower than for other housing types.

Owners of manufactured homes generally have more consumer protections if their homes are considered real rather than personal property, but the protections provided by laws in the states GAO examined vary. Consumer protections extending to lending and settlement processes for personal property loans are not as broad as those for real property loans (mortgages). Also, delinquent Title I borrowers can be subject to repossession or foreclosure, but the consumer protections for repossession are often less extensive than those for foreclosure. State laws give owners of manufactured homes on leased land varying levels of notice, protection, and compensation related to the length of leases, rent increases, evictions, and park closures.

According to some FHA and lending officials, potential benefits of the proposed changes for borrowers include loans big enough to buy larger homes and more financing as more lenders participate in the program. The program insured about 24,000 loans in 1990 but only about 1,400 loans, representing $54 million in mortgage insurance, in 2006. While the changes could benefit borrowers, according to FHA and the Congressional Budget Office, they could also expand the government's liability. To gain an understanding of the effects of the proposed changes, GAO analyzed various scenarios. Although risk factors unique to manufactured home lending (such as placement on leased land) as well as commonly used predictors of loan performance (such as credit scores) are associated with default risk, these data were not available. Instead, GAO modeled different variations of borrower default risk and other factors (such as premiums and lender recovery) that were based on the experience of FHA loans to illustrate how variations in these key factors could affect potential gains and losses to FHA's General Insurance Fund. The analysis suggests that in all instances where borrowers had medium or high default risk, the fund would experience a loss.
However, FHA has not articulated which borrowers would be served or how the loans would be underwritten and priced under a risk-based structure, nor has it collected data on credit scores and land ownership type. FHA explained that, among other reasons, it had not done so because the Title I program was currently a low-volume program. As a result, the effects of the proposed changes are unclear.
BIE’s mission is to provide Indian students with quality education opportunities starting in early childhood. Its Indian education programs derive from the federal government’s trust relationship with Indian tribes, a responsibility established in federal statutes, treaties, and court decisions. Students attending BIE schools generally must be members of federally recognized Indian tribes, or descendants of members of such tribes, and reside on or near federal Indian reservations. About one-third of BIE schools serve students from the Navajo Nation. BIE schools are primarily funded through Interior, but like public schools, they also receive annual formula grants from Education. Like state educational agencies that oversee public schools in their respective states, BIE administers and oversees the operation of these Education grants, including grants through two Acts: the Elementary and Secondary Education Act of 1965 (ESEA) and the Individuals with Disabilities Education Act (IDEA). Title I, Part A of ESEA (Title I)—the largest funding source for kindergarten through grade 12 under ESEA—provides funding to expand and improve educational programs in schools with students from low-income families and may be used for supplemental services to improve student achievement, such as instruction in reading and mathematics. BIE schools receive IDEA funding for special education and related services, such as physical therapy or speech therapy, for children with disabilities. BIE has access to some detailed, real-time expenditure data for BIE- operated schools since they are operated directly by BIE. For example, BIE has direct access to these schools’ costs for transportation, instruction, and operations. Access to these data enables BIE officials to closely track BIE-operated school spending. However, certain costs for administrative services to operate BIE schools, such as procurement or human resources, are not performed or tracked by BIE. As we reported in September 2013, BIE is part of the Office of the Assistant Secretary- Indian Affairs (Indian Affairs), and Indian Affairs performs many administrative functions to support BIE-operated schools that a school superintendent’s office or school district typically would. However, Indian Affairs does not currently identify all its costs to support BIE schools, despite our 2003 report in which we recommended that it do so, in accordance with federal accounting standards. Meanwhile, the Tribally Controlled Schools Act of 1988 limits the financial information that most tribally-operated schools are required to submit to BIE. Tribal school grantees must complete an annual report that includes, among other things, an annual financial statement reporting revenue and expenditures as well as a financial audit in accordance with the Single Audit Act of 1984. This law, as implemented by the Office of Management and Budget, requires a financial audit of grantees who expend at least $500,000 in federal grants and other assistance in a fiscal year. These audits are commonly called “single audits.” The audits are carried out at the end of a school’s fiscal year and are conducted by independent auditors who are contracted by the grantee. They include both the entity’s financial statements and the records of spending of federal grant awards for each program. Auditors determine whether the grantee met the compliance requirements listed in the Office of Management and Budget’s Circular No. A-133 Compliance Supplement for each program. 
Auditors also report on the entity’s internal control over compliance for these programs and report identified control deficiencies or noncompliance in the single audit report. It is the grantee’s responsibility to follow up and take corrective actions on the audit findings. Auditors also must follow up on findings from past years’ audits, as reported by the grantee. According to Indian Affairs’ policy manual, BIE has several responsibilities that pertain to single audit reporting, including ensuring that audits are completed and reports submitted; providing technical advice and counsel to grantees as requested; and issuing a management decision on audit findings within six months after receipt of the audit report. BIE is under Indian Affairs, and the BIE director is responsible for the direction and management of education functions, including the formation of policies and procedures, supervision of all program activities, and approval of the expenditure of funds for education purposes. BIE has a central office in Washington, D.C.; a major field service center in Albuquerque, New Mexico; three regional offices (one in the east and two in the west, including one serving only schools in the Navajo Nation); and 22 education line offices located on or near Indian reservations, 17 of which currently have responsibilities for financial oversight. On June 13, 2014, the Secretary of the Interior issued an Order restructuring BIE. The reorganization is to occur in two phases, with the first phase becoming operational before the start of the 2014-15 school year. The second phase is anticipated to be operational by the end of the 2015-16 school year. Additionally, the Order states that Interior will strengthen and support the efforts of tribal nations to directly operate BIE schools. At the time of our review, several offices were responsible for oversight of BIE school expenditures (see fig. 1). BIE’s local education line offices have been responsible for providing oversight and technical assistance to both BIE- and tribally-operated schools. Education line office administrators are the lead education administrators for these offices and have responsibilities similar to school district superintendents for BIE-operated schools. Additionally, education line office administrators serve as grant officers to tribally-operated schools, allowing or disallowing costs questioned in BIE schools’ single audits. BIE’s Division of Performance and Accountability (Performance and Accountability), located in Albuquerque, administers and oversees Education-funded programs for BIE schools and develops strategies to improve academic achievement. Staff are responsible for overseeing BIE- and tribally-operated schools’ IDEA and ESEA programs. BIE’s Division of Administration (Administration), also in Albuquerque, implements budget policies, procedures, processes, and systems for all fiscal and accounting functions for education programs and schools. According to senior BIE officials, Administration staff oversee expenditures for Interior and Education-funded programs for both BIE- and tribally-operated schools, but their main focus is on BIE-operated schools. Indian Affairs’ Office of Internal Evaluation and Assessment—staffed mainly by accountants and auditors—is responsible for providing guidance and oversight to BIE to ensure that internal controls are established and maintained. 
The office maintains an automated tracking system to provide Administration and line office management with information on the status of tribally-operated schools' single audits; notifies Administration and line office management when schools have failed to submit the audits; reviews all audits submitted to Administration; and is responsible for providing technical assistance to line office administrators. This office provides these and many other services to all Indian Affairs organizations. According to the office's director, BIE represents a very small portion of the office's overall portfolio.

All BIE schools—both BIE-operated and tribally-operated—receive almost all of their operating funds from federal sources, namely, Interior and Education. Specifically, these elementary and secondary schools received approximately $830 million in fiscal year 2014—including about 75 percent, or $622 million, from Interior and about 24 percent, or approximately $197 million, from Education. BIE schools also received small amounts of funding from other federal agencies, mostly the Department of Agriculture (see fig. 2).

The largest source of funding for BIE schools is Interior's Indian School Equalization Program (ISEP). ISEP provides funding for basic instruction; supplemental instruction, such as language development and gifted and talented programs; staffing to oversee student residences and dormitories; and food service, among other services. The second and third largest sources of funding for BIE schools are Education's Title I and IDEA programs, respectively. Funding under Title I accounted for about $93.2 million of the $120.9 million that BIE received in fiscal year 2014 under ESEA programs.

According to BIE documents, except for a 2-year infusion of $149 million in 2009 and 2010 through the American Recovery and Reinvestment Act of 2009 (Recovery Act) and a related act, total annual funding from Interior and Education fluctuated only slightly from fiscal year 2009 to fiscal year 2014. Excluding Recovery Act funding, annual funding for BIE schools increased overall from fiscal year 2009 to fiscal year 2014 by about 6 percent in nominal terms, which does not account for inflation. However, adjusting for inflation, we estimate that funding during that period actually decreased slightly, by about 1 percent. Meanwhile, for public schools, funding data from Education were generally not available for the full period from fiscal year 2009 to fiscal year 2014.

According to BIE officials, very little funding for BIE schools comes from non-federal sources. BIE-operated schools received very little tribal, state, or other revenue in fiscal year 2014, according to BIE. However, some tribally-operated schools received a small amount of revenue from other sources, such as tribes. Among the tribes served by the schools we visited in four states, officials from only one tribe, which operated casinos, reported contributing a small amount of funding to its schools.

Unlike BIE schools, the vast majority of funding for public schools nationwide comes from state and local sources, while a relatively small proportion comes from federal sources. For example, in school year 2009-10, public schools nationwide received about 87 percent of their funding from state and local sources—43 percent from state sources and 44 percent from local sources. In contrast, federal funding generally made up about 9 percent of public schools' funding from school year 2002-03 to 2008-09.
Subsequently, in school year 2009-10, federal funding for public schools comprised about 13 percent, which was slightly higher than in previous years due in part to the Recovery Act. (See fig. 3.) For public schools nationally, Education's Title I and IDEA programs provide the largest amounts of federal funding. The percentage of federal funding for public schools varies across states and local school districts. For example, in one district we visited, which was located near BIE schools, federal funding accounted for about 35 percent of total funding, and in another district, about 68 percent. A key reason for the larger amount of federal funding for these districts was their funding from a federal formula grant program known as Impact Aid. Impact Aid is intended to compensate school districts for funding losses resulting from federal activities. These funds were in addition to the districts' Title I and IDEA funding.

Average per-pupil expenditures for the 32 BIE-operated day schools were at least 56 percent higher than in public schools nationally in school year 2009-10 and were higher in the four categories of operating expenditures that we analyzed. According to our analysis, BIE-operated day schools spent an estimated average of at least $15,391 per pupil, while public schools nationwide spent an estimated average of $9,896, excluding food service. (See fig. 4.) When Recovery Act spending was excluded, per-pupil expenditures for these BIE-operated schools were at least 61 percent greater than at public schools. Similarly, we found higher per-pupil expenditures at BIE-operated day schools in the categories of instruction (the largest category), transportation, facilities operations and maintenance, and administration. For example, transportation expenditures were more than twice as high at BIE-operated schools as at public schools nationwide.

Unlike the national averages, per-pupil expenditures at 4 of the 16 BIE schools that we visited appeared similar to those at nearby public school districts. Like the BIE schools we visited, these nearby rural public school districts served students who were mostly Indian and low-income. Thus, certain student demographics as well as the geographic location of the schools were comparable. In one state, two tribally-operated schools spent an estimated $17,066 per pupil, and the nearby public school district spent $17,239 per pupil. In another state, per-pupil expenditures, excluding facilities operations and maintenance, were $12,972 at two BIE-operated schools and $11,405 at the nearby public school district. However, the findings from this sample are not generalizable to all BIE and nearby public schools. Also, data limitations prevented us from comparing the other 12 BIE schools that we visited with nearby public schools.

Several factors help to explain the higher per-pupil expenditures at the 32 BIE-operated schools relative to public schools nationwide (see table 1).

Student demographics. Students in BIE schools—both tribally-operated and BIE-operated—tend to have different demographic characteristics than students in public schools nationally. These characteristics, including higher poverty rates and a higher percentage of students with special needs, are among the factors yielding higher per-pupil expenditures on average, as we have noted in previous reports. In BIE schools, students tend to be from lower-income households than public school students.
For example, all BIE schools were eligible for Title I funding on a school-wide basis because they all had at least 40 percent of children from low-income households in school year 2009-10, according to an Education study. In contrast, half of all public schools were eligible for Title I funds on a school-wide basis. In addition, BIE-operated day schools have a higher percentage of students receiving special education services than public schools nationwide, according to BIE and Education data. Students in special education generally need additional services, such as physical, occupational, or speech therapy, so expenditures tend to be higher for these students.

Smaller enrollment and remote location. The smaller enrollment and isolated location of many BIE schools contribute to their higher expenditures. In school year 2009-10, BIE schools, including BIE-operated schools, generally had smaller average enrollment than public schools nationwide. For example, over 85 percent of BIE-operated day schools (28 of 32 schools) had fewer than 300 students according to our analysis, as compared to about 30 percent of public schools nationwide. Along with smaller size, the remote location of many BIE schools hinders their ability to benefit from economies of scale, as we have previously reported. For example, expenditures for facilities operations and maintenance may be higher, to the extent that many schools are geographically dispersed and unable to share facilities personnel, supplies, or services.

Other factors may help explain higher per-pupil expenditures at BIE-operated schools, including the higher costs of instruction, transportation, facilities operations and maintenance, and administration. Instructional expenditures were greater at BIE-operated schools than at public schools nationwide, due partly to teacher salaries. Teacher salaries were typically higher in BIE-operated schools than in public schools, according to BIE salary schedules and an Education study of school year 2011-12. For example, in school year 2011-12, the yearly base salary for a teacher with a bachelor's degree and no experience at BIE-operated schools was $39,775, compared to averages of $35,500 at public schools nationwide and $33,200 at rural public schools nationwide. The higher salaries at BIE-operated schools are mainly due to a federal law that requires BIE to pay teachers using the same pay scale as teachers at Department of Defense schools located overseas. The law requires similar pay, in part, to help recruit and retain teachers at BIE-operated schools.

Although our analysis of per-pupil expenditures focused on BIE-operated schools, teacher salaries at tribally-operated schools—which set their own salary amounts—generally were lower than teacher salaries at public schools. According to an Education study of school year 2007-08 (the most recent study that included tribally-operated schools), all BIE schools—two-thirds of which were tribally-operated—paid teachers, on average, an estimated base salary of $41,500. By contrast, public schools nationwide paid, on average, an estimated base salary of $49,600, including an average of $44,000 in rural public schools. According to Education's study of school year 2007-08, teachers at BIE schools and public schools nationwide generally appeared to have similar years of experience, but a slightly lower percentage of teachers at BIE schools had a master's degree as their highest degree compared to public school teachers nationally.
In addition, BIE schools had lower median student-teacher ratios than public schools in school year 2009-10, which makes per-pupil spending higher. The median ratio was 11.4 students per teacher at all BIE schools and 15.5 students per teacher at public schools, according to an Education study. This lower ratio mainly relates to the smaller average enrollment at BIE schools than at public schools, which limits BIE schools' ability to benefit from economies of scale.

BIE reported that the remote locations of its schools and poor road conditions, among other factors, contributed to higher transportation expenditures than those of public schools. For example, daily round-trip bus routes averaged about 80 miles for all BIE schools and ranged from a few miles to more than 320 miles, according to BIE officials. Also, poor road conditions, including dirt or unimproved roads, lead to wear and tear on vehicles transporting students, according to BIE budget documents and administrators at schools we visited (see fig. 5). By contrast, slightly more than half of public schools are located in cities or suburbs and therefore may be unlikely to face such long bus routes or poor road conditions. As transportation expenditures increase, BIE reported that schools are trying to contain costs in various ways, such as combining or reducing bus routes or using more appropriate vehicles for the number of students or road conditions.

Facilities-related expenditures may be higher at BIE-operated schools than at public schools because a greater proportion of the schools that BIE manages are in poor condition. BIE reported that about one-third of its schools were in poor condition as of the end of school year 2012-13, while two-thirds were in fair or good condition. At one school we visited that BIE identified as being in poor condition, the main classroom building was built in the 1930s, and several other school buildings no longer had functioning heat. Conversely, an estimated 3 percent of public schools nationwide with permanent buildings reported that they were in poor condition in school year 2012-13, excluding temporary, or portable, buildings. BIE-operated schools may also have higher facilities-related expenditures than public schools because some BIE schools are responsible for funding the operations and maintenance of many services that public school districts typically are not, such as water and sewer service or trash and snow removal. For example, at another BIE school we visited that was in poor condition, the 50-year-old building lacked a sprinkler system and was located in a remote area. As a result, school officials said that they provide a fire truck and supplies for the school as well as for the local vicinity, contributing to higher operations and maintenance expenditures.

As with other categories of spending, higher per-pupil spending on administration at the 32 BIE-operated day schools relates partly to their smaller average enrollment compared with public schools. As we reported in 2003, expenditure categories for administration are not comparable for BIE and public schools in light of their different organizational structures. At BIE-operated day schools in school year 2009-10, about $1,502 of the $15,391 in per-pupil expenditures, or about 10 percent, was for administration.
Of the $1,502 per pupil for administration, $1,149 was spent on school-level administration, such as principals' salaries; $47 was allotted for the school board; and an estimated $306 was spent by BIE education line offices. These offices provide academic and financial guidance, similar to a school district's office of the superintendent. Meanwhile, public schools in school year 2009-10 spent, on average, $1,088 per pupil, or about 10 percent, for administration: about $568 for school-level administration, $186 for the school board and the office of the superintendent, and $334 for additional administration, such as procurement, finance, and payroll. However, the costs for administering BIE-operated schools are greater than $1,502 per pupil, since our estimate excludes the costs of the administrative services that Indian Affairs provides to BIE. Indian Affairs performs many administrative functions to support BIE-operated schools that a school or school district typically would, such as procurement, finance, and human resources. However, Indian Affairs does not currently identify all the costs associated with supporting these schools. These costs and services are to shift from Indian Affairs to BIE by the end of school year 2015-16, according to a June 2014 Order from the Secretary of the Interior.

Line office administrators are integral to overseeing school expenditures, but the number of full-time administrators who oversee school expenditures decreased from a total of 22 in 2011 to 13 in 2014 (6 permanent and 7 acting). The workload of the vacant administrator positions has been absorbed by the remaining administrators. Besides key responsibilities in academic and other aspects of school administration, line office administrators' responsibilities include

- overseeing and evaluating schools' expenditures;
- ensuring that tribally-operated schools submit their annual single audits and providing them to the Federal Audit Clearinghouse and BIE's Administration office;
- approving or disapproving costs that are questioned in tribally-operated schools' single audits, and working with tribes and schools to ensure that any single audit findings are resolved; and
- collaborating with BIE's Performance and Accountability officials to identify schools that may need additional oversight because of failure to comply with Education program requirements, such as Title I and IDEA fiscal requirements.

Beginning in 2012, when BIE officials announced plans to reorganize and possibly close line offices, staff began resigning and retiring. Staff departures increased further when Indian Affairs announced Voluntary Early Retirement Authority and Voluntary Separation Incentive Payments in 2013. From fiscal years 2011 to 2014, funding for management of BIE schools, including for line office staff, decreased by about 38 percent. As a result of having fewer line office administrators, the remaining administrators have been assigned additional responsibilities, and their workload has increased, making it challenging for them to conduct needed oversight. For example, administrators in two of the three line offices we interviewed reported that over the last 2 years their responsibilities have increased significantly, and they are now overseeing and providing technical assistance to a greater number of schools. One official told us that in addition to the 7 schools he usually monitors, he is responsible for 11 additional schools that two other line offices had been responsible for overseeing.
According to an administrator in a third line office, located in the Navajo Nation, responsibilities in that office have also changed significantly, and currently one line office administrator is responsible for overseeing 65 Navajo schools' financial matters, including the tribally-operated schools' annual single audits. With increased responsibilities and travel budget constraints, line office administrators in all three BIE regions reported that conducting site visits and maintaining regular interaction with school personnel are difficult. In particular, line office administrators said, among other things, that it is challenging to obtain and review school documents, develop working relationships with school officials, and provide technical assistance to schools, all of which are activities that contribute to oversight. For example, one line office administrator reported that she reviews hard-copy special education files during site visits to ensure that funding is used to provide students with needed services. She explained that she is only able to access these files when visiting schools.

According to a high-ranking BIE official, increased line office responsibilities can include working outside of one's assigned region in geographically dispersed areas. For example, a line office administrator in North Dakota also serves as the acting administrator for a line office in Tennessee and is responsible for overseeing and providing technical assistance to schools in five states—Florida, Louisiana, Maine, Mississippi, and North Carolina. Similarly, a line office administrator in New Mexico and another in Arizona are responsible for overseeing school expenditures for schools in Montana.

The challenges line office administrators confront in overseeing school expenditures are further exacerbated by a lack of financial expertise and training. For example, although line office administrators make key decisions about single audit report findings, such as whether funds are being spent appropriately, they are not auditors or accountants. Additionally, the administrators responsible for the three line offices we visited said that they did not have the financial expertise to understand the content of single audits. Although Indian Affairs has offered occasional webinars pertaining to single audits in the last two years, some line office administrators have not received this training. In fact, the majority of BIE's acting line office administrators (4 out of 7) have not attended these webinars, though they may especially need training on their new job responsibilities. Additionally, no BIE staff have attended any in-person single audit training since at least 2011.

Although BIE officials reported in September 2013 that eliminating line office positions would create problems if appropriate plans were not developed and implemented, the agency has not developed a plan for meeting its workforce needs. According to BIE officials, they have not hired new line office administrators for 4 years due to budget cuts. As a result of these cuts, Interior had a hiring freeze for about 1.5 years and currently has a cap on hiring. Although BIE can request waivers to hire additional staff, it has not done so for line office positions in this fiscal environment. BIE has a responsibility to employ the staff needed to adequately oversee school expenditures.
To the extent that line office administrators are currently stretched thin without an effective plan for realigning staff responsibilities, BIE's efforts to fulfill its mission of educating Indian students are impaired. BIE's approach to staffing line offices runs counter to federal internal control standards and key principles for effective strategic workforce planning. Federal internal control standards state that staff need to possess and maintain a level of competence that allows them to accomplish their assigned duties. Additionally, key principles for workforce planning include: (1) aligning an organization's human capital program with its current and emerging mission and programmatic goals and (2) developing long-term strategies for acquiring, developing, and retaining staff to achieve programmatic goals. The appropriate number and geographic distribution of employees can further support organizational goals and strategies and enable an organization to have sufficient, adequately trained staff. Absent a comprehensive workforce plan that responds to staff shortages, BIE is jeopardizing its ability to oversee both BIE-operated and tribally-operated schools to ensure their funds are being used for their intended purpose of providing students a quality education. Strategic workforce planning for oversight of school expenditures is especially important for BIE as it implements a recently announced restructuring. On June 13, 2014, the Secretary of the Interior issued an Order to restructure BIE using existing resources for the 2014-15 school year. The Order emphasizes the importance of improving academic achievement at BIE schools, creates new offices in BIE, and transfers responsibilities for certain school support services from other Indian Affairs offices to BIE. However, the Secretarial Order does not address any issues regarding the oversight of school expenditures. For example, although line offices are discussed within the context of restructuring and will still be responsible for providing technical assistance to schools, it is unclear whether they will continue to have a role in monitoring school expenditures. Further, the Order is silent on the status and roles of Performance and Accountability and the Administration office, the two other BIE entities currently involved in monitoring school expenditures. In addition, the Order does not address concerns about the shortage of staff responsible for overseeing school expenditures or gaps in their expertise and training. In late June 2014, a senior BIE official said that the status of Performance and Accountability is still unclear and that the Administration office will become BIE's School Operations Division, responsible for acquisition and grants, among many other duties. Although the Secretarial Order requires the Assistant Secretary-Indian Affairs to complete an analysis of new work functions and develop workforce plans, it is unclear whether the workforce plans will address whether BIE has an adequate number of staff with the required knowledge and skills to oversee BIE school spending. Moreover, the timeframe for completing these activities is limited. Specifically, the Order directs BIE and the Assistant Secretary-Indian Affairs to complete these activities in less than 3 months—from mid-June through August 2014. As of mid-August, high-ranking officials reported that BIE was unlikely to meet these timeframes. This compressed timeframe makes it extremely challenging to conduct effective workforce planning.
According to our prior work, as part of workforce planning an agency should first identify current and future needs, including the appropriate number of employees, the key competencies and skills for mission accomplishment, and the appropriate deployment of staff across the agency. After completing these activities, the agency should then create strategies for identifying and filling gaps. While BIE is restructuring to improve operational support to schools and enhance school performance, it must also ensure that federal funding to the schools is used for its intended purposes. Therefore, it is critical to ensure that there are sufficient staff assigned to oversee school expenditures and that they have the appropriate expertise and training to do so. If BIE does not develop a detailed workforce plan for staff overseeing school expenditures, its ability to effectively provide technical assistance to tribes and build tribal capacity to operate schools will be impaired. BIE oversees tribally-operated school spending primarily through information provided in the schools' annual single audits; however, it does not have a process in place for consistently documenting actions it takes to respond to audit findings. For example, when we requested documents describing actions BIE had taken in response to audit findings at six tribally-operated schools, BIE could only produce documents for one of the schools. BIE officials told us they could not produce documents for the remaining five schools because the line office staff assigned to oversee the schools' spending had either retired or were unavailable. Because BIE lacks a process to document the steps it took to address the issues identified at the schools, BIE staff newly assigned to oversee these schools' expenditures have no way to determine how the schools' audit findings were resolved. Consequently, they must start anew when following up on future single audit findings and other oversight activities. Without a mechanism to ensure sustained oversight of school expenditures, BIE is not well positioned to improve financial management at these schools and ensure they are spending Interior and Education funds to provide Indian students a quality education. According to internal control standards, oversight activities should ensure that the findings of audits are promptly resolved and that they are recorded in an accurate and timely manner. In addition, documents should be readily available for examination. Similarly, BIE does not have a mechanism in place to ensure that tribally-operated schools' single audits are shared with all of the officials responsible for overseeing school expenditures. Specifically, officials from Performance and Accountability—who are tasked with overseeing schools' spending for Education programs—reported that they do not have access to schools' single audits, which include information about the schools' use of ESEA and IDEA funds. Performance and Accountability officials told us they have requested access to single audits from Administration officials in the past, but their requests were not honored because of technological challenges related to BIE's computer network. However, Indian Affairs staff said that if BIE sends them a list of individuals who should have access to the single audits, they can accommodate those requests. Performance and Accountability officials reported that access to these audits would give them a more comprehensive view of tribally-operated schools' finances.
This would help them determine whether schools are complying with Education program requirements, such as whether IDEA funds are being used to provide special education services for students with disabilities. Additionally, the officials said the audits would help them identify tribally-operated schools that have large unexpended IDEA balances from year to year and may not be using funds to provide needed services to students. To the extent that Performance and Accountability staff do not have access to needed documents, BIE's ability to hold others accountable for the use of government resources is hindered. According to internal control standards, program managers need both operational and financial data to meet their goals for accountability for effective and efficient use of resources. BIE does not have written procedures for overseeing BIE-operated and tribally-operated schools' Indian School Equalization Program (ISEP) expenditures, which is particularly significant since ISEP is the schools' largest source of funding—about $402 million in fiscal year 2014. Although BIE Administration and line office staff are responsible for overseeing these expenditures, they do not have complete or detailed procedures to guide their work. For example, they do not have a specific procedure for how and when they should conduct desk audits of schools or on-site monitoring visits. Further, the staff lack written procedures—such as a uniform list of documents and other items to review—to help guide their oversight efforts and ensure they are consistent across schools. According to federal internal control standards, management is responsible for developing the detailed policies, procedures, and practices to fit its agency's operations and for ensuring that they are built into, and an integral part of, operations. Instead of using written procedures to oversee expenditures, Administration officials told us they use a case-by-case approach and rely on the "practice history" of staff. Absent written oversight procedures, BIE is at risk of obtaining and relying on varied and incomplete information to determine whether its schools are using funds as intended. Written procedures are particularly important for BIE's oversight of tribally-operated schools since BIE plans to increase the number of such schools in coming years and the Tribally Controlled Schools Act limits the amount of information that tribes must submit to BIE. If funds are not used for their intended purpose, students in BIE schools may not receive the services they need, such as tutoring and other forms of instruction. In contrast to the lack of written procedures for overseeing ISEP expenditures, Performance and Accountability staff have developed draft procedures for overseeing schools' IDEA spending and are in the process of developing them for ESEA-funded programs (see table 2). The draft procedures for overseeing IDEA funds include detailed written instructions for conducting both fiscal desk audits of schools and on-site oversight reviews. For example, the desk audit instructions include a list of all the financial documents that staff will review, outline a detailed process for determining each school's level of risk for meeting IDEA fiscal requirements, and describe follow-up actions to be taken depending on the school's level of risk.
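To make the risk-scoring idea concrete, the sketch below shows one way a fiscal desk audit might rank schools for follow-up. It is a minimal illustration only: the risk factors, weights, thresholds, and school data are hypothetical and are not drawn from BIE's draft procedures.

```python
from dataclasses import dataclass

@dataclass
class SchoolAudit:
    name: str
    questioned_costs: float      # dollars questioned in the latest single audit
    repeat_findings: int         # findings unresolved from prior years
    missing_audit: bool          # required single audit not submitted

def risk_score(audit: SchoolAudit) -> int:
    """Toy weighting: large questioned costs, repeat findings, and a missing
    audit all push a school toward an on-site review rather than a desk review."""
    score = 0
    if audit.questioned_costs >= 1_000_000:
        score += 3
    elif audit.questioned_costs > 0:
        score += 1
    score += min(audit.repeat_findings, 3)  # cap so one factor cannot dominate
    if audit.missing_audit:
        score += 3
    return score

schools = [
    SchoolAudit("School A", 1_200_000, 2, False),
    SchoolAudit("School B", 0, 0, False),
    SchoolAudit("School C", 40_000, 1, True),
]

# With limited travel funds, visit the highest-scoring schools first.
for school in sorted(schools, key=risk_score, reverse=True):
    print(school.name, risk_score(school))
```

A scheme like this is only as good as its inputs; the report's point is that without written procedures specifying which documents to collect and how to weigh them, any such ranking would be applied inconsistently across schools.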
In addition to lacking comprehensive written procedures for oversight, BIE does not consistently document the results of monitoring, including on-site visits, according to Administration officials. This gap is particularly important given recent staff departures. For example, BIE officials reported that a line office administrator had been the only official conducting monitoring activities in fiscal years 2010 through 2012 at a school with a history of poor management of ISEP funds. When we asked BIE for documentation of the administrator's activities, we were told that he had retired and that no records were available. In addition, BIE does not have a standard form or template for its staff to document their monitoring activities. Further, when we asked about recent site visits they conducted, Administration officials provided conflicting information. In a May 1, 2014, written response to a question that we posed, Administration officials reported that they had not conducted any recent site visits. However, in an interview on May 2, they stated that a visit had occurred within the previous month. If the Administration office had a process in place to consistently document site visits, the office would have had definitive information on the recent visits it conducted. BIE also lacks a process to prioritize oversight activities based on a school's risk of misusing ISEP funds. Rather than using a risk-based approach to guide how it uses limited monitoring resources, BIE officials use an ad hoc approach. BIE officials said they typically wait for suggestions from line office staff to determine which schools to monitor on-site, and they typically visit schools in close proximity to their offices. This approach is consistent with BIE's tendency to rely on its practice history rather than standard oversight processes. While this approach may, in isolation, allow BIE to identify some schools that need additional oversight or assistance, federal internal control standards call for a more comprehensive analysis, and they state that management needs to comprehensively identify risks. A line office administrator we interviewed reported that, given his increased workload in recent years and travel budget constraints, he has been unable to visit schools for which he is responsible and whose single audits have identified serious financial weaknesses. For example, he has been unable to visit one school—which is at a significant distance from his office—whose audit found $1 million or more in questioned costs over multiple years. According to written responses to questions we posed, BIE indicated that monitoring visits to all schools have been infrequent over the past few years due to budget constraints and travel restrictions. Given these budget constraints as well as staff shortages, it is critical for BIE to target its monitoring at schools whose single audits identify them as at high risk of misusing federal funds. Internal control standards state that the scope and frequency of monitoring should depend primarily on the assessment of risks and the effectiveness of ongoing monitoring procedures. Absent a risk-based approach, federal funds may continue to be provided to schools with a history of financial weaknesses.
For example, as of July 2014, single audits of tribally-operated schools identified $13.8 million in costs that were not allowable at 24 schools, but we found minimal follow-up by BIE with the schools that had misused funds or did not adhere to program or legal requirements. Further, it appeared that BIE took little action to incentivize schools to adhere to financial and program requirements. According to Circular A-133, issued pursuant to the Single Audit Act, if an auditee, such as a tribally-operated school, has not completed corrective action, the agency awarding the auditee's grant should give it a timetable for follow-up. Further, according to Indian Affairs' policy manual, if a tribally-operated school fails to take the action necessary to resolve findings in its single audit, BIE should offer technical assistance to the school if the audit findings remain unresolved. In serious situations, BIE may also designate the school as "high risk," which subjects the school to additional monitoring and restricted payments. If BIE determines that there has been gross negligence or mismanagement in the handling or use of funds, it may initiate procedures to assume control of the school. In our review of single audits of six tribally-operated schools whose audits in fiscal years 2010 through 2012 identified poor financial management of major program funds, we found occasions where BIE took little follow-up action to address serious financial problems that auditors identified at the schools. For example:

In 2010, auditors found that one school used $1.2 million of its ISEP funds to provide a no-interest loan to a local public school district, which the auditors found is not permitted by law. In 2011, BIE asked the school to repay the funds and is still negotiating with the school to recoup them. Nevertheless, BIE could not provide us with records showing that line office staff visited the school or provided additional oversight of school expenditures or technical assistance during the past 3 years. In the meantime, a subsequent audit found that the school has not taken any steps to ensure that its funds are not misused again and that it is still at risk of commingling federal funds with local school district funds. However, BIE has continued to provide the school its full ISEP funding without additional monitoring or restrictions on those payments.

At another school that received an adverse audit opinion 3 years in a row (fiscal years 2010 to 2012), auditors found that the school's financial statements had to be materially adjusted by as much as nearly $1.9 million. As a result, they found that the financial reports that the school submitted to BIE for those 3 years were unreliable. Despite these problems, BIE was unable to provide any evidence that it took action to increase oversight of the school's expenditures to ensure that it could accurately account for the federal funds it spends.

More recent examples of financial issues at BIE schools include the following:

In February 2014, during our review of the Federal Audit Clearinghouse database, we found that one school had not submitted its required single audits since fiscal year 2010. Officials with BIE and the Office of Internal Evaluation and Assessment were not aware of this and acknowledged that it was an oversight on their part when we brought it to their attention. According to Indian Affairs, the Office of Internal Evaluation and Assessment subsequently updated this information in its system.
However, despite a legal requirement for tribally-operated schools to submit annual single audits, BIE has not imposed sanctions on this school.

A March 2014 single audit found that a tribally-operated school lost $1.7 million in federal funds that were illegally transferred to an off-shore bank account. According to the school's audit, about $500,000 of these funds were subsequently returned to the school's account, making the net loss around $1.2 million. Interior reported in October 2014 that this incident was "…a result of cybercrimes committed by computer hackers and/or other causes" and that the school is working with tribal authorities to investigate the incident. Nevertheless, the school's single audit stated that the school's inadequate cash management and risk assessment procedures contributed to the incident and that the school must strengthen these procedures. Additionally, a school administrator reported that the school held at least another $6 million in federal funds in a U.S. bank account. As of June 2014, BIE had not yet determined how the tribe accrued that much in unspent federal funds.

Several BIE officials told us that some tribes operating schools have a history of placing Interior and Education funds in savings accounts rather than using them to provide educational services as intended. One official cited an instance of a school accumulating over $900,000 in unspent IDEA funds that were intended to be used to provide special education services to students with disabilities. According to a June 2014 study commissioned by the Secretaries of the Interior and Education, the Tribally Controlled Schools Act provides an incentive for schools not to spend funds they receive from Interior and Education because schools are permitted to retain certain unspent federal funds, and they can also place any current or unspent funds in interest-bearing accounts before they are spent. Based on certain audit information that was available, the study found that approximately 80 tribally-operated schools have retained a total of about $125 million in unspent funds that have accumulated over time. According to the study, BIE has contributed to this problem by not implementing policies that encourage schools to fully utilize their funding and discourage them from planning to have unspent funds. The study stated that BIE and Education should provide tribes with technical assistance and practical guidance about the activities for which federal funds are allowed to be spent under current laws. According to Indian Affairs, it is working with Education, which is currently developing this type of guidance. Without a risk-based approach to monitoring and written procedures for overseeing school expenditures, the amount of misused and unspent funds at BIE schools is likely to grow.

BIE's mission is to provide the students attending its schools a quality education. Most of these students are low-income, many have special needs, and their performance and graduation rates are below those of public school students nationwide. These students are in need of instructional, supplemental, and special education services. To ensure that students have access to these services, it is imperative that BIE accurately track and oversee school expenditures to make sure that federal funds are used for their intended purposes. However, BIE staff responsible for oversight currently face multiple challenges, and their monitoring of expenditures is weak.
High staff turnover and reductions in the number of education line office administrators, as well as their lack of expertise and training, have left them struggling to adequately monitor school expenses. In addition, the limited information that BIE officials have about tribally-operated school expenditures is not shared with the BIE staff responsible for overseeing schools' use of Education funds. Further, BIE does not utilize tools, such as written procedures and a risk-based approach to monitoring school expenditures, that would help improve the effectiveness of its oversight. Within this environment, BIE is not well positioned to oversee schools that may be at high risk for misusing federal funds. Strong oversight of school expenditures is especially critical since the number of tribally-operated schools is likely to increase over the next several years and limited expenditure information is available from these schools. To this end, BIE's current restructuring efforts provide officials with a unique opportunity to develop a workforce plan and adopt processes and procedures that create an improved control environment. These actions will help ensure that students receive the quality education and services they deserve.

We recommend that the Secretary of the Interior direct the Assistant Secretary-Indian Affairs to take the following four actions:

Develop a comprehensive workforce plan to ensure that BIE has an adequate number of staff with the requisite knowledge and skills to effectively oversee BIE school expenditures.

Develop a process to share relevant information, such as single audit reports, with all BIE staff responsible for overseeing school expenditures to ensure they have the necessary information to identify schools at risk for misusing funds.

Develop written procedures for BIE to oversee expenditures for major programs, including Interior's Indian School Equalization Program. These procedures should include requirements for staff to consistently document their monitoring activities and the actions they have taken to resolve financial weaknesses identified at schools.

Develop a risk-based approach to overseeing BIE school expenditures to focus BIE's monitoring activities on schools that auditors have found to be at the greatest risk of misusing federal funds.

We provided a draft copy of this report to the Departments of the Interior and Education for review and comment. Education chose not to provide comments. Interior's comments are reproduced in appendix II. Interior also provided technical comments that we incorporated in the report as appropriate. In its overall comments, Interior stated that our report's findings and recommendations will be beneficial as the Department moves forward with facilitating improvements in BIE schools and improving the oversight of school spending. For example, Interior noted that strong financial stewardship is essential to achieving its goal to transform BIE into an organization that provides direction and services to tribes to help them attain high levels of student achievement. Interior concurred with our recommendation to develop a workforce plan to ensure that BIE has an adequate number of staff to effectively oversee BIE school expenditures. Interior said that as part of BIE's restructuring, it plans to increase the number of staff whose main responsibility is to oversee school expenditures.
Additionally, the Department stated that new training will be developed for current BIE staff, including line office administrators, mainly in areas related to budget and finance issues. While these are positive steps, we believe it is important for Interior to monitor the manner in which BIE implements these plans to ensure it has staff with the skills needed to provide appropriate oversight of school expenditures. Interior also concurred with our recommendation to develop a process to share relevant information with all BIE staff responsible for overseeing school expenditures. According to Interior, BIE has implemented a process to ensure that relevant information is made available via email to all cognizant officials, including information on schools' questioned costs and cash deficits. The Department also stated that BIE will convene monthly meetings with high-level officials to discuss these issues. While we believe that these are all positive actions, it is important for Interior to ensure that BIE periodically reviews its approach to disseminating information to make sure that its practices are effective. Interior partially concurred with our recommendation to develop written procedures for BIE to oversee expenditures for its major programs. Interior stated that, in addition to draft Title I and IDEA oversight procedures, BIE also has written procedures in place related to the Indian School Equalization Program (ISEP). Specifically, Interior said that BIE's newly established School Operations Division has a standard review process in place, referred to as the Education Management Assessment Tool, that is currently being used. We reviewed this document and found that its main focus is on reviewing Education Line Offices' operations, management, and program functions. However, none of the BIE officials we interviewed were aware of the document. While oversight of line offices' operations is important, the specific focus of our recommendation is on BIE's oversight of school expenditures, including for ISEP, its largest program. As we noted in our report, although BIE Administration and line office staff are responsible for overseeing these expenditures, they do not have complete or detailed procedures to guide their work. For example, they do not have a schedule for conducting audits, a uniform list of documents to review, or procedures for staff to document the actions they have taken to resolve financial weaknesses identified at schools. We believe that rigorous oversight of school expenditures is critical to ensure that schools are spending federal funds for their intended purposes. Lastly, Interior concurred with our recommendation on the need to develop a risk-based approach to overseeing BIE school expenditures. To that end, Interior said that BIE, once its restructuring has been fully implemented, will improve coordination with the Office of Internal Evaluation and Assessment on matters regarding financial accountability. In addition, Interior stated that BIE will assign appropriate grant monitoring protocols to schools and tribal grantees based on questioned costs and other risk factors identified in their single audits. The Department further stated that it will use single audits to designate certain schools and tribes as high risk, along with those that are unresponsive to repeated requests for corrective action plans. While we believe that these are all positive steps, it is critical that Interior ensure that BIE follow through to effectively implement these changes.
We are sending copies of this report to relevant congressional committees, the Secretaries of the Interior and Education, and other interested parties. In addition, this report will be available at no charge on GAO's website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (617) 788-0534 or emreyarrasm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. Our objectives were to assess (1) how sources of funding for Bureau of Indian Education (BIE) schools compare to those of public schools; (2) how BIE school expenditures compare to those of public schools; (3) the extent to which BIE has the staff and expertise needed to oversee school expenditures; and (4) the extent to which BIE's processes are adequate for ensuring that school funds are spent appropriately. To address these objectives, we used multiple methodologies. We reviewed relevant federal laws and regulations. Additionally, we reviewed agency documents from BIE, the Office of the Assistant Secretary-Indian Affairs (Indian Affairs), and the Department of Education (Education), including budget justifications, published reports, annual financial audit reports, and other documents. We also interviewed officials at the Department of the Interior (Interior) within Indian Affairs, which includes BIE, as well as officials at Education. A key component of our methodology was a quantitative analysis of data from various federal sources. For sources of funding, we reviewed budget justifications as well as other financial information. For expenditures, we analyzed agency data systems and followed a methodology similar to the approach in our 2003 report on this topic. Further, we conducted site visits to select BIE schools and nearby public school districts. For BIE oversight of school expenditures, we additionally analyzed data from a federal database on the results of annual financial audits. Our methodology is described below in further detail. We conducted this performance audit from January 2014 through November 2014 in accordance with generally accepted government auditing standards. These standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. To address how sources of funding for BIE schools compare to those of public schools, we analyzed the federal funding sources of BIE schools as well as the various sources of funding for public schools. For BIE schools, we reviewed budget justifications of Indian Affairs—which includes BIE—and related information. For public schools, we reviewed data from Education's National Center for Education Statistics, which maintains the Common Core of Data, including the School District Finance Survey's final F-33 data. This survey collects financial data from all school districts in the 50 states and the District of Columbia. School districts provide information on funding, expenditures, and enrollment for a particular school year.
The data that we analyzed generally applied to school year 2009-10, consistent with our analysis of expenditures, for which the data were from the most recent year available at the time of our analysis. According to Education documents, Education conducted several tests of the data and performed follow-up to ensure that data were complete and accurate. In addition, we performed tests and removed a few additional districts from our analysis to increase the reliability of the data. Thus, we reviewed Education's data through (1) electronic data tests; (2) a review of related documents about the data and the systems; and (3) interviews with knowledgeable agency officials, and we found that the data were sufficiently reliable for the purposes of this report. We also interviewed officials from BIE, Indian Affairs, and Education. To examine how BIE school expenditures compare to those of public schools, we analyzed expenditures of BIE schools and those of public schools. We began by assessing the reliability of three databases: Interior's Federal Financial System was the source of expenditure data for BIE schools; BIE's Native American Student Information System provided enrollment data for BIE schools; and Education's Common Core of Data, School District Finance Survey's final F-33 data was the source of data on expenditures and enrollment for public school districts. We focused on school year 2009-10, since it was the most recent year for which data were available from Education at the time of our analysis. We assessed the reliability of data for this school year from these systems in several ways, including (1) electronic data tests, (2) reviews of related documentation about the data and the systems, and (3) interviews with knowledgeable agency officials. Based on our assessment, we determined that the relevant data were sufficiently reliable for our purposes. BIE and the Department of Defense (DOD) are the only two federal entities that directly oversee the management and operation of elementary and secondary schools. For this review, we were unable to compare expenditures at DOD-operated schools with those of BIE or public schools because DOD's financial system containing school year 2009-10 data was not sufficiently reliable for our purpose of comparing amounts across departments. Specifically, DOD's Standard Accounting and Reporting System-Field Level data system contains expenditure data for DOD schools that we determined were not sufficiently reliable for our purposes, based on reviews of related documentation about the data and the system, as well as interviews with knowledgeable agency officials. To compare expenditures at BIE and public schools, we developed estimates of per-pupil expenditures at the national level. Although Interior's Federal Financial System does not use categories that directly align with Education's Common Core of Data, we were able to compare broad categories based on reviews of documentation, professional judgment, and interviews with agency officials. We focused our analysis on operating expenditures of elementary and secondary schools in a typical year. We classified BIE expenditure data and Education expenditure data into four broad categories: (1) instruction, (2) transportation, (3) facilities operation and maintenance, and (4) administration. Then, we derived per-pupil expenditures by dividing expenditure amounts by student enrollment.
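To illustrate the mechanics of this derivation, the short sketch below maps hypothetical ledger lines into the four broad categories and divides the category totals by enrollment. The ledger lines, amounts, and enrollment figure are invented for illustration and do not represent actual BIE or Education data.

```python
# Hypothetical ledger lines mapped into the four broad categories used in this report.
CATEGORY_MAP = {
    "teacher salaries": "instruction",
    "bus operations": "transportation",
    "custodial services": "facilities operation and maintenance",
    "principal's office": "administration",
}

def per_pupil(expenditures: dict[str, float], enrollment: float) -> dict[str, float]:
    """Sum each ledger line into its broad category, then divide by enrollment.
    For BIE schools, enrollment is an average daily membership over the year;
    for public schools, it is a fall head count."""
    totals: dict[str, float] = {}
    for line, amount in expenditures.items():
        category = CATEGORY_MAP[line]
        totals[category] = totals.get(category, 0.0) + amount
    return {category: amount / enrollment for category, amount in totals.items()}

print(per_pupil(
    {
        "teacher salaries": 2_500_000,
        "bus operations": 300_000,
        "custodial services": 450_000,
        "principal's office": 320_000,
    },
    enrollment=210,
))
```

The substance of the comparison lies in the category mapping: because Interior's and Education's systems classify spending differently, each ledger line must be assigned to a common category before the per-pupil division is meaningful.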
Due to our focus on operating expenditures, we generally excluded certain amounts, such as expenditures for food service, capital (i.e., construction), and debt service. Student enrollment is measured differently by BIE and Education. BIE measures enrollment as average daily membership, or an average over the course of the year. We derived student enrollment by adding the average daily membership for students in school year 2009-10 who are eligible Indian students under the Indian School Equalization Program (ISEP) and those who are not eligible Indian students but still attend BIE schools, such as children of BIE staff. Because we used student enrollment in one particular year and included students who are not eligible under ISEP but still attend BIE schools, our enrollment data may differ somewhat from published data from Interior. Meanwhile, Education's Common Core of Data measures student enrollment by taking a head count at one point in time in the fall. Consistent with the methodology of our 2003 report, we generally analyzed expenditures of BIE-operated day schools. Thus, we analyzed day schools and excluded boarding schools, which have a boarding as well as an academic component. We limited our analysis to the 32 day schools that BIE operated in school year 2009-10; BIE-operated day schools represent about 17 percent of the 185 BIE schools and dormitories. As with our 2003 report and Education's publication on its School District Finance Survey for school year 2009-10, we did not adjust expenditures for geographic variation. We excluded schools that were tribally-operated because BIE has limited expenditure data for tribally-operated schools. In our 2003 report, we recommended, among other things, that the Secretary of the Interior should consider entering into negotiations with tribal entities to acquire detailed expenditure data for the schools they manage for comparison with public schools. However, Indian Affairs, in consultation with the school boards of tribally-operated schools, concluded that this recommendation would not be feasible with the school boards' existing systems. BIE headquarters officials noted that current law limits the type or format of information that the agency can require from tribes, including data on expenditures. For example, one statutory provision requires a tribal grantee to submit an annual financial statement reporting revenue and expenditures and an annual financial audit, but the grantee has discretion over the specifications of the statement's cost accounting. Additionally, we reviewed seven reports of Education data on the characteristics of public and BIE schools. These characteristics helped to explain differences in per-pupil expenditures between public and BIE schools. We generally reviewed the most recent version available of these studies. Estimates obtained from reports that used sample data collected through random probability samples are subject to sampling error. We present estimates along with 95 percent confidence intervals to show the precision of those estimates. Further, we interviewed officials at BIE, Indian Affairs, and Education, and reviewed agency documents, including guidance, internal correspondence, and agency-sponsored management studies. We supplemented our analysis of nationwide data with site visits to BIE schools and nearby public school districts. The information from these site visits is not generalizable to all BIE schools or public school districts nationwide.
For this and our prior report on BIE management challenges, we visited 16 BIE schools—6 BIE-operated and 10 tribally-operated—as well as 4 nearby public school districts. We conducted site visits to BIE schools that serve the Oglala Sioux Tribe in Pine Ridge, South Dakota; the Mississippi Band of Choctaw Indians; the Navajo Nation in Arizona, New Mexico, and Utah; and various Pueblo Indians in New Mexico, where we interviewed school administrators and observed school conditions. We also requested expenditure data from tribally-operated schools because BIE has limited expenditure data from schools operated by tribes. Further, we interviewed administrators at public school districts in close proximity to the BIE schools we visited. We selected locations to visit to reflect an array of BIE schools that varied in administration type, school and tribal size, and location. For the nearby public school districts, we selected districts that were in close proximity and also had similar student demographics in terms of the percentage of students who were Indian. Our expenditure analysis of schools we visited was limited to four BIE schools—two BIE-operated and two tribally-operated—along with two public school districts, all of which were located in New Mexico and South Dakota. We compared expenditure data for the schools we visited with the amount of funding they received to assess the reliability of the expenditure data. We determined that the data for these four BIE schools and two public school districts were sufficiently reliable. We excluded data from the remaining 12 BIE schools and 2 public school districts we visited because we did not receive any expenditure data or were not able to obtain reliable expenditure data for both the BIE schools and the nearby public school districts. In addition, we excluded expenditure data for some BIE schools we visited because the schools were boarding schools and, therefore, their expenditures would not be comparable to those of nearby public schools. To determine the extent to which BIE has the staff and expertise needed to oversee school expenditures, we analyzed BIE data on the number and location of staff currently responsible for overseeing these expenditures, as compared to the number of staff in prior years, and the location of schools for which they are responsible. We also reviewed BIE data on the types of positions that are currently vacant and interviewed cognizant BIE officials. In addition, we reviewed Indian Affairs records on the single audit training provided since 2012 to line office administrators and staff. We also interviewed officials at BIE and Indian Affairs who are responsible for overseeing school expenditures about their skill sets and the types of training they have received on overseeing federal grants. To determine the extent to which BIE's processes are adequate for ensuring that school funds are spent appropriately, we reviewed available oversight tools, including protocols for desk and on-site audits and tracking data. In addition, we reviewed the results of all tribally-operated schools' single audits for a 3-year period (fiscal year 2010 to fiscal year 2012) in the Federal Audit Clearinghouse. For a more detailed analysis of single audit findings, we selected for further review the single audits of six recipients of Indian School Equalization Program (ISEP) funding, the largest funding source of BIE schools. We selected these recipients based on audit opinions of their compliance with program requirements.
Specifically, we examined all the single audits in which the auditor found that the school's records for ISEP compliance either contained significant departures from generally accepted auditing standards or were in a condition that made it impossible for them to be assessed in accordance with generally accepted auditing standards. We also examined the single audits of select tribally-operated schools based on information from BIE officials. Although our non-generalizable sample focused on schools with poor financial management, these schools may have needed the greatest oversight and technical assistance from BIE to correct the audit findings and improve their financial management. We assessed the reliability of the Federal Audit Clearinghouse data on single audits by (1) reviewing existing information about the data and the systems that produced them and (2) obtaining information from U.S. Census Bureau officials knowledgeable about the database. We determined that these data were sufficiently reliable for the purposes of this report. We also obtained documents about other select BIE schools. Additionally, we interviewed officials in Education and in Indian Affairs' Office of Internal Evaluation and Assessment, as well as in BIE's Division of Performance and Accountability, Office of Administration, and education line offices. In addition to the contact named above, Elizabeth Sirois (Assistant Director), Ramona L. Burton (Analyst-in-Charge), Lucas M. Alvarez, Kathleen M. Peyman, and Matthew Saradjian made key contributions to this report. In addition, key support was provided by Hiwotte Amare, James E. Bennett, Deborah Bland, Holly A. Dye, Alexander G. Galuten, LaToya Jeanita King, Ashley L. McCall, Sheila McCoy, Kimberly A. McGatlin, Jean L. McSween, Deborah A. Signer, and John S. Townes.
The Bureau of Indian Education (BIE), within Interior's Office of the Assistant Secretary-Indian Affairs, oversees 185 schools, serving about 41,000 students on or near Indian reservations. BIE's mission is to provide students with a quality education. However, BIE student performance has been consistently below that of public school students, including other Indian students. Given these challenges, GAO was asked to review BIE school funding and expenditures. This report examines how funding sources and expenditures of BIE schools compare to those of public schools; the extent to which BIE has the staff and expertise needed to oversee school expenditures; and the extent to which BIE's processes for oversight adequately ensure that school funds are spent appropriately. GAO reviewed relevant federal laws, regulations, and agency documents, and analyzed BIE and public school expenditure data for school year 2009-10, the most recent year for which data were available. GAO also visited select BIE schools and nearby public schools in four states, selected based on location, school and tribal size, and other factors. Unlike public schools, BIE schools receive almost all of their funding from federal sources. BIE directly operates about a third of its schools, and tribes operate two-thirds. According to BIE data, all of the BIE schools received a total of about $830 million in fiscal year 2014: about 75 percent from the Department of the Interior (Interior), 24 percent from the Department of Education (Education), and 1 percent from the Department of Agriculture and other agencies. Public schools nationwide receive about 9 percent of their funding from federal sources and rely mostly on state and local funding. GAO found that some BIE schools spend substantially more per pupil than public schools nationwide. Specifically, GAO estimates that average per-pupil expenditures for BIE-operated schools—the only BIE schools for which detailed expenditure data are available—were about 56 percent higher than for public schools nationally in school year 2009-10, the most recent year for which data were available at the time of GAO's review. Several factors may help explain the higher per-pupil expenditures at BIE-operated schools, such as their student demographics, remote locations, and small enrollments. BIE lacks sufficient staff with the expertise to oversee school expenditures. Since 2011, the number of BIE full-time administrators located on or near Indian reservations to oversee school expenditures decreased from 22 to 13, due partly to budget cuts. As a result, the 13 administrators have many additional responsibilities and an increased workload, making it challenging for them to provide effective oversight of schools. Additionally, these administrators have received less training in recent years. Further, the three administrators GAO spoke with said they do not have the expertise to fully understand the school audits they are responsible for reviewing. BIE's staffing of these positions runs counter to federal internal control standards and key principles for effective strategic workforce planning, such as having sufficient, adequately trained staff. Without adequate staff and training, BIE will not be able to ensure that school funds are spent appropriately. BIE's processes for oversight do not adequately ensure that funds are spent appropriately. BIE lacks written procedures for how and when staff should monitor school spending and does not use a risk-based approach to prioritize how it should use its limited resources for oversight.
Instead, BIE told GAO that it relies primarily on ad hoc suggestions from staff regarding which schools to oversee more closely. Meanwhile, some schools have serious financial problems. Notably, external auditors identified $13.8 million in unallowable spending at 24 schools as of July 2014. Further, in March 2014, an audit found that one school lost about $1.2 million in federal funds that were improperly transferred to an off-shore account. Without written procedures and a risk-based approach to overseeing school spending—both integral to federal internal control standards—there is little assurance that federal funds are being used for their intended purpose of providing BIE students with needed instructional and other educational services. Among other things, GAO recommends that Indian Affairs develop a workforce plan to ensure that BIE has the staff to effectively oversee school spending, as well as written procedures and a risk-based approach to guide BIE's oversight of school spending. Indian Affairs generally agreed with GAO's recommendations.
The close integration and coordination of ground combat forces and bombing operations are essential to the exercise of lethal combat power on the modern battlefield. As depicted in figure 1, military doctrine describes targeting in terms of a cyclical process composed of six basic phases. During this process, the joint force commander identifies the objectives for military operations in support of the national objectives for the conflict and any key limitations on operations—such as procedures for limiting civilian collateral damage. The commander's guidance then drives the subsequent phases of the targeting cycle, which include identifying and analyzing potential targets and the resources available to attack them, obtaining formal permission for the strike, executing the strike, and then assessing strike effectiveness and any need to reattack. The success of this process is highly dependent on the speed and quality of interaction among the people and systems conducting the various activities at each phase. Trained ground control personnel must interact quickly and covertly with manned and unmanned aircraft, electronic sensors and space-based satellite imagery systems, or other intelligence, surveillance, and reconnaissance mechanisms to spot the target and accurately mark its location. Accuracy depends upon the ability of the ground personnel to locate themselves, the target, and any friendly forces nearby and to accurately judge the distance between each. These elements must be able to communicate the targeting information to command and control centers that coordinate the actions of a variety of analysts and others who assess the situation, plan the strike, communicate the information back to the ground personnel, and analyze the effectiveness of the attack. DOD is working to improve the interaction of these elements by using network-centric operating concepts. The term "network-centric" is used to describe a broad class of approaches to military operations that are enabled by networking the force. DOD's approach involves developing the sensors and other technologies to provide pervasive oversight of the battlefield and then linking them to all elements of the war-fighting force through communications and other technologies. This allows the various elements of the force to develop a shared situation awareness, a shared knowledge and understanding of commanders' intent, and the ability to rapidly process and analyze information. The belief is that these capabilities will increase combat power through better synchronization of weapons effects in the battle space and greater speed in command decision making. This strategic change is being accompanied by an array of changes to doctrine, tactics, organization, and training to integrate the network-centric concept into DOD's culture. Advances in networking the force are being complemented by advances in precision weapons. Precision-guided weapons provide precise control of bombs through the use of electrical equipment that helps guide the weapon in flight. These capabilities provide an advantage in accuracy over conventional weapons, which do not have the ability to adjust their trajectory while in flight. The transition from unguided to guided weapons has accelerated rapidly since Operation Desert Storm in 1991, when unguided weapons were the norm. For example, as shown in figure 2, only about 8 percent of the weapons used during Operation Desert Storm were guided, while this figure increased to about 68 percent in Operation Iraqi Freedom.
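As a schematic recap of the targeting cycle just described, the sketch below models the six phases as a simple loop that repeats when assessment calls for a reattack. The phase labels paraphrase this report's description of the cycle and are illustrative, not official doctrinal names.

```python
from enum import Enum, auto

class TargetingPhase(Enum):
    # Labels paraphrase the report's description; they are not doctrinal terms.
    COMMANDERS_GUIDANCE = auto()    # objectives and key limitations (e.g., collateral damage rules)
    TARGET_DEVELOPMENT = auto()     # identify and analyze potential targets
    CAPABILITIES_ANALYSIS = auto()  # identify the resources available to attack the targets
    STRIKE_APPROVAL = auto()        # obtain formal permission for the strike
    EXECUTION = auto()              # execute the strike
    ASSESSMENT = auto()             # assess strike effectiveness and any need to reattack

def targeting_cycle(reattack_needed) -> None:
    """Run the cycle once, then repeat from the top while assessment
    indicates that a reattack is required."""
    while True:
        for phase in TargetingPhase:  # enum members iterate in definition order
            print(phase.name)
        if not reattack_needed():
            break

# Hypothetical example: one pass through the cycle with no reattack required.
targeting_cycle(reattack_needed=lambda: False)
```

The point the report goes on to make is that the cycle's value lies less in its structure than in how quickly and reliably information moves between the people and systems at each phase.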
Operations in Kosovo, Afghanistan, and Iraq provided a variety of conditions for the development of these network-centric approaches. For example, operations in Kosovo were conducted primarily by air over rugged and undeveloped mountainous terrain. There were no direct attacks by large massed ground forces, and the cover of forests and villages allowed enemy forces to easily conceal their location. Similarly, Afghanistan's rugged and mountainous terrain and large number of caves and bunkers also provided numerous opportunities to conceal Taliban and al Qaeda forces. Light infantry and special operations forces were the primary U.S. forces on the ground, with aircraft as their sole means of fire support. In contrast, the terrain in Iraq is characterized by mostly broad plains, with mountainous regions along the borders and a largely desert climate posing threats from dust and sand storms. Initial operations pitted large massed forces against one another in more traditional ways of fighting. However, the conduct of U.S. operations also relied heavily on small, dispersed groups of special operations forces operating on battlefields with no clear front and rear lines, as enemy forces blended in and out of urban populations. With the exception of Kosovo, these conflicts were also characterized largely by pronounced U.S. air superiority, with little threat from enemy air defenses. During Operation Enduring Freedom in Afghanistan, enemy air defenses were so limited that U.S. forces were able to win near total air supremacy early in the war. Similarly, air superiority was not a concern during Operation Iraqi Freedom. Prior to the conflict, military forces had been working to set the conditions for air dominance through more than 3 years of bombing. During Operation Allied Force in Kosovo, however, there were significant concerns about enemy air defense systems, causing bombing operations to be carried out at high altitudes to avoid the threat. Moreover, access to overseas bases was problematic in all three of these operations, straining logistical support systems and complicating military operations. For example, the lack of forward air basing infrastructure within effective fighter range of landlocked Afghanistan required U.S. forces to rely primarily on carrier-based aircraft to provide strike power during the operations. These operations were also conducted in an environment of pronounced concern about limiting collateral damage to civilian populations and infrastructure. Adversaries attempted to exploit collateral damage in an effort to gain public sympathy for their cause and cast a negative light on U.S. operations. U.S. forces adjusted the target selection and approval process to minimize collateral damage, calling on senior leaders to approve target selection in some cases. However, attempts to minimize collateral damage can also create tension with military objectives and complicate bombing operations. DOD officials cite improvements in networking the force and in the use of precision weapons as primary reasons for the overwhelming combat power demonstrated in recent operations. Network-centric operating concepts, particularly in surveillance and command and control systems, have created unprecedented battlefield situation awareness for commanders and their forces, yet the full extent to which operations have been affected is unclear.
Technologies enhancing the use of precision-guided weapons have also provided military commanders with increased flexibility and accuracy in bombing operations. Network-centric operating concepts have improved battlefield situation awareness for commanders and their forces. DOD has indicated that technological improvements in information-gathering systems allow commanders an unprecedented view of the battlefield. Such improvements provide for greater shared situation awareness, which, in turn, speeds command and control. However, while it appears that enhanced networking has speeded operations, the full impact on operations is unclear because of the absence of detailed measures of their effects. DOD officials and reports cite a variety of technological and other improvements in intelligence, surveillance, and reconnaissance mechanisms as basic to the unprecedented ability of commanders and forces to observe and monitor the battlefield. For example, surveillance aircraft orbiting the battlefield—such as the E-3 Sentry airborne warning and control system (for detecting enemy air and naval activities and directing friendly fighters), the RC-135 and EP-3 aircraft (for locating enemy radar and other electronic emissions), the E-8C Joint Surveillance Target Attack Radar System (for detecting enemy ground activity), and the U-2 (for high-altitude, wide-area surveillance)—have been outfitted with smaller, lower cost, and higher quality sensors and radars, improving their ability to detect the enemy and provide high-resolution imagery of the battlefield. Another key advance is the development of unmanned aerial vehicles, such as the Predator and the Global Hawk, used extensively in Afghanistan and Iraq. These aircraft carry cameras, sensors, or even weapons and are used to constantly circle over the battlefield and provide continuous live surveillance of the enemy without risk to human pilots. The Predator is remotely piloted by operators on the ground, while the Global Hawk is self-piloted, controlled by a preprogrammed onboard computer that directs the aircraft from takeoff to landing. These systems interact with ground personnel, such as special operations forces or specially trained combat controllers, to locate and precisely mark targets and assess bombing results. Technological advances now enable these controllers to identify a target and determine its precise location by using laser designators, which may be connected to a hand-held Global Positioning System receiver. Reports have cited the use of these technologies interacting with aircraft flying at high altitudes to avoid enemy air defenses, combined with new tactics for integrating special operations forces with conventional units, as a breakthrough capability. During Operation Enduring Freedom in Afghanistan, special forces teams used these technologies linked to piloted aircraft or unmanned Predator drones—providing live battlefield video directly to nearby AC-130 gunships—to attack small groups of al Qaeda and Taliban fighters and other fleeting targets. The Joint Forces Command report on missions conducted during Operation Iraqi Freedom also cited the capabilities provided by these advances. DOD officials indicate that the improved ability to share a broad view of the battlefield and communicate quickly with all elements of the force has compressed the time required for analysis and decision making in bombing operations, increasing lethality significantly.
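To make the targeting geometry concrete: a controller's own GPS fix, combined with the range and bearing returned by a laser rangefinder/designator, is enough to estimate a target's coordinates. The sketch below is a minimal flat-earth approximation of that calculation; the function, its simplifications (short ranges, no elevation difference between observer and target), and the example numbers are illustrative assumptions, not an actual fielded algorithm.

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius, in meters

def target_position(obs_lat_deg: float, obs_lon_deg: float,
                    range_m: float, azimuth_deg: float) -> tuple[float, float]:
    """Estimate a target's latitude/longitude from the observer's GPS fix plus
    the range and bearing measured by a laser rangefinder/designator.
    Flat-earth approximation: reasonable at short ranges; ignores any
    elevation difference between observer and target."""
    az = math.radians(azimuth_deg)
    north_m = range_m * math.cos(az)  # displacement north of the observer
    east_m = range_m * math.sin(az)   # displacement east of the observer

    d_lat = math.degrees(north_m / EARTH_RADIUS_M)
    d_lon = math.degrees(east_m / (EARTH_RADIUS_M * math.cos(math.radians(obs_lat_deg))))
    return obs_lat_deg + d_lat, obs_lon_deg + d_lon

# Hypothetical example: an observer at 34.5000 N, 69.2000 E lases a target
# 2,400 meters away on a bearing of 045 degrees.
print(target_position(34.5000, 69.2000, 2400, 45.0))
```

Small errors in the observer's own fix or in the measured bearing translate directly into errors in the derived coordinates, which is why the report stresses the ground personnel's ability to locate themselves, the target, and nearby friendly forces accurately.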
Before an actual strike may begin, information on potential targets generally must be routed through command and control centers where the target information is analyzed; information is exchanged between a myriad of commanders, analysts, and other elements of the force; and final approval for the strike is granted. The ability to network these elements and rapidly exchange information during this process—central to combat effectiveness—is enabled by improvements in computing power, digital communications, and satellite data links in recent years. For example, increases in computing power have enabled the networking of computers from a multitude of personnel and locations, with near instantaneous exchange of information through techniques such as file sharing, video conferencing, and e-mailing. These capabilities are enhanced by digital communications, which can be faster and more accurate than voice communication. For example, digital systems allow a ground controller to input the coordinates and other information needed for an attack into a computer and transmit this information instantly to computers on board an aircraft or at command and control centers. The ability to rapidly exchange information generated by these networks has some limitations. For example, the Defense Science Board recently reported that despite the successes in Afghanistan, there were difficulties in passing coordinates from ground personnel to aircraft overhead due to the unreliability and limited range of secure communications and the absence of digital communications systems. As a result, instead of instantaneously transmitting targeting information across digital systems, ground controllers were required to pass Global Positioning System coordinates by voice radio to aircrews. Aircrews then had to write the coordinates on boards held on their knees, and then read them back for confirmation. Once confirmed, aircrews needed to load the coordinates by hand into the weapons, a process requiring as many as 51 computer keystrokes and subject to error. The ability to rapidly exchange information generated by these networks is also dependent upon satellite data links and the availability of bandwidth. Bandwidth is a term used to describe the rate at which information moves from one electronic device to another—usually expressed in terms of bits per second—over phone lines, fiber optic cable, or wireless telecommunications systems. Increases in this capacity have enabled the rapid exchange of large visual and data files, giving commanders access to more real-time surveillance, intelligence, and targeting information than in previous conflicts. For example, according to the Joint Forces Command, U.S. forces in Iraq had access to 42 times the bandwidth available in Desert Storm. However, despite this improvement, the Army and others have experienced continuing shortages in the availability of bandwidth. Despite some limitations, technological advances have also made it possible to manage conflicts from command centers located far away from the battlefield, using so-called reach back techniques, where some commanders, analysts, and other support personnel remain at home stations and communicate with commanders at the battlefield using the networks described above. For example, during Operation Allied Force in Kosovo the center used to direct air operations was located in Vicenza, Italy.
Images from Predator aircraft located over the battlefield in Kosovo were transmitted by satellite communications to a ground station in England, then by fiber optic cable to a facility in the United States for analysis. The information was then transmitted to the District of Columbia area, where it was up-linked to a satellite and transmitted back to an airborne command and control aircraft in Kosovo. The information was then provided to controllers, who passed it to aircraft poised to strike the targets (see fig. 3). The reach back technique not only provides for more centralized control of operations but also provides the opportunity for savings in logistical support requirements. For example, in previous conflicts, command centers—composed of perhaps 1,500-2,000 commanders, analysts, and others, and the equipment needed to do their jobs—had to be transported into the war zone. This requirement created major demands on transportation and other support elements during the early phases of an operation and reduced the air and sealift available to move soldiers and supplies. Now, networking permits commanders at the battlefield to reach back to analysts and other staff located thousands of miles away for guidance and support. During operations in Afghanistan and Iraq, the joint forces commander remained at U.S. Central Command headquarters in Tampa, Florida, while air operations were directed from centers in Saudi Arabia and Qatar. Electronic map displays at these locations provided near continuous tracking of ground, air, and naval units, with Predator drones and other aircraft feeding live video imagery from the battlefield. While it seems clear that networking has speeded operations, the full impact on operations is unclear because of the absence of detailed measures of their effects. For example, U.S. Central Command officials told us that while the targeting process was slowed by requirements for additional command approvals for some targets, they believed that overall, the targeting process was more efficient during Operation Iraqi Freedom than in previous conflicts. However, statistics were not maintained by the Central Command to measure this improvement. Several experiments and exercises provide some information on this issue. For example, according to a recent DOD report to Congress, an Army exercise in 1997 using computer simulation to determine the war-fighting effectiveness of a digitized division-sized force found that the time required to process calls for fire was reduced from 3 minutes to 30 seconds and that the planning time for attacks at the company level was cut from 40 to 20 minutes. Similarly, a 1998 experiment involving networked Army helicopter units and a range of Navy and Marine units to counter a simulated attack by North Korean special operations boats found that the average decision time was reduced from 43 to 23 minutes and that shooter effectiveness measured in kills per shot was increased by 50 percent. DOD also reported that a special Air Force project in the mid-1990s found that F-15C fighter aircraft networked with digital communication packages increased their success rate in air-to-air combat exercises by more than 150 percent over aircraft equipped with voice-only communications. The increase was attributed to the benefits of shared situation awareness provided by the digital networks.
According to DOD’s report, pilots with voice-only communications can only see enemy aircraft in the radar zone directly in front of their aircraft, and they cannot see supporting friendly aircraft to their rear. To attack enemy aircraft, the voice-only aircraft must hold verbal conversations with supporting aircraft to understand the entire combat picture and develop a coordinated attack plan. However, fighter aircraft networked with digital communications are able to see the entire picture of enemy and friendly support aircraft locations on their screens without the need for time-consuming conversations. According to the report, this shared mental picture of the battlefield reduces the cognitive load on the pilots, enabling them to concentrate more on the battle, react more quickly, and make synchronized, mutually reinforcing decisions with their supporting aircraft. These examples provide illustrations of the potential effects of network-centric operations. However, DOD’s report acknowledges that evidence of its full impact is limited and often scattered, rather than focused and systematic. Having a fuller, more precise understanding of the effects of network-centric operations is important because of its potential impact on issues such as the ability to model the speed of combat operations and the resources needed to support them. An official from DOD’s Office of Force Transformation told us that the office is conducting a series of case studies of operations in Afghanistan and Iraq and exercises at the National Training Center and elsewhere to better understand these effects. The development of technologies such as laser-guided and Global Positioning System-guided precision weapons has provided military commanders with increased flexibility and accuracy in bombing operations, making them increasingly lethal. Precision weapons reduce limitations created by poor weather and visibility, enable bombing operations from higher and safer altitudes, and allow aircraft to be used in new ways. For example, bombing operations have always faced limitations due to targets being obscured by bad weather or other limitations on visibility. Traditionally, the process of locating and marking a target was dependent on the controllers’ ability to see the target, judge distances, and accurately find coordinates using paper maps. Targeting objectives were marked using smoke grenades, flares, or other such techniques. However, Global Positioning System-guided bombs help reduce these limitations by providing an all-weather delivery capability enabled by satellite-aided navigation. The system is a constellation of 24 orbiting satellites emitting continuous navigation signals that handheld receivers on the ground can translate into time, location, and velocity. Time can be calculated to within a fraction of a second, location to within 100 feet, and velocity to within less than a mile per hour. According to DOD officials, laser-guided bombs—which follow a narrow beam of pulsed energy trained on a target by aircraft or operators on the ground—are more precise than Global Positioning System-guided bombs and have a capability for attacks on moving targets that Global Positioning System-guided bombs do not. However, laser-guided bombs are subject to limitations presented by rain, clouds, or other visibility conditions since there must be a clear line of sight between the laser designator and the target.
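The accuracy figures above follow directly from timing: a receiver converts each satellite signal’s travel time into a range (distance equals signal speed multiplied by time), and position is then computed from several such ranges. The sketch below is a simplified illustration of that arithmetic; it ignores the clock-bias correction and multi-satellite geometry an actual receiver must solve, and the travel-time value is only a rough order-of-magnitude assumption.

```python
# Speed of light, expressed in feet per second.
C_FT_PER_S = 983_571_056.0

def range_from_travel_time(travel_time_s: float) -> float:
    """Distance to a satellite implied by the signal's travel time."""
    return C_FT_PER_S * travel_time_s

def position_error_from_timing_error(timing_error_s: float) -> float:
    """A ranging error scales directly with any uncorrected timing error."""
    return C_FT_PER_S * timing_error_s

if __name__ == "__main__":
    # A signal from a satellite on the order of 12,500 miles away takes
    # roughly 0.067 seconds to arrive (an illustrative value).
    print(f"Implied range: {range_from_travel_time(0.067):,.0f} ft")
    # Why receivers must resolve time to a tiny fraction of a second:
    # even 100 nanoseconds of timing error shifts the computed range by
    # about 100 feet, the same order as the location accuracy cited above.
    print(f"Error from 100 ns: {position_error_from_timing_error(100e-9):.0f} ft")
```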
From Operation Allied Force to Operation Enduring Freedom, DOD increased the use of Global Positioning System-guided bombs by about 45 percent and decreased the use of laser-guided bombs by about 32 percent. Conversely, between Operations Enduring Freedom and Iraqi Freedom, DOD decreased the use of Global Positioning System-guided bombs by about 13 percent and increased the use of laser-guided bombs by about 10 percent. DOD officials stated that there is a need for both laser-guided and Global Positioning System-guided bombs in today’s environment and that the use depends on such factors as the nature of the target being struck, theater of operations, weather conditions, availability, and cost. Frequently used guided munitions such as the Global Positioning System Guided Bomb Unit 31 have a unit cost of about $21,100 to $28,400, depending on the version used, while laser-guided bombs such as the Guided Bomb Units 10/12/16 have unit costs ranging from $14,600 to $23,000. Unguided bombs such as the 500-pound MK-82 and 1,000-pound MK-83 have unit costs ranging from about $2,000 to $8,700. The use of such precision-guided weapons has also made it possible for bombing operations to be conducted from higher altitudes. This tactic helps limit the threat to pilots and aircraft from air defense systems and ground fire, and provides Global Positioning System-guided bombs with more time to acquire and guide on the satellite signals. In Kosovo, where air defense systems posed a significant threat to U.S. forces, pilots conducted bombing missions at an altitude that was beyond the effective reach of Serbian air defense systems. According to DOD officials, they have continued to use this tactic in Afghanistan and Iraq because of its effectiveness. In addition to high altitude operations, Global Positioning System-guided weapons, such as the joint direct attack munition used extensively in Iraq, can also be launched miles away from a target. The operator can essentially launch the weapon and proceed on to the next target, relying on the navigation system to guide the weapon to impact. While conducting bombing operations from high altitudes is much safer for pilots and aircraft, it also becomes more difficult to properly identify and distinguish certain targets, particularly when the enemy employs denial and deception tactics. For example, during Operation Allied Force, Serbian forces made tank decoys out of milk cartons and artillery pieces out of stovepipes. DOD has also increased the number of aircraft capable of delivering precision-guided munitions, allowing military planners to use aircraft in new and different ways. According to a recent report, only about 20 percent of U.S. aircraft were equipped with the ability to put a laser-guided bomb on the target during the first Gulf War. However, nearly every combat aircraft was capable of employing precision-guided munitions during Operation Iraqi Freedom. Bombers such as B-2s are now capable of delivering large payloads of weapons in a single strike, providing more flexibility in weapons availability. These capabilities increase the ability to deliver more precision-guided weapons during each flight. Moreover, they also increase operational effectiveness by allowing the military to reduce flights by planning to strike multiple targets during each flight, as opposed to the traditional approach of carrying out multiple flights to attack one target.
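The unit costs above frame a basic trade-off that recurs later in this report: a guided bomb costs several times more than an unguided one, but far fewer may be needed to destroy a target. The sketch below works through that arithmetic using the midpoints of the cited cost ranges; the per-weapon kill probabilities are purely illustrative assumptions, not DOD figures, and the calculation ignores sortie costs, aircrew risk, and collateral damage considerations.

```python
import math

def weapons_needed(p_kill_per_weapon: float, confidence: float = 0.9) -> int:
    """Smallest n such that 1 - (1 - p)^n >= confidence."""
    return math.ceil(math.log(1 - confidence) / math.log(1 - p_kill_per_weapon))

# Unit costs are midpoints of the ranges cited above; the kill
# probabilities are illustrative assumptions only.
munitions = {
    "GPS-guided GBU-31":   {"unit_cost": 24_750, "p_kill": 0.80},
    "Laser-guided GBU-12": {"unit_cost": 18_800, "p_kill": 0.75},
    "Unguided MK-82":      {"unit_cost": 5_350,  "p_kill": 0.10},
}

if __name__ == "__main__":
    for name, m in munitions.items():
        n = weapons_needed(m["p_kill"])
        print(f"{name}: {n} weapons, ~${n * m['unit_cost']:,} per target destroyed")
```

Under these assumed probabilities, the more expensive weapon is actually the cheaper way to destroy a target, which is why, as discussed later in this report, reliable data on actual bombing effectiveness matters for procurement decisions.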
Our analysis found that advances in precision weapons have improved the accuracy of bombing operations. For example, we compared data on bombing operations in Afghanistan maintained by the U.S. Central Command with data on operations in Kosovo from our classified report on Operation Allied Force. This analysis found that the percentage of attacks resulting in damage or destruction to fixed targets increased by 12 percentage points from Kosovo to Afghanistan. Further, the percentage of attacks resulting in damage or destruction to mobile targets increased by 21 percentage points. DOD officials agreed that bombing accuracy improved, and classified analyses conducted by both the Air Force and the Navy support that conclusion. According to DOD officials, there is no similar analysis of the accuracy of bombing operations during Operation Iraqi Freedom. While DOD officials agreed that precision-guided weapons have increased the accuracy of bombing operations, they stated that it is important to note that such improvements may also be influenced by other factors. For example, differences in terrain, the relative numbers of fixed versus mobile targets (mobile targets being harder to hit), and commanders’ guidance on collateral damage can all influence the accuracy of bombing operations. In addition, the experience and the training that military forces gained through near continuous combat operations since the beginning of Operation Allied Force in 1999 may also influence bombing accuracy. Such factors must be considered when interpreting bombing statistics. Despite the improvements brought about by advances in networking and precision weapons, DOD has identified a variety of barriers undermining continued progress in implementing the new capabilities-based strategy. For example, concerns were raised about shortages of digital communications, commercial satellite capacity and bandwidth, and other equipment. However, four interrelated areas stood out as key barriers to continued progress: (1) the lack of standardized, interoperable systems and equipment; (2) DOD’s continuing difficulty in obtaining timely, high quality assessments of the effects of bombing operations; (3) the absence of a unified battlefield data collection system to provide standardized measures and baseline data on the efficiency and effectiveness of bombing operations; and (4) the lack of high quality, realistic training to help personnel at all levels understand and adapt to changes in the operating environment brought about by the move to a highly networked force using advanced technologies. The lack of standardized, interoperable systems and equipment during joint operations was one of the most frequently reported problems we found during our review. According to DOD officials and reports, this long-standing problem undermines many operating systems at DOD, including systems used to provide shared situation awareness of the battlefield, battle management command and control, and damage assessments of the effects of bombing operations. For example, officials from the Joint Forces and Special Operations Commands told us that during Operation Iraqi Freedom, ground forces arrived in theater with several different, non-interoperable Blue Force Tracking systems. Blue Force Tracking systems are devices carried by friendly ground units and vehicles that continuously or periodically transmit their locations to a central database, allowing their locations to be displayed on computer screens.
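In concept, such a tracking system is simple, as the sketch below illustrates: each unit periodically reports its position, and a central store keeps the latest fix per unit for display as the friendly-force picture. All names and fields here are hypothetical; the point of the sketch is that when several services field variants with incompatible message formats, merging them into one common picture—the problem described next—requires translation between every pair of systems.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class PositionReport:
    """Periodic self-report from a friendly unit's tracker (fields hypothetical)."""
    unit_id: str
    latitude_deg: float
    longitude_deg: float
    timestamp_utc: str

class BlueForceDatabase:
    """Central store keeping only the latest known position of each friendly unit."""

    def __init__(self) -> None:
        self._latest: Dict[str, PositionReport] = {}

    def ingest(self, report: PositionReport) -> None:
        # Later reports from the same unit overwrite earlier ones.
        self._latest[report.unit_id] = report

    def common_picture(self) -> List[PositionReport]:
        """What a commander's display would render as friendly-force icons."""
        return list(self._latest.values())

if __name__ == "__main__":
    db = BlueForceDatabase()
    db.ingest(PositionReport("3-7 CAV", 30.51, 47.82, "2003-03-24T06:00Z"))
    db.ingest(PositionReport("3-7 CAV", 30.55, 47.79, "2003-03-24T06:15Z"))  # newer fix wins
    for report in db.common_picture():
        print(report)
```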
Since there is no joint standard for such tracking systems, the joint force commander is responsible for resolving the interoperability problems created by the use of disparate systems. To provide a common picture of the location of ground forces using these systems, commanders had to develop a number of creative solutions to bridge the differences between them and integrate them into a coherent system—requiring considerable time and effort. DOD officials also told us that the use of differing formats for processing information creates similar problems. For example, each service and unified command has its own instructions for performing operations such as reporting on the results of bombing missions. A recent DOD report found that during joint operations in Afghanistan, the Central Command received mission reports using at least 23 different formats. This created difficulty in receiving messages and required time-consuming manual data manipulation and entry. Operations in Iraq faced similar problems. According to the Joint Forces Command report on Iraqi Freedom, the process of evaluating the effects of attacks in Iraq was beset by a lack of commonly understood operational level standards. Integration of information was undermined by groups adopting their own standards and reporting formats, resulting in difficulties in translating information and reaching a mutual understanding because groups were not able to make specific comparisons between formats or to a common format. DOD has published a number of joint publications to help standardize operations in the joint environment. These publications provide general terms of reference and descriptions of processes, such as the targeting process, for use by personnel from the various services while operating in the joint environment. However, according to DOD officials, these publications do not provide enough detailed guidance, such as standardized formats for reporting mission results, for the actual conduct of operations. As a result, each unified command must develop its own implementing procedures, with no system to ensure standardization among the commands. Further, according to DOD officials, when the pace of operations increases to high levels, there is a tendency for personnel to revert to using their own familiar service procedures. We have also reported that a variety of equipment—such as reconnaissance aircraft, satellites, ground-based stations processing intelligence data, ground targeting equipment, and digital transmission systems used to transmit information between airborne and ground personnel—is not interoperable across the services. Similar to the examples cited above, the inability of these systems to operate effectively together can limit access to communications and other needed capabilities and can confuse and slow targeting activities as less efficient alternatives must be used to achieve the mission. DOD recognizes that improved interoperability and standardization are central to the transformation of its forces and is attempting to address the problem. However, the problem is complex and difficult to resolve because military operations and acquisition systems have traditionally focused on the services and the specific weapons platforms needed for their specific missions—not on joint operations with interoperable systems and equipment. DOD’s budget is organized by service and defense agencies, as we and the Defense Science Board recently reported in separate publications.
Therefore, the process of defining and acquiring the right capabilities is dominated by the services and defense agencies. Joint force commanders’ views are considered in this process, but they have a difficult time competing with the individual service interests that control the process. As a result, the acquisition of systems and equipment often fails to consider joint mission requirements and solutions, and there is no guarantee that fielded systems will operate effectively together. DOD is addressing the need for more interoperability and standardization in several ways. For example, DOD’s April 2003 Transformation Planning Guidance requires the commander of the Joint Forces Command to develop a plan to address DOD’s interoperability priorities. These priorities include such efforts as development of a common operational picture for joint forces; improved intelligence, surveillance, and reconnaissance capabilities; improvements to selected targeting linkages; and improved reach back capabilities. The planning guidance also requires the services and the Joint Forces Command to develop plans for achieving the desired transformational capabilities, including an identification of the initiatives taken to improve interoperability. DOD is also attempting to reform the acquisition process to align it with a new capabilities-based resource allocation process built around joint operating concepts. Instead of building plans, operations, and doctrine around individual service systems, DOD is attempting to explicitly link acquisition strategy to joint concepts to provide integrated, interoperable joint war-fighting capabilities. For example, in June 2003, the Chairman of the Joint Chiefs of Staff issued Instruction 3170.01, establishing the Joint Capabilities Integration and Development System. This system provides new guidelines and procedures for the Joint Staff to review proposed acquisitions for their contribution to joint war-fighting needs. DOD is also developing the Global Information Grid to act as the organizing framework for network-centric operations and help ensure interoperability in information operations throughout DOD. Begun in the late 1990s, this effort seeks to integrate the information processing, storing, disseminating, and managing capabilities—as well as the associated personnel and processes—throughout DOD into a single network. DOD’s Chief Information Officer has described this network as a private military version of the World Wide Web. The effort includes programs to develop the policies and guidance needed to implement network-centric concepts across DOD, as well as programs to provide the technological improvements needed for the success of network-centric operations. Parts of this effort, such as policy and procedural guidance, bandwidth expansion, and improvements to reach back capabilities, have begun or are in place. For example, definitions of requirements for interoperable information technology that are used in developing the Global Information Grid are cited as the authoritative guidance in the requirements determination and acquisition areas—including the Joint Capabilities Integration and Development System discussed previously. However, according to officials involved in the effort, development of the grid is still in its early stages and is planned to continue to the year 2010 and beyond. While DOD appears committed to improving interoperability, DOD officials state that such reforms require difficult cultural changes to fully succeed.
However, we previously reported that various problems have undermined past reforms, including cultural resistance to change, stove-piped operations, difficulties in sustaining top management commitment (the average tenure of top political appointees is only 1.7 years), and other problems that continue to exist today. For example, in November 1997, DOD announced the establishment of the Defense Reform Initiative, which was a major effort to modernize DOD’s business processes and ignite a “revolution” in business affairs at DOD. The initiative was overseen by the Defense Management Council, composed of senior defense leaders reporting to the Secretary of Defense. However, by July 2000, we reported that the initiative was not meeting its time frames and goals in a number of areas. We concluded that the most notable barrier was the difficulty in overcoming institutional resistance to change in an organization as large and complex as DOD. Moreover, the effectiveness of the Defense Management Council was impaired because members were not able to put aside their particular services’ or agencies’ interests to focus on departmentwide approaches. Similarly, cultural impediments to change were also illustrated in our March 2003 report on ground-based systems for processing intelligence data. In that report, we stated that DOD’s system for certifying the interoperability of those systems was not working effectively. In 1998, DOD began a program to reduce the number of ground-based systems that process intelligence data from various sensors and to ensure that the remaining systems are interoperable with other DOD systems. DOD requires that such information systems be certified, and to help enforce the certification process, the department set up a review panel to periodically review such systems and place those with interoperability problems on a “watch list.” However, 5 years after the program was started, we reported that only 2 of 26 systems in the program had been certified and, despite this problem, the systems had not been placed on the watch list. DOD officials cited a number of reasons for the noncompliance, including that military services sometimes allow service-unique requirements to take precedence over joint interoperability requirements. DOD strongly agreed with our recommendations to take several steps necessary to enforce its certification process. Obtaining timely, high quality assessments of the effects of bombing operations continues to be a difficult problem for DOD to overcome. Problems with battle damage assessments have been repeatedly identified since at least Operation Desert Storm in 1991. DOD has taken some steps to address these problems, but they continue to recur. As a result, some DOD officials have called for approaching battle damage assessments in different ways. Reports from DOD and others have identified repeated difficulties in conducting battle damage assessments in operations in Iraq, as well as in other operations dating back at least to Operation Desert Storm in 1991. Battle damage assessments are a critical component of combat operations. Slow or inaccurate assessments can result in inefficient use of forces and weapons, as targets must be struck repeatedly—but sometimes unnecessarily—to ensure their elimination as a threat.
Inadequate damage assessments also slow ground advances, as units and individuals face uncertainty about enemy capabilities, which can ultimately increase their risk of death or injury since they may have to close with the enemy to understand the conditions ahead of them. However, DOD reported that battle damage assessments during operations in Iraq could not keep up with the pace of operations and failed to provide the information needed for operational decisions. Reports on operations in Afghanistan identified similar problems during Operation Enduring Freedom. Our report on Operation Desert Storm found that battle damage assessments during that conflict were neither as timely nor as complete as planners had assumed they would be. Battle damage assessments were performed on only 41 percent of the strategic targets in our analysis, resulting in potentially unnecessary additional strikes to increase the probability that target objectives would be met. The inability of damage assessment resources to keep up with the pace of modern battlefield operations is due to several factors. According to DOD officials, advances in network-centric operations and precision weapons have increased the speed at which targets are generated and attacked. At the same time, however, DOD does not have an occupational specialty for battle damage analysts. This results in shortages of trained analysts when resources are surged during operations, leaving unified commands to rely on untrained and inexperienced personnel brought in from other areas and trained on the job. For example, during operations in Afghanistan and Iraq, the Central Command experienced requirements for large manning increases in its battle damage assessment capability. While the command was ultimately able to increase its staff of analysts to about 60 (see fig. 4), this was only a fraction of the estimated requirement. Typically, the Central Command has about three to five full-time personnel assigned to its battle damage assessment group. Moreover, according to Central Command officials, even when the command obtained personnel, they were often untrained. Operations were further slowed, as these personnel were required to receive on-the-job training. Battle damage assessment training is available at both the service and joint levels. However, according to DOD officials, the absence of a formal occupational specialty for battle damage assessment means there is little incentive for personnel to seek the training. Further, even if trained, analysts are required to use the instructions of the unified command in charge of operations during actual conflicts. DOD officials told us that there is no requirement for these instructions to be standardized, making it more difficult for personnel from the services to quickly adapt to operations. Finally, according to officials, DOD does not have a comprehensive system to track personnel who have received battle damage assessment training, further exacerbating problems in quickly locating trained analysts during surge situations. In recognition of these continuing problems, DOD has taken some corrective steps. However, these attempts have been somewhat limited. For example, DOD established the Joint Battle Damage Assessment Joint Test and Evaluation program in August 2000 to investigate solutions to battle damage assessment process problems. The program was focused on assessment processes used by U.S.
forces in Korea, but it also analyzed processes used in Operations Enduring Freedom and Iraqi Freedom. Program officials developed a variety of enhancements that could improve the battle damage assessment process. For example, program officials developed improvements to the processes used in Korea to standardize disparate systems and speed the flow of information between analysis and command centers. To help address analyst training problems, they developed a compact disc-based course to provide quick training for untrained personnel assigned to fill shortages of analysts during conflicts. Further, they reached an agreement with a reserve organization to develop a core of trained battle damage assessment analysts and to have those personnel available to meet surge requirements for the Korean command. However, according to program officials, acceptance of such approaches is voluntary within DOD, and many have not been implemented outside Korea. Program officials are trying to gain additional support for adoption of their enhancements. Program operations will be discontinued and a final report issued by December 2004. In addition to this program, DOD officials told us that a Combat Assessment Working Group was recently established at the Joint Staff to discuss ways to address problems with the battle damage assessment process. However, the group had not developed formal recommendations at the completion of our audit work in March 2004. Some DOD officials have called for more effort to be focused on assessing battle damages within an “effects-based” framework. The effects-based operational concept calls for an increased emphasis on conducting military operations and assessing their effects in terms of the military and nonmilitary effects sought—rather than in terms of simply the destruction of a given target or an adversary. According to a recent Defense Science Board report, the emergence of this concept has been influenced by the opportunity provided by precision weapons, shared situation awareness, and other advances enabling the precise use of force, as well as by the needs presented by the nature of current military campaigns. Operations from Kosovo to Iraq have been characterized by tension among multiple strategic and operational objectives: destroy enemy infantry and air defenses and drive the current regime from power, but do not injure civilians or damage necessary infrastructure. The use of an effects-based battle damage assessment approach would mean that instead of the traditional focus only on damage or destruction of a target, battle damage assessments would also attempt to determine whether command objectives are being met by other influences on the battlefield. For example, initial bombing attacks on nearby targets may persuade enemy troops to abandon a target facility, eliminating the need to bomb the target facility at all. According to the Joint Forces Command’s report on Iraqi Freedom, commanders in Iraq attempted to use an effects-based approach to analyze military operations. However, when the speed of operations exceeded their capability to analyze and assess how actions were changing the Iraqi system, they reverted to the traditional focus on simple attrition measures. Coalition forces reverted to counting specific numbers of targets destroyed to determine combat progress, rather than evaluating the broader effect created on the enemy.
The command has called for recognition of problems with battle damage assessments as a major obstacle to effects-based operations, requiring a variety of changes to resolve. DOD officials also told us that the traditional focus on damage and destruction results in leaders relying too much on visual imagery to assess battle damages. This problem can cause leaders to delay battlefield progress until full visual confirmation of the desired effect is obtained. According to these officials, given the increasingly reliable nature of precision weapons, it may be possible in some cases to rely on predicted or probabilistic effects, rather than full visual confirmation. DOD does not have a unified battlefield data collection system to provide standardized measures and baseline data on the efficiency and effectiveness of bombing operations. According to DOD officials, the current system for collecting operational data is for the services and the unified commands to maintain their own databases, which are often quite extensive. Precisely how data are defined, gathered, and analyzed is at the discretion of each individual component and addresses specific needs. These unique requirements lead to different purposes for conducting analyses, different data collection approaches, and different definitions of key data elements. For example, to better understand the impact of the tactical and technological changes on the efficiency and effectiveness of bombing operations, we analyzed the number of attacks and bombs required to damage or destroy a given target for operations in Kosovo and Afghanistan. A number of DOD officials told us that advances in the accuracy of bombing operations have raised the expectation that fewer attacks and bombs are now required to damage or destroy targets. Instead of traditional operations—where multiple sorties and multiple bombs were required to destroy one target—some officials now believe one bomb per target and multiple targets on one sortie should be the norm. The results of our analyses tended to support the idea that it took fewer attacks to damage or destroy targets in Afghanistan than in Kosovo. However, we could not gain agreement from the services on the results of these analyses because each had its own system for measuring operations, and the measures also differed from the ones used in our analysis. The question of how many attacks are required to damage or destroy a target is basic to understanding battlefield effectiveness; however, we found no consistency among the services and the unified commands as to which of several basic measures should be used. Some group information about attacks based on “sorties”—defined as the takeoff and landing of one aircraft, during which one or more aim points may be attacked. Others do not attempt to group information based on sorties, making comparisons of information between databases difficult and confusing. For example, because the Central Command was in charge of operations in Afghanistan, we used its database to analyze bombing operations during Operation Enduring Freedom and compare those results with the results of our classified review of Kosovo bombing operations. The Central Command’s database provides information about aircraft attacks and damages to aim points, since it is focused primarily on assessing battle damages. However, it does not provide the information needed to analyze by sortie, since it does not identify activities that took place between a given takeoff and landing.
To compare the Central Command’s data with our data on Kosovo, we grouped the information on the basis of attacks. An attack was defined as each time that a single aircraft dropped one or more weapons on any single aim point. Based on this definition, our analysis found that it took fewer attacks to damage or destroy both fixed and mobile targets during operations in Afghanistan than during operations in Kosovo. Similar comparisons could not be made with the Air Force’s and Navy’s databases on Operation Enduring Freedom because their data are not maintained based on this definition of an attack. Both services list data by aircraft sortie. More specifically, each record in the Air Force’s database corresponds to one delivery of a specific weapon type against an aim point, with each weapon delivery linked to a particular sortie and mission in the air tasking order. For the Navy’s analysis, which describes the percentage of sorties that dropped weapons, each sortie can have one or multiple attacks, defined as one run at a given target. Because both the Air Force’s and the Navy’s analyses are primarily assessments of weapons and are not intended to measure battle damage information, the main focus is on data for specific weapon drops. As a result, they contain no analysis linking the number of sorties flown to the corresponding damage. A second basic element of effectiveness is whether or not bombing actions resulted in the desired effects. The services and the Central Command also differed in their approaches to measuring this element, further complicating analysis. The Central Command’s database provides information on effects based on battle damage assessments, since measuring battle damage is the primary responsibility of the unified commands. However, the service databases are geared toward measuring the performance of specific systems. The Air Force, for example, primarily focused its analysis of operations in Afghanistan on a munitions effectiveness assessment. This analysis measures the actual success of individual weapons against predicted results and does not address battle damage assessments. The analysis measures whether the bomb landed outside an area around the target within which the bomb was predicted to hit, known as the circular error probable. Air Force officials stated that it is possible for a weapon to be scored a miss for Air Force munitions effectiveness assessment purposes, but still cause significant damage to a target. According to the Air Force’s analysis, the vast majority of munitions employed in Operation Enduring Freedom performed significantly better than expected. This could mean that the Air Force can adjust its planning and modeling assumptions to lower the number of sorties expected to be required to destroy a target. Similar to the Air Force’s analysis, the Navy measured effects based on weapon hit rates. However, the Navy’s analysis assessed what fraction of Navy bombs that were dropped impacted the intended target and had a high order detonation, determined primarily by reviewing weapons system videos. According to officials, if a weapon hit the target and had a high order detonation, it was counted as a successful hit for analysis purposes. The Navy’s analysis did not measure whether a weapon fell within the planned circular error probable, nor did it measure battle damages.
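These definitional differences matter in practice. As a minimal sketch of the grouping step in our approach, the code below rolls individual weapon-release records up into attacks under our definition—one aircraft releasing one or more weapons against one aim point. The records and field names are hypothetical, and a real implementation would also need to key on a time window so that repeated attacks by the same aircraft against the same aim point at different times are not merged.

```python
from collections import defaultdict

# Hypothetical weapon-release records; real databases carry many more fields.
records = [
    {"aircraft": "HAWK11", "aim_point": "AP-0042", "weapon": "GBU-31"},
    {"aircraft": "HAWK11", "aim_point": "AP-0042", "weapon": "GBU-31"},
    {"aircraft": "HAWK11", "aim_point": "AP-0043", "weapon": "GBU-12"},
    {"aircraft": "LION22", "aim_point": "AP-0042", "weapon": "MK-82"},
]

def group_into_attacks(weapon_releases):
    """One attack = one aircraft releasing one or more weapons on one aim point."""
    attacks = defaultdict(list)
    for release in weapon_releases:
        attacks[(release["aircraft"], release["aim_point"])].append(release["weapon"])
    return attacks

if __name__ == "__main__":
    attacks = group_into_attacks(records)
    print(f"{len(records)} weapon releases -> {len(attacks)} attacks")
    for (aircraft, aim_point), weapons in attacks.items():
        print(aircraft, aim_point, weapons)
```

A database organized by sortie, or one scoring individual weapon impacts against a circular error probable, cannot be rolled up this way without additional fields—which is why the service analyses could not be directly compared with ours.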
The services and the U.S. Central Command also differ in their treatment of the basic question of how to define a target as fixed or mobile. This distinction is important to considerations of effectiveness because it is much harder to hit mobile than fixed targets. Moreover, mobile targets may be becoming more numerous as adversaries attempt to use mobility to counter the effectiveness of precision weapons. Inconsistent definitions of fixed and mobile targets result in different classifications of like targets and disagreement among officials when attempting to measure the relative effectiveness of bombing attacks against mobile and fixed targets. The Navy’s analysis, for example, classifies mobile targets as “mobile” and “moving.” According to the analysis, mobile targets are those that can move between the time of launch and the time of impact, such as vehicles and aircraft. Moving targets are those that are actually moving when they are hit. Classification results are determined by a direct review of weapon system video or documentation in mission reports. Unlike the Navy’s analysis, the Central Command’s database classifies all targets capable of moving as mobile, whether they are moving at the time of attack or not. The classification of moving is not used because such information is more detailed than is needed for battle damage assessment purposes. In contrast, the Air Force’s database does not classify targets as fixed or mobile. The database provides a description of the desired aim point, such as the center of a runway or troops, but leaves it up to the user to define which are mobile and which are fixed. There is a field for moving targets in the database, but according to Air Force officials, very few records have an entry in this field. Targets are only classified as moving when there is available weapon system video to confirm that the target was moving at the time the weapon was dropped. As a result of these differences, an attack on a truck that is moving at the time of an attack would be classified as mobile by the Central Command, as moving by Navy officials, and as either mobile or moving by Air Force officials, depending on the availability of weapon system video. Fixed targets are also classified differently in some cases. For example, according to Navy officials, there are several types of fixed targets. Troops are classified as a fixed, area target because individual troops are not targeted with aircraft but rather as an area occupied by troops. However, buildings are classified as fixed, point targets where there is a specific place to hit. In contrast, the Central Command classifies as fixed only those targets that are not able to move, such as buildings. The absence of a baseline system to bridge definitional and other differences and provide clear, consistent information about actual bombing effectiveness creates confusion in several areas. For example, this confusion was graphically illustrated when we provided the results of our analyses to the services. The results tended to support the idea that it took fewer attacks to damage or destroy targets in Afghanistan than in Kosovo. However, we could not gain agreement from the services on the results because our analyses were based on Central Command data that differed from that in their own systems, as previously discussed. Similar confusion occurred over the results of our March 2002 classified analysis of bombing operations in Kosovo.
DOD did not concur with our use of the Air Force’s Mission Analysis Tracking and Tabulation System database to analyze bombing operations, stating that no single database is completely accurate and contains all information needed for the analysis. However, that database was the most comprehensive available, was developed specifically as a primary database for tracking airframe and weapon effectiveness during Operation Allied Force, and was used by DOD as the basis for its January 2000 report to Congress on operations in Kosovo. DOD cannot clearly resolve such confusion until baseline definitions of effectiveness measures are reconciled and a unified database developed. Further, reliable, consistent data on such issues are needed to make procurement decisions on the number of bombs and other resources DOD will need for future conflicts. In this regard, we recently reported that differences in battle simulation models and scenarios used by the services and the unified commands were resulting in different estimates of munitions needed for operations, and, ultimately, in reports of munitions shortages. Clear, consistent, and up-to-date measures of the effectiveness of precision weapons—such as the actual number of aircraft and bombs required to achieve targeting objectives—could help resolve such differences and improve procurement and other planning decisions. In addition, as discussed earlier, precision weapons can be considerably more expensive than traditional munitions. Without clear data on bombing effectiveness, DOD cannot analyze the return on investment from the trade-off of fewer, but more expensive, precision weapons versus the use of more, but less expensive, traditional munitions. Both the Joint Forces Command and the Defense Science Board found that current training does not provide the realistic preparation needed to cope with the emerging operating environment. DOD officials raised concerns that the changing strategy and technological improvements have created large increases in the pace of operations and volume of information that have at times overwhelmed commanders and other personnel. Further, advances in networking the force and other changes have fostered a more centralized style of management, with senior leaders increasingly involved in operations. At the same time, however, network-centric operating concepts are distributing information to lower and lower organizational levels, raising the potential for increased autonomy for small units and individual soldiers. According to DOD officials, personnel at all levels, but particularly commanders, need realistic training to understand this new environment and adapt to it to ensure that the new capabilities are used to their fullest advantage. DOD officials told us that network-centric operations have advanced to the point that the heavy flow of information and rapid pace of operations may at times overload systems and personnel. This problem can create confusion and inefficiency as systems for conducting battle damage assessments or other operations become slow and clogged while sorting and integrating large amounts of information, and as officials are distracted by having to devote precious time to sorting through hundreds of e-mail messages or by attending increasingly frequent videoconferences. Moreover, officials also believe that this problem may get worse as commanders increasingly recognize the advantages of networked systems, creating a need for even more information.
The officials also stated that increased networking is fostering a more centralized style of command and control, which can create tension between command staffs and operators in the field. For example, according to officials, lawyers and senior civilian and military leaders at headquarters locations remote from the execution of operations are becoming increasingly involved in target selection and other operational areas. Historically, one of the principal tenets of U.S. command and control has been centralized direction but decentralized execution of operations, to give subordinates on the scene sufficient freedom of action to accomplish their missions. Increased centralization in the execution of operations can result in senior commanders being bogged down in operational details and subordinates on the scene losing initiative. This development has been linked to the advances in technologies that provide the opportunity for detailed views of the battlefield and for frequent videoconferences and other communications to be shared among a wide array of officials who may be located thousands of miles away. This trend is also influenced by increased concerns over sensitive issues such as the avoidance of intrusions into the airspace of neighboring countries and collateral damage to civilian structures. Such issues act as an incentive for senior leaders to increase their involvement in lower and lower levels of planning and operations. While senior leaders are becoming increasingly involved in operations, information is also being distributed to lower and lower organizational levels, raising the potential for increased autonomy for small units and individual soldiers. For example, one of the principal organizing and operating tenets of network-centric operations is the concept called power to the edge. This concept involves empowering individuals at the “edge” of an organization—where it interacts with its operating environment—by expanding access to information and eliminating unnecessary constraints on action. According to department officials, adopting this concept requires DOD to change the way it handles intelligence and other information. For example, DOD’s current information systems are based on data requirements that are focused on the needs of the organizations supplying the data, with dissemination based on a sequential process in which information is pushed out to customers at the end. But DOD is now moving to systems where broad arrays of information are placed on networks without unnecessary processing at the point of collection, with total access for customers, who can simultaneously pull out the information each needs. This provides more information to lower organizational levels, enabling them to operate more autonomously with less direct control by commanders. According to officials at the Joint Forces Command, this concept helped DOD use smaller formations of personnel with flexible command and control relationships to great advantage during operations in Iraq. Consistent with DOD’s basic tenet that the force must train as it will fight, DOD officials have called for improved, more realistic training to match the scale and tempo of actual operations. For example, the Joint Forces Command reported that the lack of realistic training undermined theater-level intelligence, surveillance, and reconnaissance management and other operational level capabilities during Operation Iraqi Freedom.
Similarly, the Defense Science Board reported that the changing operating environment will have unintended human consequences that will require personnel to adapt to increasing cognitive demands at even the most junior levels, and to think and act more quickly. According to the Board, current training will not adequately prepare DOD personnel to cope with the increasing and constantly changing cognitive requirements. DOD officials also cautioned that the joint operational effectiveness experienced in Operation Iraqi Freedom was often the result of procedures developed during 18 months of practice begun during operations in Afghanistan and that such improvements are often fleeting—needing to be reinvented in the next contingency. The Joint Forces Command called for development of an improved joint training capability to institutionalize the operating procedures developed in Iraq and allow commanders and staffs to experiment with and practice operational-level processes. Moreover, service and DOD officials also noted that expectations for the future need to be tempered with the understanding that operations in Kosovo, Afghanistan, and Iraq were conducted with other advantages—such as largely complete air superiority—that may not be available in future conflicts. The development of networked surveillance and command and control systems, precision weapons, and other advances has had a synergistic effect on U.S. military power—providing increased capabilities for dealing effectively both with enemies operating on nontraditional battlefields and with more traditional approaches to warfare. Notwithstanding these advances, the full impact of these changes is still emerging and is not fully understood. Moreover, the enemy is likely to continue to evolve and adapt its approaches in response to the continued evolution of U.S. tactics and capabilities. As a result, it is important to continue developing and refining these capabilities. However, the legacy of DOD’s traditional focus on service-specific operations is inhibiting the continued evolution of the new capabilities. The lack of standardized, interoperable systems and equipment interferes with the development of force networks, slowing operations and reducing effectiveness. Difficulties in quickly obtaining sufficient numbers of trained battle damage analysts result in slowed assessments unable to keep up with the increased pace of operations, inhibiting battlefield progress and the utility of improvements in other areas. Similarly, the absence of a unified battlefield information system obscures a clear understanding of improvements to the efficiency and effectiveness of operations as a result of changing capabilities, slowing the rate of adaptation to changing battlefield conditions. Finally, the lack of realistic training limits the ability of leaders to understand, and of systems to sense, changes in the operating environment—such as the increased pace of operations and flow of information, the increased centralization of command, and the increased potential for operational autonomy and self-direction of small units and individual soldiers, as well as emerging concepts such as effects-based operations—further inhibiting the ability to adapt.
To ensure continuing evolution of the capabilities demonstrated in recent conflicts, we recommend that the Secretary of Defense direct the Joint Staff, the Joint Forces Command and other unified commands, and the military departments to take the following four actions:

• identify the primary information required for bombing operations, such as targeting and battle damage assessments; ensure that planned interoperability enhancements provide the standardized definitions, mission reporting formats, and other necessary instructions for this information to be used by all unified commands during joint combat operations; and determine whether this standardized information can replace that used by the individual services;

• formulate a plan to provide sufficient numbers of personnel trained in battle damage assessment procedures when they are needed for combat operations, including in the plan (1) incentives for personnel to take the existing joint training on damage assessment, (2) development of a system to be used by the Joint Forces Command to track and mobilize personnel who have received damage assessment training for use during surge situations, and (3) development of guidance on the appropriate use of effects-based, probabilistic, and other nontraditional concepts in assessing battle damages;

• develop a unified battlefield information system that provides for the identification and collection of data on key, standardized measures of bombing operations needed to assess the basic efficiency and effectiveness of such operations, for use by all unified commands; and

• develop a joint operations training capability that provides commanders and staffs with a realistic simulation of the increased pace of operations and other emerging changes to the combat operating environment.

In written comments on a draft of this report, DOD concurred or partially concurred with all our recommendations. DOD stated that the Joint Staff, in coordination with the Joint Forces Command, is addressing our recommendations for actions to improve standardization of information used in bombing operations, develop a unified battlefield information system, and develop realistic joint training to help personnel adapt to changes in the operating environment in various ongoing initiatives. DOD partially agreed with our recommendation to improve the battle damage assessment process and stated that it is addressing the issues we raised in the Joint Network Fires Capability Roadmap, the Joint Close Air Support action plan, and other efforts. However, DOD believed that the section of the report titled “Timely Understanding of Battle Damages Remains a Difficult Problem” discusses battle damage assessments as if that function were detached from the broader targeting process. That was not our intent. As indicated on page 6 of the report, we agree that battle damage assessments are an integral part of the broader targeting process. The use of a separate section of the report to deal with that aspect of targeting was meant only to highlight the long-standing problems with battle damage assessments and the need to focus DOD’s attention on corrective action. Officials from the U.S. Central Command, which was in charge of operations in Afghanistan and Iraq, and the Joint Forces Command report on lessons learned in Iraq both pointed to the need to elevate recognition of problems in the battle damage assessment process and address them.
Continued improvement in the speed at which targets are generated and attacked will only further increase the need for damage assessments to keep pace with operations in the future. DOD's comments are reprinted in appendix III. DOD also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the Secretary of Defense; the Secretaries of the Air Force, Army, and Navy; the Commandant of the Marine Corps; and the Director, Office of Management and Budget. The report will also be available at no charge on GAO's Web site at http://www.gao.gov. If you or your staff have any questions on the matters discussed in this report, please contact me at (757) 552-8100. The major contributors to this report are listed in appendix IV. To assess the impact on operational effectiveness of improvements in networking the force and the use of precision weapons and to identify the key barriers to continued progress in implementing the new strategy, we followed a three-phased approach. To identify Department of Defense (DOD), military service, and unified command policies and approaches to implementing the new strategy, we obtained briefings and reviewed DOD and unified command directives and regulations, the Operation Enduring Freedom Campaign Plan, lessons learned reports, and prior reports by us and others. A bibliography of key reports on issues related to our review is included. We also interviewed officials from the Office of the Secretary of Defense; the Office of the Joint Chiefs of Staff; the U.S. Central Command; the U.S. Joint Forces Command; the U.S. Special Operations Command; headquarters offices of the Army, Navy, and Air Force; and other offices as appropriate. We accompanied this work with a detailed analysis of bombing data developed for our March 2002 classified report on air operations in Kosovo and bombing data on operations in Afghanistan provided by the U.S. Central Command. Prior to conducting these analyses, we discussed the appropriate databases to use, the time frames to measure, and other such methodological issues with officials from the Central Command. We used Central Command data because its commander was in charge of joint operations in both Afghanistan and Iraq. To determine whether bombing accuracy and effectiveness had improved, we compared changes in the percentage of attacks resulting in damage or destruction to fixed and mobile targets, the number of attacks and the number of bombs during a given attack that were required to damage or destroy a given target, and other such measures between operations in Kosovo and Afghanistan. We then provided the results of these analyses to officials from the Office of the Secretary of Defense; the Office of the Joint Chiefs of Staff; the U.S. Central Command; the U.S. Joint Forces Command; the U.S. Special Operations Command; and the Army, Navy, and Air Force for their review and comment. We also obtained analyses of Operation Enduring Freedom from the Navy and the Air Force for comparison purposes. We requested data from the Army, but officials were unable to provide such data. We also requested copies of any similar analyses of operations in Iraq, but officials were unable to locate any such analyses. We did not conduct our own detailed analysis of operations in Iraq because of the resource-intensive and time-consuming nature of these analyses.
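To make these measures concrete, the short sketch below illustrates how they can be computed from attack-level records. It is illustrative only; it is not GAO's or the Central Command's actual analysis code, and the record layout, field names, and values are hypothetical. It computes two of the measures named above: the percentage of attacks resulting in damage or destruction, by target type, and the average number of attacks and bombs expended per target damaged or destroyed.

    # Illustrative sketch of the bombing-effectiveness measures described
    # above. All field names and records are hypothetical.
    from collections import defaultdict

    attacks = [
        {"target_id": "T1", "target_type": "fixed",  "bombs": 2, "result": "destroyed"},
        {"target_id": "T1", "target_type": "fixed",  "bombs": 1, "result": "no_damage"},
        {"target_id": "T2", "target_type": "mobile", "bombs": 4, "result": "damaged"},
        {"target_id": "T3", "target_type": "mobile", "bombs": 2, "result": "no_damage"},
    ]

    def pct_effective_by_type(records):
        # Percentage of attacks resulting in damage or destruction, by target type.
        totals, hits = defaultdict(int), defaultdict(int)
        for r in records:
            totals[r["target_type"]] += 1
            if r["result"] in ("damaged", "destroyed"):
                hits[r["target_type"]] += 1
        return {t: 100.0 * hits[t] / totals[t] for t in totals}

    def attacks_and_bombs_per_kill(records):
        # Average attacks and bombs expended per target damaged or destroyed.
        per_target = defaultdict(lambda: {"attacks": 0, "bombs": 0, "hit": False})
        for r in records:
            t = per_target[r["target_id"]]
            t["attacks"] += 1
            t["bombs"] += r["bombs"]
            if r["result"] in ("damaged", "destroyed"):
                t["hit"] = True
        hit = [t for t in per_target.values() if t["hit"]]
        n = len(hit) or 1  # avoid division by zero if no targets were hit
        return (sum(t["attacks"] for t in hit) / n,
                sum(t["bombs"] for t in hit) / n)

    print(pct_effective_by_type(attacks))       # {'fixed': 50.0, 'mobile': 50.0}
    print(attacks_and_bombs_per_kill(attacks))  # (1.5, 3.5)

Computing such measures separately for two data sets, such as Kosovo and Afghanistan, yields the kind of accuracy and effectiveness comparison described above; the same record-level data can also be screened for the missing or out-of-range values discussed in the reliability assessment that follows.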
To assess the reliability of the Central Command's database for Operation Enduring Freedom, we (1) performed electronic testing for obvious errors in accuracy and completeness; (2) reviewed related documentation, including tracking target files to specific data entries, and interviewed agency officials knowledgeable about the data; and (3) worked closely with agency officials to identify any data problems. When we found discrepancies such as missing or incorrect data, we brought them to the command's attention and worked with it to correct the discrepancies before conducting our analysis. We determined that the data were sufficiently reliable for our reporting purposes. Following this analysis, we conducted a series of roundtable discussions with officials from the offices of the Secretary of Defense, Joint Chiefs of Staff, unified commands, and the services contacted previously. We conducted these discussions to gain a detailed understanding of the results of our analyses and officials' perspectives on the impact of the changing strategy on operations in Kosovo, Afghanistan, and Iraq and the key barriers to continued progress in implementing the new strategy. We focused our analysis on combat bombing operations. We did not attempt to analyze whether larger operational and strategic objectives were achieved. The RC-135 Rivet Joint is a reconnaissance aircraft that supports theater and national level consumers with near real-time on-scene intelligence collection, analysis, and dissemination capabilities. Its onboard sensor suite allows the crew to detect, identify, and locate signals throughout the electromagnetic spectrum, which it can then forward to a wide range of consumers. The U-2 provides continuous day and night, high-altitude, all-weather surveillance and reconnaissance in support of ground and air forces. The U-2 is capable of collecting multi-sensor photo, electro-optic, infrared, and radar imagery, as well as signals intelligence data, with real-time downlinking of data anywhere in the world. The E-8C Joint Surveillance Target Attack Radar System is an airborne battle management, command and control, intelligence, surveillance, and reconnaissance aircraft. Its radar and computer systems allow it to provide ground and air commanders with detailed information on ground forces to support attack operations and targeting. The EP-3E (Aries II) is the Navy's only land-based signals intelligence reconnaissance aircraft. Its sensitive receivers and high-gain dish antennas allow it to detect a wide range of electronic emissions from deep within targeted territory. The E-3 Sentry is an airborne warning and control system aircraft that provides all-weather surveillance, command, control, and communications to command and control centers. Its radar and computer systems enable it to provide position and tracking information on enemy aircraft and ships, and the location and status of friendly aircraft and ships. The Predator is a medium-altitude, long-endurance unmanned aerial vehicle reconnaissance system composed of four aircraft with sensors, a ground control station, a satellite link, and some 82 personnel providing 24-hour operations. Its primary missions are interdiction and armed reconnaissance against critical targets. The Global Hawk unmanned aerial vehicle is a reconnaissance aircraft that provides battlefield commanders with near real-time, high-resolution reconnaissance imagery.
Typically cruising at high altitudes for 24 continuous hours, it uses its cloud-penetrating radar and other sensors to survey large geographic areas and relay imagery about enemy locations and resources to commanders. Guided Bomb Units-10, 12, and 16 are laser-guided bombs. These bombs consist of guidance packages bolted to traditional free-fall bombs (2,000, 500, and 1,000 pounds, respectively), enabling the bombs to analyze laser energy shone on a target by an operator and then to adjust the path of the bomb as it descends on the target. The Joint Direct Attack Munitions Guided Bomb Unit-31/32 consists of a guidance tail kit attached to a traditional 2,000-pound free-fall bomb, enabling it to be navigated in flight to the selected target using Global Positioning System satellite technology. The Cluster Bomb Unit 87/B Combined Effects Munitions is a 1,000-pound unguided, air-delivered cluster bomb consisting of a cluster of about 200 bomblets that disperse over the target area and explode on impact. This bomb is effective against armor, personnel, and materiel, enabling a single payload attack against a wide variety of targets. The Navstar Global Positioning System is a constellation of 24 orbiting satellites operated by the Air Force that provides navigation data to military and civilian users all over the world. The satellites orbit the earth every 12 hours, emitting navigation signals that are picked up by receivers and used to calculate time, location, and velocity. A laser designator/rangefinder (such as the U.S. Marine Corps AN/PAQ-3) is used to locate targets and guide laser-guided weapons to the target. Designators radiate a narrow beam of pulsed energy that is used to mark a spot on the target that is then picked up by acquisition devices mounted on aircraft or directly on laser-guided bombs. In addition to those named above, Katherine Chenault, Steve Pruitt, R.K. Wild, and Kristy Williams made key contributions to this report.

Center for Naval Analyses. Overview of Carrier-based Strike-Fighter Operations in Operation Enduring Freedom. Alexandria, Virginia: 2003.

Center for Strategic and Budgetary Assessments. Operation Iraqi Freedom: A First-Blush Assessment. Washington, D.C.: 2003.

Center for Strategic and International Studies. The Lessons of Afghanistan: A First Analysis. Washington, D.C.: 2002.

Center for Strategic and International Studies. The U.S. Military and the Evolving Challenges in the Middle East. Washington, D.C.: 2002.

Congressional Budget Office. The Army's Bandwidth Bottleneck. Washington, D.C.: 2003.

Congressional Research Service. Kosovo and Macedonia: U.S. and Allied Military Operations. Washington, D.C.: 2003.

Northrop Grumman Corporation, Analysis Center. Destroying Mobile Ground Targets In An Anti-Access Environment. Washington, D.C.: 2001.

Northrop Grumman Corporation, Analysis Center. Future War: What Trends in America's Post-Cold War Military Conflicts Tell Us About Early 21st Century Warfare. Washington, D.C.: 2003.

U.S. Department of Defense, Air Force Air Combat Command. Munitions Effectiveness Analysis Final Report for Air Force Air-to-Surface Munitions in Operation Enduring Freedom. Washington, D.C.: 2003.

U.S. Department of Defense, Air Force Task Force Enduring Look. Quick Look Report #5, Coercive Airpower from the Enemy's Perspective: The Collapse of the Taliban. Washington, D.C.: 2002.

U.S. Department of Defense, Central Command. Operation Iraqi Freedom—By the Numbers. Washington, D.C.: 2003.

U.S. Department of Defense, Central Command. Operation Enduring Freedom Interim Munitions Effectiveness Assessment. Washington, D.C.: 2002.
U.S. Department of Defense. Defense Science Board Task Force on Training for Future Conflicts—Final Report. Washington, D.C.: 2003.

U.S. Department of Defense. Defense Science Board Task Force on Wideband Radio Frequency Modulation—Dynamic Access to Mobile Information Networks. Washington, D.C.: 2003.

U.S. Department of Defense, Joint Battle Damage Assessment Joint Test and Evaluation. Operation Enduring Freedom Test Report. Suffolk, Virginia: 2002.

U.S. Department of Defense, Joint Forces Command. Joint Fires Initiative—Operational-Level Management of Time-Sensitive Targeting. Norfolk, Virginia: 2003.

U.S. Department of Defense, Joint Forces Command. Joint Lessons Learned: Operation Iraqi Freedom Major Combat Operations. Norfolk, Virginia: 2004.

U.S. Department of Defense, Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics. High Leverage Lessons Learned from Operation Enduring Freedom—Phase II Report of the Defense Science Board Task Force on Operation Enduring Freedom Lessons Learned. Washington, D.C.: 2003.

U.S. Department of Defense, Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics. Organizational Lesson Learned Review—Phase III Report, Defense Science Board Task Force on Operation Enduring Freedom Lessons Learned. Washington, D.C.: 2002.

U.S. Department of Defense, Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics. Precision Targeting and Joint Close Air Support, A Phase II Report of the Defense Science Board Task Force on Operation Enduring Freedom Lessons Learned. Washington, D.C.: 2003.

U.S. Department of Defense, Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics. Report of the Defense Science Board Task Force on Discriminate Use of Force. Washington, D.C.: 2003.

U.S. Department of Defense, Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics. Report of the Defense Science Board Task Force on Enabling Joint Force Capabilities. Washington, D.C.: 2003.

U.S. Department of Defense. Report to Congress: Network Centric Warfare. Washington, D.C.: 2001.

U.S. Department of Defense. Report to Congress: Kosovo/Operation Allied Force After-Action Report. Washington, D.C.: 2000.

U.S. Department of Defense. Report on Network Centric Warfare: Sense of the Report. Washington, D.C.: 2001.

U.S. Department of Defense, Strategic Studies Institute of the U.S. Army War College. Afghanistan and the Future of Warfare: Implications for Army and Defense Policy. Carlisle, Pennsylvania: 2002.

U.S. Department of Defense, Strategic Studies Institute of the U.S. Army War College. Iraq and the Future of Warfare: Implications for Army and Defense Policy. Carlisle, Pennsylvania: 2003.

U.S. Naval Institute. What Can We Learn from Enduring Freedom? Annapolis, Maryland: 2002.

Military Readiness: Lingering Training and Equipment Issues Hamper Air Support of Ground Forces. GAO-03-505. Washington, D.C.: May 2, 2003.

Defense Acquisitions: Steps Needed to Ensure Interoperability of Systems That Process Intelligence Data. GAO-03-329. Washington, D.C.: March 31, 2003.

Major Management Challenges and Program Risks—Department of Defense. GAO-03-98. Washington, D.C.: January 2003.

Defense Management: Munitions Requirements and Combatant Commanders' Needs Require Linkage. GAO-03-17. Washington, D.C.: October 15, 2002.
DOD Financial Management: Integrated Approach, Accountability, Transparency, and Incentives Are Keys to Effective Reform. GAO-02-497T. Washington, D.C.: March 6, 2002.

Kosovo Air Operations: Need to Maintain Alliance Cohesion Resulted in Doctrinal Departures. GAO-01-784. Washington, D.C.: July 27, 2001.

Defense Logistics: Unfinished Actions Limit Reliability of the Munitions Requirements Determination Process. GAO-01-18. Washington, D.C.: April 5, 2001.

Defense Management: Actions Needed to Sustain Reform Initiatives and Achieve Greater Results. GAO/NSIAD-00-72. Washington, D.C.: July 25, 2000.

Operation Desert Storm: Evaluation of the Air Campaign. GAO/NSIAD-97-134. Washington, D.C.: June 12, 1997.
Recent U.S. combat operations in Kosovo, Afghanistan, and Iraq benefited from new Department of Defense (DOD) strategies and technologies, such as improvements in force networks and increased use of precision weapons, designed to address changes in the security environment resulting from the continuing terrorist threat and the advent of the information age. Based on the authority of the Comptroller General, GAO reviewed these conflicts, with a focus on bombing operations, to gain insight into the changes being implemented by DOD. This report focuses on (1) assessing the impact on operational effectiveness of improvements in force networks and in the use of precision weapons and (2) identifying key barriers to continued progress. Improvements in force networks and in the use of precision weapons are clearly primary reasons for the overwhelming combat power demonstrated in recent operations. However, the full extent to which operations have been sped up or otherwise affected is unclear because DOD does not have detailed measures of these effects. Enhancements to networked operations, such as improved sensors and surveillance mechanisms and more integrated command and control centers, have improved DOD's ability to share a broad view of the battlefield and communicate quickly with all elements of the force, reducing the time required for analysis and decision making in combat operations. However, recognizing that the full impact of these changes is unclear, DOD is conducting a series of case studies to better understand the effects of networked operations. The benefits of improved force networks have also been enhanced by the use of precision-guided weapons and associated technologies. These improvements not only provide commanders with greatly increased flexibility, such as the ability to conduct bombing operations in poor weather and from higher and safer altitudes, but also increase the accuracy of bombing operations. GAO's analysis found that the percentage of attacks resulting in damage or destruction to targets increased markedly between operations in Kosovo and those in Afghanistan. Notwithstanding these improvements, certain barriers inhibit continued progress in implementing the new strategy. Four interrelated areas stand out as key: (1) a lack of standardized, interoperable systems and equipment, which reduces effectiveness by requiring operations to be slowed to manually reconcile information from multiple systems and limiting access to needed capabilities among military services; (2) continuing difficulties in obtaining timely, high-quality analyses of bombing damages, which can slow ground advances and negate other improvements in the speed of operations; (3) the absence of a unified battlefield information system to provide standardized measures and baseline data on bombing effectiveness, which creates confusion about the success of new tactics and technologies, about assumptions used in battlefield simulation programs, and about procurement decisions; and (4) the lack of high-quality, realistic training to help personnel at all levels understand and adapt to the increased flow of information, more centralized management, and other changes in the operating environment brought about by the strategic changes.
The broader context of U.S. efforts for Iraqi reconstruction is tied to how missions and projects are being conducted and managed. Over the past decade, DOD has increasingly relied on contractors to provide a range of mission-critical services. Overall, DOD's obligations on service contracts rose from $82.3 billion in fiscal year 1996 to $141.2 billion in fiscal year 2005. According to DOD officials, the amount obligated on service contracts exceeded the amount the department spent on major weapon systems. The growth in spending for services has coincided with decreases in DOD's workforce. DOD carried out this downsizing, however, without ensuring that it retained the specific skills and competencies needed to accomplish its mission. For example, the amount, nature, and complexity of contracting for services have increased, challenging DOD's ability to maintain a workforce with the requisite knowledge of market conditions and industry trends, the ability to prepare clear statements of work, command of the technical details of the services it procures, and the capacity to manage and oversee contractors. Participants in an October 2005 GAO forum on Managing the Supplier Base for the 21st Century commented that the current federal acquisition workforce significantly lacks the business skills needed to act as contract managers. Contractors have an important role to play in the discharge of the government's responsibilities, and in some cases the use of contractors can result in improved economy, efficiency, and effectiveness. At the same time, there may be occasions when contractors are used to provide certain services because the government lacks another viable and timely option. In such cases, the government may actually be paying more and incurring higher risk than if such services were provided by federal employees. In this environment of increased reliance on contractors, sound planning and contract execution are critical for success. We have previously identified the need to examine the appropriate role for contractors as one of the challenges in meeting the nation's defense and other needs in the 21st century. The proper role of contractors in providing services to the government is currently the topic of some debate. In general, I believe there is a need to focus greater attention on what types of functions and activities should be contracted out and which ones should not; to review and reconsider the current independence and conflict of interest rules relating to contractors; and to identify the factors that prompt the government to use contractors in circumstances where the proper choice might be the use of civil servants or military personnel. Possible factors could include inadequate force structure; outdated or inadequate hiring policies and classification and compensation approaches; and inadequate numbers of full-time equivalent slots. Turning to Iraq, DOD has relied extensively on contractors to undertake major reconstruction projects and provide support to its troops. For example, DOD has responsibility for a significant portion of the more than $30 billion in appropriated reconstruction funds and has awarded and managed many of the large reconstruction contracts, such as the contracts to rebuild Iraq's oil, water, and electrical infrastructure and to train and equip Iraqi security forces. Further, U.S.
military forces in Iraq have used contractors to a far greater extent than in prior operations to provide interpreters and intelligence analysts, as well as more traditional services such as weapon systems maintenance and base operations support. The Army alone estimates that almost 60,000 contractor employees currently support ongoing military operations in Southwest Asia, and it spent about $15.4 billion between 2001 and 2004 on its single largest support contract, the Logistics Civil Augmentation Program (LOGCAP). Reconstruction and support contracts are often cost-reimbursement-type contracts, which allow the contractor to be reimbursed for reasonable, allowable, and allocable costs to the extent prescribed in the contracts. Further, these contracts often contain award fee provisions, which are intended to incentivize more efficient and effective contractor performance. If contracts are not effectively managed and given sufficient oversight, the government's risk is likely to increase. For example, we have reported that DOD needs to conduct periodic reviews of services provided under cost-reimbursement contracts to ensure that services are being provided at an appropriate level and quality. Without such a review, the government is at risk of paying for services it no longer needs. DOD's reliance on contractors for key reconstruction efforts and support to deployed forces requires that DOD create the conditions conducive for success. Our work has shown that these conditions include a match between requirements and resources, sound acquisition approaches, leadership and guidance, visibility and knowledge of the number of contractors and the services they provide, and the capacity to manage and assess contractor performance. As we have previously reported, in many cases these conditions were not present on DOD reconstruction and support contracts, increasing the potential for fraud, waste, abuse, and mismanagement. Several of my colleagues in the accountability community and I have developed a definition of waste. As we see it, waste involves the taxpayers in the aggregate not receiving reasonable value for money in connection with any government-funded activities, due to an inappropriate act or omission by players with control over or access to government resources (e.g., executive, judicial, or legislative branch employees, contractors, grantees, or other recipients). Importantly, waste involves a transgression that is less than fraud and abuse. Further, most waste does not involve a violation of law, but rather relates primarily to mismanagement, inappropriate actions, or inadequate oversight. Illustrative examples of waste could include: unreasonable, unrealistic, inadequate, or frequently changing requirements; proceeding with development or production of systems without achieving an adequate maturity of related technologies in situations where there is no compelling national security interest to do so; the failure to use competitive bidding in appropriate circumstances; an over-reliance on cost-plus contracting arrangements where reasonable alternatives are available; the payment of incentive and award fees in circumstances where the contractor's performance, in terms of cost, schedule, and quality outcomes, does not justify such fees; the failure to engage in selected pre-contracting activities for contingent events; and Congressional directions (e.g.
earmarks) and agency spending actions where the action would not otherwise be taken based on an objective value and risk assessment and considering available resources. A prerequisite to having good outcomes is a match between well-defined requirements and available resources. Shifts in priorities and funding, even those made for good reasons, invariably have a cascading effect on individual contracts, making it more difficult to manage individual projects to successful outcomes and complicating efforts to hold DOD and contractors accountable for acquisition outcomes. I should note that such problems reflect some of the systemic and long-standing challenges confronting DOD, whether on contracts for services or major weapon systems. Contracts, especially service contracts, often do not have the definitive or realistic requirements at the outset needed to control costs and facilitate accountability. U.S. reconstruction goals were based on assumptions about the money and time needed, as well as a permissive security environment, all of which have proven unfounded. U.S. funding was not meant to rebuild Iraq's entire infrastructure, but rather to lay the groundwork for a longer-term reconstruction effort that anticipated significant assistance from international donors. To provide that foundation, the Coalition Provisional Authority (CPA) allocated $18.4 billion in fiscal year 2004 reconstruction funds among various projects in each reconstruction sector, such as oil, electricity, and water and sanitation. The CPA used a multitiered contracting approach to manage and execute the projects. In this case, the CPA, through various military organizations, awarded 1 lead contract, 6 sector contracts, and 12 design-build contracts in early 2004 (see fig. 1). After the CPA dissolved, the Department of State initiated an examination of the priorities and programs with the objectives of reprioritizing funding for projects that would not begin until mid- to late 2005 and using those funds to target key high-impact projects. By July 2005, the State Department had conducted a series of funding reallocations to address new priorities, including increased support for security and law enforcement efforts and oil infrastructure enhancements. One of the consequences of these reallocations was to reduce funding for the water and sanitation sector by about 44 percent, from $4.6 billion to $2.6 billion. One reallocation of $1.9 billion in September 2004 led DOD's Project and Contracting Office to cancel some projects, most of which had been planned to start in mid-2005. Additionally, higher than anticipated costs associated with using the large design-build contracts contributed to DOD's decision to directly contract with Iraqi firms. For example, in the electricity sector, high cost estimates by one design-build contractor resulted in the termination of five task orders and the resolicitation of that work. After the task orders were canceled, the design-builder was slow to reduce overhead costs in accordance with the reduced workload, according to agency officials and documents. DOD is now contracting directly with Iraqi firms to reduce the costs of reconstruction efforts that do not require advanced technical and management expertise, such as electrical distribution projects. Similarly, in the transportation sector, the design-build contractor demobilized and left Iraq shortly after award of the contract in March 2004 because DOD and the contractor agreed that the overall program costs were too high.
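Incidentally, the water and sanitation figures cited above are internally consistent; the following one-off computation, a sketch using only the rounded dollar amounts in this testimony, reproduces the reported reduction:

    # Check the cited water and sanitation funding reduction. The inputs are
    # the rounded figures cited above, so the computed percentage differs
    # slightly from the reported "about 44 percent."
    before, after = 4.6e9, 2.6e9  # sector allocations before and after reallocation
    print(f"Reduction: {100 * (before - after) / before:.1f}%")  # Reduction: 43.5%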
In the transportation sector, DOD subsequently made greater use of Iraqi contractors who were experienced in building roads and bridges. Further, the lack of a permissive environment resulted in higher than anticipated security costs, which, in turn, diverted planned reconstruction resources and led to canceling or reducing the scope of certain reconstruction projects. As we reported in July 2005, U.S. civilian agencies and the reconstruction contractors we evaluated generally obtained security services from private security providers. We noted that the use of private security providers reflected, in part, the fact that providing security was not part of the U.S. military's stated mission. We also found, however, that despite the significant role played by private security providers, U.S. agencies generally did not have complete data on the costs associated with their use. In June 2006, we reported that the agencies had agreed to require reconstruction contractors to report all costs for private security supplies and services that the contractor or any subcontractor acquires as necessary for successful contract performance. Agency procurement personnel generally had limited advance warning prior to awarding the initial reconstruction contracts and were uncertain as to the full scope of reconstruction activities that were required. The need to award contracts and begin reconstruction efforts quickly contributed to DOD using business arrangements that potentially increased DOD's risks. Such arrangements included allowing contractors to begin work before agreeing on what needed to be done and at what price and, during the initial stages of reconstruction, awarding contracts that were not awarded under full and open competition. To produce desired outcomes within available funding and required time frames, DOD and its contractors need to clearly understand reconstruction objectives and how they translate into the contract's terms and conditions: the goods or services needed, the level of performance or quality desired, the schedule, and the cost. When requirements were not clear, DOD often entered into contract arrangements that posed additional risks, in particular by authorizing contractors to begin work before key terms and conditions, including the work to be performed and its projected costs, were fully defined. For example, in 2004, we issued two reports that identified a considerable amount of work that was being undertaken in Iraq under undefinitized contract actions. We reported that as of March 2004, about $1.8 billion had been obligated on reconstruction contract actions without DOD and the contractors reaching agreement on the final scope and price of the work. Similarly, we found that as of June 2004, the Army and the contractor had definitized only 13 of the 54 task orders on the LOGCAP contract that required definitization. The lack of definitization contributed to the Army's inability to conduct award fee boards to assess the contractor's performance. In September 2005, we reported that difficulties in defining the cost, schedule, and work to be performed for projects in the water sector contributed to project delays and reduced scopes of work. We reported that DOD had obligated about $873 million on 24 task orders to rebuild Iraq's water and sanitation infrastructure, including municipal water supplies, sewage collection systems, dams, and a major irrigation project.
We found, however, that agreement between the government and the contractors on the final cost, schedule, and scope of 18 of the 24 task orders we reviewed had been delayed. These delays occurred, in part, because Iraqi authorities, U.S. agencies, and contractors could not agree on scopes of work and construction details. For example, at one wastewater project, local officials wanted a certain type of sewer design that increased that project's cost. In September 2006, we issued a report on how DOD addressed issues raised by the Defense Contract Audit Agency in audits of Iraq-related contract costs. In particular, we found that DOD contracting officials were less likely to remove the costs questioned by auditors if the contractor had already incurred those costs before the contract action was definitized. In one case, the Defense Contract Audit Agency questioned $84 million in an audit of a task order proposal for an oil mission. In this case, the contractor did not submit a proposal until a year after the work was authorized, and DOD and the contractor did not negotiate the final terms of the task order until more than a year after the contractor had completed the work. In the final negotiation documentation, the DOD contracting official stated that the payment of incurred costs is required for cost-type contracts, absent unusual circumstances. In contrast, in the few audit reports we reviewed where the government negotiated prior to starting work, we found that the portion of questioned costs removed from the proposal was substantial. The need to award contracts and begin reconstruction efforts quickly, a contributing factor to DOD's use of undefinitized contract actions, also contributed to DOD using other than full and open competition during the initial stages of reconstruction. While full and open competition can be a tool to mitigate acquisition risks, DOD procurement officials had only a relatively short time, often only weeks, to award the first major reconstruction contracts. As a result, these contracts were generally awarded using other than full and open competition. We recently reported that our ability to obtain complete information on DOD reconstruction contract actions was limited because not all DOD components consistently tracked or fully reported this information. Nevertheless, for the data we were able to obtain, consisting of $7 billion, or 82 percent, of DOD's total contract obligations from October 1, 2003, through March 31, 2006, we found that DOD competed the vast majority of these obligations. An unstable contracting environment, in which wants, needs, and contract requirements are in flux, also requires greater attention to oversight, which relies on a capable government workforce. Managing and assessing postaward performance entails various activities to ensure that the delivery of services meets the terms of the contract and requires adequate surveillance resources, proper incentives, and a capable workforce for overseeing contracting activities. If surveillance is not conducted, is not sufficient, or is not well documented, DOD is at risk of being unable to identify and correct poor contractor performance in a timely manner and of potentially paying too much for the services it receives. We and others have reported on the impact that the lack of adequate numbers of properly trained acquisition personnel and high turnover rates have had on reconstruction efforts.
For example, our June 2004 report found that early contract administration challenges were caused, in part, by the lack of a sufficient number of personnel. Our September 2005 report on water and sanitation efforts found that frequent staff turnover affected both the definitization process and the overall pace and cost of reconstruction efforts. The Special Inspector General for Iraq Reconstruction found that one of the CPA's critical shortcomings in personnel was the inadequate link between position requirements and necessary skills. In 2004, an interagency assessment team found that the number of contracting personnel was insufficient to handle the increased workload expected with the influx of fiscal year 2004 funding. In part, the CPA's decision to award seven contracts in early 2004 to help better coordinate and manage the fiscal year 2004 reconstruction efforts recognized this shortfall. As a result, however, DOD is relying on these contractors to help manage and oversee the design-build contractors. DOD's lack of capacity contributed to challenges in using interagency contracting vehicles in Iraq. In certain instances, rather than develop and award its own contracts, DOD used contracts already awarded by other agencies. While this practice may improve efficiency and timeliness, these contracts need to be effectively managed, and their use requires a higher than usual degree of business acumen and flexibility on the part of the workforce. During the initial stages of reconstruction, we and the DOD Inspector General found instances in which DOD improperly used interagency contracts. For example, the Inspector General found that a DOD component circumvented contracting rules when it used the General Services Administration's federal supply schedule to award contracts on behalf of the CPA. The Inspector General cited DOD's failure to plan for the acquisition support the CPA needed to perform its mission as contributing to this condition. Similarly, in April 2005 we reported that a lack of effective management controls, in particular insufficient management oversight and a lack of adequate training, led to breakdowns in the issuance and administration of task orders for interrogation and other services by the Department of the Interior on behalf of DOD. These breakdowns included issuing 10 out of 11 task orders that were beyond the scope of the underlying contracts, in violation of competition rules; not complying with additional DOD competition requirements when issuing task orders for services on existing contracts; not properly justifying the decision to use interagency contracting; not complying with ordering procedures meant to ensure best value for the government; and not adequately monitoring contractor performance. Because officials at Interior and the Army responsible for the orders did not fully carry out their roles and responsibilities, the contractor was allowed to play a role in the procurement process normally performed by the government. Further, the Army officials responsible for overseeing the contractor, for the most part, lacked knowledge of contracting issues and were not aware of their basic duties and responsibilities. In part, problems such as these contributed to our decision to designate management of interagency contracting a high-risk area in January 2005. To improve its capacity to plan and award contracts and manage contractor performance, DOD has merged the Project and Contracting Office with the U.S. Army Corps of Engineers' Gulf Region Division.
Additionally, DOD established the Joint Contracting Command–Iraq to consolidate and prioritize contracting activities and resolve contracting issues, among other things. As noted previously, DOD has also attempted to contract directly with Iraqi firms, rather than rely on the large U.S. design-build contracts that it had awarded in early 2004. Although DOD expects this approach will reduce costs, it will also likely increase the administrative and oversight burden on DOD's workforce. Since the mid-1990s, our reports have highlighted the need for clear and comprehensive guidance for managing and overseeing the use of contractors who support deployed forces. As we reported in December 2006, DOD has not yet fully addressed this long-standing problem. In assessing LOGCAP implementation during the Bosnian peacekeeping mission in 1997, we identified weaknesses in the available doctrine on how to manage contractor resources, including how to integrate contractors with military units and what type of management and oversight structure to establish. We identified similar weaknesses when we began reviewing DOD's use of contractors in Iraq. For example, in 2003 we reported that guidance and other oversight mechanisms varied widely at the DOD, combatant command, and service levels, making it difficult to manage contractors effectively. Similarly, in our 2005 report on private security contractors in Iraq, we noted that DOD had not issued any guidance to units deploying to Iraq on how to work with or coordinate efforts with private security contractors. Further, we noted that the military may not have a clear understanding of the role of contractors, including private security providers, in Iraq and of the implications of having private security providers on the battle space. Our prior work has shown that it is important for organizations to provide clear and complete guidance to those involved in program implementation. In our view, establishing baseline policies for managing and overseeing contractors would help ensure the efficient use of contractors in places such as Iraq. DOD took a noteworthy step to address some of these issues when it issued new guidance in 2005 on the use of contractors who support deployed forces. However, as our December 2006 report made clear, DOD's guidance does not address a number of problems we have repeatedly raised, such as the need to provide adequate contract oversight personnel, to collect and share lessons learned on the use of contractors supporting deployed forces, and to provide DOD commanders and contract oversight personnel with training on the use of contractors overseas prior to their deployment. In addition to identifying the lack of clear and comprehensive guidance for managing contractor personnel, we have issued several reports highlighting the need for DOD components to comply with departmental guidance on the use of contractors. For example, in our June 2003 report we noted that DOD components were not complying with a long-standing requirement to identify essential services provided by contractors and to develop backup plans to ensure that those services continue during contingency operations should contractors become unavailable to provide them. We believe that risk is inherent when relying on contractors to support deployed forces, and without a clear understanding of the potential consequences of not having an essential service available, the risks associated with the mission increase.
In other reports, we highlighted our concerns over DOD's planning for the use of contractor support in Iraq, including the need to comply with guidance to identify operational requirements early in the planning process. When contractors are involved in planning efforts early and are given adequate time to plan and prepare to accomplish their assigned missions, the quality of the contractor's services improves and contract costs may be lowered. DOD's October 2005 guidance on the use of contractor support to deployed forces went a long way toward consolidating existing policy and providing guidance on a wide range of contractor issues. However, as of December 2006, we found little evidence that DOD components were implementing that guidance, in part because no individual within DOD was responsible for reviewing DOD and service efforts to ensure the guidance was being consistently implemented. We have made a number of recommendations for DOD to take steps to establish clear leadership and accountability for contractor support issues. For example, in our 2005 report on LOGCAP we recommended that DOD designate a LOGCAP coordinator with the authority to participate in deliberations and advocate for the most effective and efficient use of the LOGCAP contract. Similarly, in our comprehensive review of contractors on the battlefield in 2006, we recommended that DOD appoint a focal point within the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics, at a sufficiently senior level and with the appropriate resources, dedicated to leading DOD's efforts to improve its contract management and oversight. DOD agreed with these recommendations. In October 2006, DOD established the office of the Assistant Deputy Under Secretary of Defense for Program Support to serve as the office of primary responsibility for contractor support issues, but the office's specific roles and responsibilities have not yet been clearly defined. DOD continues to lack the capability to provide senior leaders and military commanders with complete information on support provided by contractors to deployed forces. Without such visibility, senior leaders and military commanders cannot develop a complete picture of the extent to which they rely on contractors to support their operations. We first reported the need for better visibility in 2002 during a review of the costs associated with U.S. operations in the Balkans. At that time, we reported that DOD was unaware of (1) the number of contractors operating in the Balkans, (2) the tasks those contractors were contracted to do, and (3) the government's obligations to those contractors under the contracts. We noted a similar situation in 2003 in our report on DOD's use of contractors to support deployed forces in Southwest Asia and Kosovo. At that time, we reported that although most contract oversight personnel had visibility over the individual contracts for which they were directly responsible, visibility over all contractor support at a specific location was practically nonexistent at the combatant commands, component commands, and deployed locations we visited. As a result, commanders at deployed locations had limited visibility and understanding of all contractor activity supporting their operations and frequently had no easy way to get answers to questions about contractor support.
This lack of visibility inhibited the ability of commanders to resolve issues associated with contractor support, such as force protection and the provision of support to contractor personnel. Moreover, in our December 2006 review of DOD's use of contractors in Iraq, we found that DOD's continuing problems with limited visibility over contractors in Iraq unnecessarily increased contracting costs to the government and introduced unnecessary risk. Without visibility over where contractors are deployed and what government support they are entitled to, costs to the government may increase. For example, at a contractor accountability task force meeting we attended, an Army Materiel Command official noted an Army estimate that about $43 million is lost each year on free meals provided to contractor employees at deployed locations who also receive a per diem food allowance. Also, when senior military leaders began to develop a base consolidation plan, officials were unable to determine how many contractors were deployed and therefore ran the risk of over- or under-building the capacity of the consolidated bases. DOD's October 2005 guidance on contractor support to deployed forces included a requirement that the department develop or designate a joint database to maintain by-name accountability of contractors deploying with the force and a summary of the services or capabilities they provide. The Army has taken the lead in this effort, and recently DOD designated a database intended to provide improved visibility over contractors deployed to support the military in Iraq, Afghanistan, and elsewhere. As I previously noted, having the capacity to manage and assess contractor performance is a critical factor in promoting successful outcomes, yet as we reported in December 2006, DOD does not have sufficient numbers of trained contractor management and oversight personnel at deployed locations. Such personnel include not only the contracting officers who award contracts, but also the personnel who define the requirements, receive or benefit from the services obtained, and monitor contractor performance. The lack of an adequate number of trained personnel limits DOD's ability to obtain reasonable assurance that contractors are meeting contract requirements efficiently and effectively. Several contract oversight personnel stated that DOD does not have adequate personnel at deployed locations to effectively oversee and manage contractors. For example, an Army official acknowledged that the Army is struggling to find the capacity and expertise to provide the contracting support needed in Iraq. In addition, officials responsible for contracting with the Multinational Force-Iraq stated that they did not have enough contract oversight personnel and quality assurance representatives to allow the organization to reduce the Army's use of the LOGCAP contract by awarding more sustainment contracts for base operations support in Iraq. Similarly, a LOGCAP program official noted that if adequate staffing had been in place, the Army could have realized substantial savings on the LOGCAP contract through more effective reviews of new requirements. Finally, the contracting officer's representative for an intelligence support contract in Iraq stated that he was unable to visit all of the locations he was responsible for overseeing.
The inability of contract oversight personnel to visit all the locations they are responsible for can create problems for units that face difficulties resolving contractor performance issues at those locations. For example, officials from a brigade support battalion stated that they had several concerns with the performance of a contractor that provided maintenance for the brigade's mine-clearing equipment. These concerns included delays in obtaining spare parts and a disagreement over the contractor's obligation to provide support in more austere locations in Iraq. According to the officials, their efforts to resolve these problems in a timely manner were hindered because the contracting officer's representative was located in Baghdad while the unit was stationed in western Iraq. In other instances, some contract oversight personnel may not even reside within the theater of operations. For example, we found that the Defense Contract Management Agency's (DCMA) legal personnel responsible for LOGCAP in Iraq were stationed in Germany, while other LOGCAP contract oversight personnel were stationed in the United States. According to a senior DCMA official in Iraq, relying on support from contract oversight personnel outside the theater of operations makes resolving contractor performance issues more difficult for military commanders in Iraq, who are operating under the demands and higher operational tempo of a contingency operation in a deployed location. Since the mid-1990s, our work has also shown the need for better predeployment training for military commanders and contract oversight personnel on the use of contractor support. Training is essential for military commanders because of their responsibility for identifying and validating requirements to be addressed by the contractor. In addition, commanders are responsible for evaluating the contractor's performance and ensuring the contract is performed in an economic and efficient manner. Similarly, training is essential for DOD contract oversight personnel who monitor the contractor's performance for the contracting officer. As we reported in 2003, military commanders and contract management and oversight personnel we met in the Balkans and throughout Southwest Asia frequently cited the need for better preparatory training. Additionally, in our 2004 review, we reported that many individuals using support contracts such as LOGCAP were unaware that they had any contract management or oversight roles. Army customers stated that they knew nothing about LOGCAP before their deployment and that they had received no predeployment training regarding their roles and responsibilities in ensuring that the contract was used economically and efficiently. In 2005, we reported that military units did not receive specific predeployment training or guidance about working with private security providers. In our December 2006 report, we also noted that many officials responsible for contract management and oversight in Iraq told us they received little or no training on the use of contractors prior to their deployment, which led to confusion over their roles and responsibilities. For example, in several instances, military commanders attempted to direct, or ran the risk of directing, a contractor to perform work outside the scope of the contract, even though commanders are not authorized to do so. Such cases can result in increased costs to the government. Over the years, we have made several recommendations to DOD intended to strengthen this training.
Some of our recommendations were aimed at improving the training of military personnel on the use of contractor support at deployed locations, while others focused on training regarding specific contracts, such as LOGCAP, or the role of private security providers. Our recommendations have sought to ensure that military personnel deploying overseas have a clear understanding of the role of contractors and the support the military provides to them. DOD has agreed with most of our recommendations. However, we continue to find little evidence that DOD has improved training for military personnel on the use of contractors prior to their deployment. The security situation continues to deteriorate, impeding the management and execution of reconstruction efforts. To improve this condition, the United States is, among other things, (1) training and equipping Iraqi security forces that will be capable of leading counterinsurgency operations, and (2) transferring security responsibilities to Iraqi forces and the Iraqi government as capabilities improve. Although progress has been made in transferring more responsibilities to the Iraqi security forces, the capabilities of individual units are uncertain. Since the fall of 2003, the U.S.-led multinational force in Iraq has developed and refined a series of plans to transfer security responsibilities to the Iraqi government and security forces, with the intent of creating conditions that would allow a gradual drawdown of the 140,000 U.S. military personnel in Iraq. This security transition was to occur first in conjunction with the neutralization of Iraq's insurgency and second with the development of Iraqi forces and government institutions capable of securing their country. DOD and the State Department have reported progress in implementing the current security transition plan. For example, the State Department has reported that the number of trained and equipped Iraqi army and police forces increased from about 174,000 in July 2005 to about 323,000 in December 2006. DOD and the State Department also have reported progress in transferring security responsibilities to Iraqi army units and provincial governments. For example, the number of Iraqi army battalions in the lead for counterinsurgency operations increased from 21 in March 2005 to 89 in October 2006. In addition, 7 Iraqi army division headquarters and 30 brigade headquarters had assumed the lead by December 2006. Moreover, by mid-December 2006, three provincial governments—Muthanna, Dhi Qar, and Najaf—had taken over security responsibilities for their provinces. The reported progress in transferring security responsibilities to Iraq, however, has not led to improved security conditions. Since June 2003, overall security conditions in Iraq have deteriorated and grown more complex, as evidenced by the increased numbers of attacks and the more recent Sunni-Shi'a sectarian strife that followed the February 2006 bombing of the Golden Mosque in Samarra (see figure 2). Enemy-initiated attacks against the coalition and its Iraqi partners continued to increase during 2006. For example, average daily attacks increased from about 70 in January 2006 to about 180 in October 2006. In December 2006, attacks averaged about 160 per day. These attacks have increased around major religious and political events, such as Ramadan and elections.
Coalition forces are still the primary target of attacks, but the number of attacks on Iraqi security forces and civilians also has increased since 2003. In October 2006, the State Department reported that the recent increase in violence has hindered efforts to engage with Iraqi partners and shows the difficulty in making political and economic progress in the absence of adequate security conditions. Further, because of the level of violence in Iraq, the United States has not been able to draw down the number of U.S. forces in Iraq as early as planned. For example, after the increase in violence and collapse of Iraqi security forces during the spring of 2004, DOD decided to maintain a force level of about 138,000 troops until at least the end of 2005, rather than reducing the number of troops to 105,000 by May 2004, as had been announced the prior fall. Subsequently, DOD reversed a decision, made during the spring of 2006, to significantly reduce the U.S. force level because Iraqi and coalition forces could not contain the rapidly escalating violence that occurred the following summer. Moreover, rather than moving out of urban areas, U.S. forces have continued to conduct combat operations in Baghdad and other cities in Iraq, often in conjunction with Iraqi security forces. As you know, DOD is in the process of providing additional forces to help stem violence in Iraq. Understanding the true capabilities of the Iraqi security forces is essential for the Congress to make fully informed decisions in connection with its authorization, appropriations, and oversight responsibilities. DOD and State provide Congress with weekly and quarterly reports on the progress made in developing capable Iraqi security forces and transferring security responsibilities to the Iraqi army and the Iraqi government. This information is provided in two key areas: (1) the number of trained and equipped forces, and (2) the number of Iraqi army units and provincial governments that have assumed responsibility for security of specific geographic areas. The aggregate nature of these reports, however, does not provide comprehensive information on the capabilities and needs of individual units. This information is found in unit-level Transition Readiness Assessment (TRA) reports. The TRA is a joint assessment, prepared monthly by the unit's coalition commander and Iraqi commander. According to Multinational Force-Iraq guidance, the purpose of the TRA system is to provide commanders with a method to consistently evaluate units; it also helps to identify factors hindering unit progress, determine resource shortfalls, and make resource allocation decisions. These reports provide the coalition commander's professional judgment on an Iraqi unit's capabilities and are based on ratings in personnel, command and control, equipment, sustainment and logistics, training, and leadership. These reports also serve as the basis for the Multinational Force-Iraq's determination of when a unit is capable of leading counterinsurgency operations and can assume security responsibilities for a specific area. DOD provided GAO with classified, aggregate information on overall readiness levels for the Iraqi security forces—including an executive-level brief—and information on units in the lead, but has not provided unit-level reports on Iraqi forces' capabilities. GAO has made multiple requests for access to the unit-level TRA reports since January 2006.
Nevertheless, as of last week, DOD still had not provided GAO unit-level TRA data, thereby limiting congressional oversight of the progress achieved toward a critical objective. While the United States has spent billions of dollars rebuilding the infrastructure and developing Iraqi security forces, U.S. and World Bank assessments have found that the Iraqi government's ability to sustain and maintain reconstruction efforts is hindered by several factors, including a lack of capacity in Iraq's key ministries, widespread corruption, and the Iraqi government's inability to spend its 2006 capital budget for key infrastructure projects. The United States has invested about $14 billion to restore essential services by repairing oil facilities, increasing electricity generating capacity, and restoring water treatment plants. For example, the U.S. Army Corps of Engineers reported that it had completed 293 of 523 planned electrical projects, including the installation of 35 natural gas turbines in Iraqi power generation plants. Additionally, reconstruction efforts have rebuilt or renovated schools, hospitals, border forts, post offices, and railway stations. Despite these efforts, a considerable amount of planned reconstruction work is not yet completed. DOD estimated that, as of October 8, 2006, about 29 percent of the planned work remained to be completed, including some work that will not be completed until mid- to late 2008. The Iraqi government has had difficulty operating and sustaining the aging oil infrastructure and maintaining the new and rehabilitated power generation facilities. For example, Iraq's oil production and exports have consistently fallen below their respective program goals. In 2006, oil production averaged 2.1 million barrels per day, compared with the U.S. goal of 3.0 million barrels per day. The Ministry of Oil has had difficulty operating and maintaining the refineries. According to U.S. officials, Iraq lacks qualified staff and expertise at the field, plant, and ministry levels, as well as an effective inventory control system for spare parts. According to the State Department, the Ministry of Oil will have difficulty maintaining future production levels unless it initiates an ambitious rehabilitation program. In addition, oil smuggling and theft of refined oil products have cost Iraq substantial resources. In 2006, electrical output reached 4,317 megawatts of peak generation per day, falling short of the U.S. goal of 6,000 megawatts. Prewar electrical output averaged 4,200 megawatts per day. Production also was outpaced by increasing demand, which has averaged about 8,210 megawatts per day. The Iraqi government has had difficulty sustaining the existing facilities. Problems include the lack of training, inadequate spare parts, and an ineffective asset management and parts inventory system. Moreover, plants are sometimes operated beyond their recommended limits, resulting in longer downtimes for maintenance. In addition, major transmission lines have been repeatedly sabotaged, and repair workers have been intimidated by anti-Iraqi forces. In part, these shortfalls can be traced to the lack of capacity within Iraq's central government ministries. Iraqi government institutions are undeveloped and confront significant challenges in staffing a competent, nonaligned civil service; using modern technology and managing resources effectively; and effectively fighting corruption. According to U.S.
and World Bank assessments, ministry personnel are frequently selected on the basis of political affiliation rather than competence or skills, and some ministries are under the authority of political parties hostile to the U.S. government. The Iraqi ministries also lack adequate technology and have difficulty managing their resources and personnel. For example, the World Bank reports that the Iraqi government pays salaries to nonexistent, or ghost, employees; these salaries are then collected by other officials. According to U.S. officials, 20 to 30 percent of the Ministry of Interior staff are ghost employees. Further, corruption in Iraq is reportedly widespread, poses a major challenge to building an effective Iraqi government, and could jeopardize future flows of needed international assistance. For example, a World Bank report notes that corruption undermines the government's ability to make effective use of current reconstruction assistance. A 2006 survey by Transparency International ranked Iraq's government as the second most corrupt government in the world. Moreover, between January 2005 and August 2006, 56 officials in Iraq's ministries were either convicted of corruption charges or subject to arrest warrants. According to U.S. government and World Bank reports, the reasons for corruption in the Iraqi ministries are several, including the following: the absence of an effective Iraqi banking system leaves the government dependent on cash transactions; the majority of key Iraqi ministries have inadequately transparent, obsolete, or ambiguous procurement systems; and key accountability institutions, such as the inspectors general who were installed in each Iraqi ministry in 2004, lack the resources and independence to operate effectively and consistently. Corruption is also pervasive in the oil sector, a critical source of revenue for the Iraqi government. In 2006, the World Bank and the Ministry of Oil's Inspector General estimated that millions of dollars of government revenue are lost each year to oil smuggling or diversion of refined products. According to State Department officials and reports, about 10 percent to 30 percent of refined fuels are diverted to the black market or are smuggled out of Iraq and sold for a profit. According to U.S. embassy documents, the insurgency has been partly funded by corrupt activities within Iraq and by profits skimmed from black marketers. In addition, Iraq lacks fully functioning meters to measure oil production and exports, precluding control over the distribution and sale of crude and refined products. Sound government budgeting practices can help determine the priorities of the new government, provide transparency on government operations, and help decision makers weigh competing demands for limited resources. However, unclear budgeting and procurement rules have affected Iraq's efforts to spend capital budgets effectively and efficiently, according to U.S. officials. The inability to spend the funds raises serious questions for the government, which has to demonstrate to skeptical citizens that it can improve basic services and make a difference in their daily lives. The U.S. government has launched a series of initiatives in conjunction with other donors to address this issue and improve the Iraqi government's budget execution.
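Budget execution, as used in this statement, is a simple ratio: the share of budgeted funds actually spent during the period. Purely as an illustration of that arithmetic, the sketch below computes the ratio in Python; the $3.5 billion Ministry of Oil budget figure appears later in this statement, while the amount spent is hypothetical, chosen only to be consistent with the reported rate of less than 1 percent.

    # Illustrative only: a budget execution rate is the share of budgeted
    # funds actually spent. The $3.5 billion budget figure is cited in this
    # statement; the "spent" amount below is hypothetical, chosen only to be
    # consistent with the reported rate of less than 1 percent.

    def execution_rate(spent: float, budgeted: float) -> float:
        """Return the percent of budgeted funds actually spent."""
        return 100.0 * spent / budgeted

    oil_budgeted = 3.5e9   # Ministry of Oil 2006 capital projects budget
    oil_spent = 30.0e6     # hypothetical amount spent as of August 2006

    print(f"{execution_rate(oil_spent, oil_budgeted):.2f} percent")  # 0.86 percent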
When the Iraqi government assumed control over its finances in 2004, it became responsible for determining how more than $25 billion annually in government revenues would be collected and spent to rebuild the country and operate the government. As of August 2006, the government of Iraq had spent, on average, 14 percent of its 2006 capital projects budget (Iraq's fiscal year begins on January 1 of each year). Some of the lowest rates of spending occur at the Ministry of Oil, which relies on damaged and outdated infrastructure to produce the oil that provides nearly all of the country's revenues (see table 1). Since most of the $34.5 billion in reconstruction funds provided between fiscal years 2003 and 2006 have been obligated, unexpended Iraqi funds represent an important source of additional financing. The capital goods budgets of the Interior and Defense ministries were intended for the purchase of weapons, ammunition, and vehicles, among other items. However, as of August 2006, Interior and Defense had spent only about 11 percent and 1 percent, respectively, of these budgeted funds. Further, according to U.S. and foreign officials, the ability of the Iraqi government to fund improvements in its oil and electricity sectors remains uncertain. For example, the Ministry of Oil has had difficulty operating and maintaining its aging infrastructure, including some refineries originally constructed in the 1950s, 1960s, and 1970s. While the Ministry of Oil's $3.5 billion 2006 capital projects budget targeted key enhancements to the country's oil production, distribution, and export facilities, as of August 2006, the ministry had spent less than 1 percent of these budgeted funds. Similarly, Iraq's electricity sector suffers from deteriorated, outdated, and inefficient infrastructure resulting from two decades of underinvestment in operations and maintenance, replacement, and expansion. This weakened infrastructure has led to unplanned outages. Despite the Ministry of Electricity's recent development of a 10-year master plan, Iraq's ability to fund improvements in its electricity sector remains uncertain. This uncertainty is due to low electricity tariffs, uncertain donor commitments, and, according to a World Bank assessment, an inadequate legal and regulatory framework. As I have discussed today, there are a number of conditions in Iraq that have led to, or will lead to, increased risk of fraud, waste, and abuse of U.S. funds. DOD's extensive reliance on contractors to undertake reconstruction projects and provide support to deployed forces requires DOD to address long-standing challenges in an aggressive, effective manner. This reliance raises a broader question as to whether DOD has become too dependent on contractors to provide essential services without clearly identifying roles and responsibilities, and employing appropriate oversight and accountability mechanisms. Continuing reconstruction progress will require overall improvement in the security situation in Iraq. To do so, Iraqi security forces and provincial governments must be in a position to take responsibility for the security of their nation. At this time, their capacity to do so is questionable. Furthermore, the U.S.
and the international community will need to support the Iraqi government’s efforts to enhance its capacity to govern effectively and efficiently if it is to make a positive difference in the daily lives of the Iraqi people. Mr. Chairman, this concludes my statement. I would be pleased to answer any questions that you or other members may have at this time. For questions regarding this testimony, please call Katherine V. Schinasi at (202) 512-4841. Other contributors to this statement were Ridge Bowman, Daniel Chen, Joseph Christoff, Carole Coffey, Lynn Cothern, Timothy DiNapoli, Whitney Havens, John Hutton, John Krump, Steve Lord, Steve Marchesani, Tet Miyabara, Judy McCloskey, Mary Moutsos, Ken Patton, Jim Reynolds, and William Solis. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Department of Defense (DOD) has relied extensively on contractors to undertake major reconstruction projects and provide support to its deployed forces, but these efforts have not always achieved desired outcomes. Further, the Iraqi government must be able to reduce violence, sustain reconstruction progress, improve basic services, and make a positive difference in the daily lives of the Iraqi people. This statement discusses (1) factors affecting DOD's ability to promote successful acquisition outcomes on its contracts for reconstruction and for support to deployed forces in Iraq, (2) the deteriorating security situation and the capabilities of the Iraqi security forces, and (3) issues affecting the Iraqi government's ability to support and sustain future reconstruction progress. The testimony is based upon our work on Iraq reconstruction and stabilization efforts, DOD contracting activities, and DOD's use of support contractors spanning several years. This work was conducted in accordance with generally accepted government auditing standards. The challenges faced by DOD on its reconstruction and support contracts often reflect systemic and long-standing shortcomings in DOD's capacity to manage contractor efforts. Such shortcomings result from poorly defined or changing requirements, the use of poor business arrangements, the absence of senior leadership and guidance, and an insufficient number of trained contracting, acquisition, and other personnel to manage, assess, and oversee contractor performance. In turn, these shortcomings manifest themselves in higher costs to taxpayers, schedule delays, unmet objectives, and other undesirable outcomes. For example, because DOD authorized contractors to begin work before reaching agreement on the scope and price of that work, DOD paid millions of dollars in costs that were questioned by the Defense Contract Audit Agency. Similarly, DOD lacks visibility over the extent to which it relies on contractors to support its operations. When senior military leaders began to develop a base consolidation plan, officials were unable to determine how many contractors were deployed and therefore ran the risk of over- or under-building the capacity of the consolidated bases. U.S. reconstruction efforts also continue to be hampered by a deteriorating security situation. Although the number of trained and equipped Iraqi security forces increased to about 323,000 in December 2006 and more Iraqi army units have taken the lead for counterinsurgency operations, attacks on coalition and Iraqi security forces and civilians have all increased. Aggregate numbers of trained and equipped Iraqi forces, however, do not provide information on the capabilities and needs of individual units. GAO has made repeated attempts to obtain unit-level Transition Readiness Assessments (TRAs) without success. This information is essential for the Congress to make fully informed decisions in connection with its authorization, appropriations, and oversight responsibilities. As the U.S. attempts to turn over its reconstruction efforts, the capacity of the Iraqi government to continue overall reconstruction progress is undermined by shortfalls in the capacity of the Iraqi ministries, widespread corruption, and the inability to fund and execute projects for which funds were previously budgeted.
Iraqi government institutions are undeveloped and confront significant challenges in staffing a competent, nonaligned civil service; using modern technology; and managing resources and personnel effectively. For example, according to U.S. officials, 20 to 30 percent of the Ministry of Interior staff are "ghost employees" whose salaries are collected by other officials. Further, corruption in Iraq poses a major challenge to building an effective Iraqi government and could jeopardize future flows of needed international assistance. Unclear budgeting and procurement rules have affected Iraq's efforts to spend capital budgets effectively and efficiently, according to U.S. officials. At the Ministry of Oil, for example, less than 1 percent of the $3.5 billion budgeted in 2006 for key enhancements to the country's oil production, distribution, and export facilities had been spent as of August 2006.
Before I go into detail regarding the Department of Education's Year 2000 challenges, I would like to first discuss the Year 2000 issue in broader terms to put the department's efforts into perspective. As the world's most advanced and most dependent user of information technology, the United States possesses close to half of all computer capacity and 60 percent of Internet assets. Consequently, the upcoming change of century is a sweeping and urgent challenge for public-sector and private-sector organizations alike. For this reason we have designated the Year 2000 computing problem as a high-risk area for the federal government, and have published guidance to help organizations successfully address the issue. To date, we have issued over 60 reports and testimony statements detailing specific findings and recommendations related to the Year 2000 readiness of a wide range of federal agencies. Our reviews of federal Year 2000 programs have found uneven progress, and our reports contain numerous recommendations, which the agencies have almost universally agreed to implement. Among them are the need to establish priorities, solidify data exchange agreements, and develop contingency plans. While progress has been made in addressing the federal government's Year 2000 readiness, serious vulnerabilities remain and many agencies are behind schedule. The Department of Education's mission is to ensure equal access to education and to promote educational excellence throughout the nation. To carry out this mission, it works with states, schools, communities, institutions of higher education, and financial institutions by providing grants to education agencies and institutions to strengthen teaching and learning; student loans and grants to help pay the costs of postsecondary education; grants for literacy, employment, and self-sufficiency training for adults; enforcement of civil rights laws to ensure nondiscrimination by recipients of federal education funds; and support for research, development, evaluation, and dissemination of information to improve educational quality and effectiveness. The largest single federal elementary and secondary education grant program is title I of the Elementary and Secondary Education Act. This program serves educationally disadvantaged children through program-specific grants. The fiscal year 1997 appropriation for the disadvantaged was $7.3 billion. Student financial aid programs are administered by Education's Office of Postsecondary Education (OPE) under title IV of the Higher Education Act of 1965, as amended. The department is responsible for the collection of more than $150 billion in outstanding loans, and its data systems track approximately 93 million student loans and 15 million grants. Four major types of student aid are currently in use: the Federal Family Education Loan Program (FFELP), the Federal Direct Loan Program (FDLP), the Federal Pell Grant Program, and campus-based programs. These programs together will make available about $51 billion to about 9 million students during the 1999-2000 academic year. FFELP and FDLP are the two largest postsecondary student loan programs, and Pell is the largest postsecondary grant program. FFELP provides student loans, through private lending institutions, that are guaranteed against default by some 36 guaranty agencies and insured by the federal government, while FDLP provides student loans directly from the federal government. Pell provides grants to disadvantaged students.
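The Year 2000 problem that threatens these programs reduces, at bottom, to date arithmetic on two-digit year fields. The following sketch is ours and is not drawn from any Education system; it illustrates, under that assumption, how software that stores years in two digits miscomputes any interval that spans the century boundary, and how the common "windowing" remediation compensates.

    # Hypothetical illustration of the two-digit-year problem; not based on
    # any actual Department of Education system.

    def loan_age_naive(origination_yy: int, current_yy: int) -> int:
        # Naive arithmetic on two-digit years fails across the century boundary.
        return current_yy - origination_yy

    def loan_age_windowed(origination_yy: int, current_yy: int, pivot: int = 50) -> int:
        # Windowing: interpret years below the pivot as 20xx, others as 19xx.
        def expand(yy: int) -> int:
            return 2000 + yy if yy < pivot else 1900 + yy
        return expand(current_yy) - expand(origination_yy)

    # A loan originated in 1985 ("85"), evaluated in 2000 ("00"):
    print(loan_age_naive(85, 0))     # -85: nonsense that could corrupt a record
    print(loan_age_windowed(85, 0))  # 15: the intended result

Windowing of this kind is a stopgap rather than a cure; it merely moves the ambiguity to the chosen pivot year, which is one reason remediated systems still require the extensive testing discussed below.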
In many ways, Education's student financial aid delivery system is similar to functions performed in the banking industry, such as making loans, reporting account status, and collecting payments. As with the banks, the department faces a serious and complex challenge with the Year 2000 problem because of its heavy reliance on technology. The department currently maintains 11 major systems for administering student financial aid programs. These systems were developed independently over time by multiple contractors in response to new functions, programs, or mandates, resulting in a complex, highly heterogeneous systems environment. The systems range from legacy mainframes, several originally developed over 15 years ago, to recently developed client-server environments. The fiscal year 1998 budget to develop, operate, and maintain these systems is $311 million, and is expected to increase to $378 million in fiscal year 1999. According to Education's own assessments of the severity of Year 2000 failures, the student financial aid delivery process could experience major problems unless all systems are compliant in time. These include delays in disbursements if external data exchanges fail, such that lenders might not receive timely interest and allowance payments; reduction in the department's ability to transfer payments, process applications for program benefits, or monitor program operations; risks that student financial aid programs may not function properly if they do not receive critical data for originating loans and for reporting payments and financial information; and risks that postsecondary education students may lack the ability to verify the current status of their loans or grants. To overcome these types of risks, Education must implement an effective Year 2000 program. An effective Year 2000 program requires the disciplined, coordinated application of scarce resources to an agencywide system conversion that must be completed by a fixed date, and an understanding of the wide range of dependencies among information systems. An organization can mitigate its risk of Year 2000 complications through a structured approach and rigorous program management. One generally accepted approach, presented in our Year 2000 Computing Crisis: An Assessment Guide (GAO/AIMD-10.1.14), has five phases: awareness — defining the problem and gaining executive-level support; assessment — inventorying and analyzing systems, and prioritizing their conversion or replacement; renovation — converting, replacing, or eliminating selected systems; validation — ensuring that all converted or replaced systems and interfaces will work in an operational environment; and implementation — deploying Year 2000-compliant systems and components, and implementing contingency plans, if necessary. Education was very slow to establish a comprehensive, timely Year 2000 program. One key factor contributing to this delay was the instability of the department's Year 2000 project manager position, which suffered continual turnover. The department initially established the position in February 1995 to provide oversight and guidance to Education's Year 2000 activities. During the 19-month tenure of the first project manager, a high-level Year 2000 briefing document (dated May 1996) was developed in response to a request from a congressional committee. At that time, Education estimated that it would complete a Year 2000 program strategy document and corresponding management plan by August 1996.
However, no strategy document or management plan was developed, either by that deadline or during the 14-month tenure of the second project manager, which ended in September 1997. The third Year 2000 project manager, who spent only 3 months in the position, contracted with consultants to assist the department in developing a draft Year 2000 management plan. The fourth manager, during his 4-month tenure, initiated many awareness and assessment activities. Finally, the fifth project manager was assigned on March 30 of this year, and has continued the progress started by her predecessor. The frequency of turnover in project managers delayed Education in completing basic awareness activities. These activities included dedicating staff to the Year 2000 effort, communicating with data exchange partners, holding regular steering committee meetings, and developing a management plan. Project office staff to help the department coordinate Year 2000 activities were not assigned until December 1997, when the fourth project manager was appointed. In addition, while a draft management plan was distributed for comment in mid-November 1997, it was not made final until April of this year. Education experienced a corresponding delay with basic assessment activities, which it did not report as completed until this past March—about 9 months after the Office of Management and Budget (OMB) milestone. These assessment activities included conducting an enterprisewide inventory of information systems and data interfaces, assessing and prioritizing systems, establishing Year 2000 project teams for business areas and major systems, and initiating contingency planning. Concurrent with its slowness in completing its assessment, Education’s estimated costs have fluctuated widely. The initial cost estimate in May 1996 was $60 million, which decreased dramatically to $7 million in February 1997 but then rose again in February 1998 with an estimate of $23 million. Last month, the cost estimate increased further, to $38 million, as Education continued renovating and testing its systems. Prior to February 1998, little documentation existed supporting how these estimates were derived. Figure 1 highlights the wide variation in cost estimates over the past 2 years. With its slow start, Education has been playing catch up and working to accelerate its progress. Management staff have regular meetings to discuss progress on Year 2000 compliance, and principal office staff meet biweekly to discuss progress on individual mission-critical systems. The biweekly meetings include Education staff responsible for the particular system; the system contractor; Year 2000 program office staff; and contractor staff responsible for independently verifying and validating renovation, validation, and implementation activities. According to department officials, Year 2000 compliance has also now been given top priority in terms of in-house resources. Education’s Year 2000 management plan established the Year 2000 project manager as the focal point for monitoring progress, providing support, and directing the plan. The project manager works with the program offices, which are responsible for assessing and renovating their systems, as well as tracking and reporting progress on compliance activities. Senior leadership of each program office is responsible for providing adequate support to its Year 2000 tasks and ensuring that Year 2000 compliance is achieved. 
The Department of Education has reported to OMB that it has 14 mission-critical systems, of which 11 are student financial aid systems. Table 1 summarizes the Year 2000 status of each mission-critical system as of this month. In brief, according to the department’s September 10, 1998, report, four mission-critical systems have been implemented and are in operation, one is in the process of being implemented, five systems are being validated, and the remaining four are still being renovated. While there has been recent progress, the Department of Education must mitigate critical risks that affect its ability to award and track billions of dollars in student financial aid. Specifically, the department must address the need for adequate testing, the renovation and testing of data exchanges, and the development of business continuity and contingency plans. Unless these issues are effectively addressed, the ability of the department to deliver financial aid to students will be compromised. Complete and thorough Year 2000 testing is essential to providing reasonable assurance that new or modified systems process dates correctly and will not jeopardize an organization’s ability to perform core business operations after the turn of the century. Moreover, since the Year 2000 computing problem is so pervasive, the requisite testing is generally extensive and expensive. Experience shows that Year 2000 testing is consuming between 50 and 70 percent of a project’s time and resources. Agencies must not only test Year 2000 compliance of individual applications, but also the complex interactions among numerous converted or replaced computer platforms, operating systems, utilities, applications, databases, and interfaces. It is also important to work early and continually with an organization’s data exchange partners so that end-to-end testing can be effectively planned and executed. The Society for Information Management Year 2000 Working Group has noted that because many enterprises do not have experience with testing at this order of magnitude, the results will often be significant cost overruns and missed commitments. Indeed, for Education, the task ahead is formidable—it requires a cooperative, coordinated, and thorough testing process across the disparate systems in the student financial aid delivery network. Because of Education’s late start and the compression of its Year 2000 compliance schedule to meet the OMB deadline (mission-critical systems to be implemented by March 31, 1999), time available for key testing activities within the renovation, validation, and implementation phases for individual mission-critical systems is limited. In fact, in some cases, the schedule for Education’s mission-critical systems has less time allocated for the renovation and validation phases than was spent on assessment. These are large, often complex systems encompassing hundreds of thousands and, in some cases, millions of lines of software code. Accordingly, the limited amount of time available raises concerns about Education’s ability to complete essential testing in time. Department officials have acknowledged that completing testing activities within schedule will be difficult. Indeed, the schedule constraints placed on test activities for individual systems have already been shown to be unrealistic in several cases. For example, the schedule for 7 of the 14 mission-critical systems has recently been extended to allow more time for testing. 
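One reason Year 2000 testing consumes so much of a project's time is the number of boundary conditions that must be exercised in every system. The sketch below is a hypothetical test harness of our own devising, not Education's actual test plan; it shows the kinds of rollover dates such testing typically covers, with a trivial stand-in for the logic under test.

    # Illustrative only: representative boundary dates a Year 2000 test suite
    # would exercise; this is not Education's actual test plan.
    from datetime import date, timedelta

    def next_day(d: date) -> date:
        # Stand-in for the date logic of a system under test.
        return d + timedelta(days=1)

    boundary_cases = [
        (date(1999, 12, 31), date(2000, 1, 1)),  # century rollover
        (date(2000, 2, 28), date(2000, 2, 29)),  # 2000 is a leap year (divisible by 400)
        (date(2000, 2, 29), date(2000, 3, 1)),   # leap-day rollover
        (date(2000, 12, 31), date(2001, 1, 1)),  # day 366 catches hard-coded 365-day years
    ]

    for before, expected in boundary_cases:
        result = next_day(before)
        assert result == expected, f"{before} rolled to {result}, expected {expected}"
        print(f"{before} -> {result}: ok")

In a real remediation effort, cases like these would be run against each renovated system and again during end-to-end testing, which is the subject of the next section.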
Beyond the testing of individual mission-critical systems, Education will also have to devote a significant amount of time to end-to-end testing of its mission-critical business processes and supporting systems, such as those associated with student financial aid delivery. According to Education officials, the department plans to conduct such testing between January and March 1999, after all individual mission-critical systems have been certified as Year 2000 compliant. Tentatively from April to September 1999, external data exchange partners will have time periods available for testing their interfaces. However, no detailed plans currently exist for this testing. Education officials stated that they are working on these plans and intend to have them completed shortly, pending discussion with the student financial aid community. As 2000 approaches, organizations must be diligent in implementing measures to ensure that exchanging data across systems compromises neither the systems nor the data. Conflicting data exchange formats or data processed on noncompliant systems could introduce and propagate errors from one system to another. To mitigate this risk, organizations must inventory and assess their data exchanges, reach agreements with data exchange partners on how data will be exchanged, test and implement data exchange formats, develop and test bridges and filters to handle nonconforming data, and develop contingency plans in the event of failure. Education’s student financial aid data exchange environment is massive and complex. It includes about 7,500 schools, 6,500 lenders, and 36 guaranty agencies, as well as other federal agencies. Figure 2 provides an overview of this environment. To address its data exchanges with schools, lenders, and guaranty agencies, Education has dictated how the data that these entities provide to the department should be formatted. Education handles this in one of two ways: it either provides software to the entity, such as EDExpress (which specifies the format—including dates—for data exchanges), or provides the technical specifications for the entity to use in developing the necessary interface. Education has followed up on this approach with its data exchange partners by (1) developing memorandums of understanding with each guaranty agency and federal agency and (2) conducting outreach on Year 2000 awareness with schools. Regarding its outreach to schools, Education has shared information through memoranda (i.e., “Dear Colleague” letters), presentations at conferences and training sessions, and over the Internet. The “Dear Colleague” letters provide an overview of the Year 2000 issue and summarize the department’s approach for ensuring compliance of student financial aid systems. To further ensure that Education’s data exchange partners have indeed made their interfaces compliant, the department will need to engage in end-to-end testing of its mission-critical business processes, including data exchanges. As noted earlier, Education has not completed these end-to-end test plans. Further complicating data exchange compliance is that Education will need to ensure that the data it is receiving from its partners are not just formatted correctly but are accurate. As we have previously reported, Education has experienced serious data integrity problems in the past. 
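To make the notion of a "bridge" or "filter" concrete, the sketch below shows one way such a component might normalize incoming records whose date fields still carry two-digit years and set aside records it cannot interpret. The record layout and the field name are hypothetical, not drawn from any Education interface specification.

    # A minimal sketch of a data-exchange bridge; the record layout and the
    # "disbursement_date" field are hypothetical.

    def bridge_record(record: dict, pivot: int = 50) -> dict:
        fixed = dict(record)
        raw = record["disbursement_date"]   # e.g., "07/01/99" or "07/01/1999"
        month, day, year = raw.split("/")
        if len(year) == 2:                  # nonconforming: expand via window
            yy = int(year)
            year = str(2000 + yy if yy < pivot else 1900 + yy)
        elif len(year) != 4:
            raise ValueError(f"unparseable year in {raw!r}")
        fixed["disbursement_date"] = f"{month}/{day}/{year}"
        return fixed

    accepted, rejected = [], []
    for rec in [{"disbursement_date": "07/01/99"},
                {"disbursement_date": "07/01/2000"},
                {"disbursement_date": "07/01/0"}]:
        try:
            accepted.append(bridge_record(rec))
        except ValueError:
            rejected.append(rec)

    print(accepted)  # two-digit year expanded to 1999; four-digit passed through
    print(rejected)  # malformed record set aside for manual review

A production bridge would also validate month and day ranges and log rejected records for reconciliation with the exchange partner, which bears on the data integrity concerns noted above.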
To assess how educational institutions are progressing with their Year 2000 programs, the department recently conducted a survey of the Year 2000 readiness of postsecondary schools participating in the Direct Loan Program. The preliminary results are not encouraging: up to one-third of the schools did not even have a compliance plan in place. Given the challenges Education faces in making sure that all of its mission-critical systems are adequately tested and in addressing the complexities of the massive number of data exchanges, it will be difficult for the department to enter the new century without some problems. Therefore, it is critical that Education initiate the development of realistic contingency plans to ensure continuity of core business processes in the event of Year 2000-induced failures. Business continuity and contingency plans should be formulated to respond to two types of failure: those that can be predicted (e.g., systems renovations that are already far behind schedule) and those that are unforeseen (e.g., systems that fail despite having been certified Year 2000 compliant, or those that cannot be corrected by January 1, 2000, despite appearing to be on schedule today). Moreover, contingency plans that focus only on agency systems are inadequate. Federal agencies depend on data provided by their business partners as well as on services provided by the public infrastructure. Thus, one weak link anywhere in the chain of critical dependencies can cause major disruption. Given these interdependencies, it is imperative that contingency plans be developed for all critical core business processes and supporting systems, regardless of whether these systems are owned by the agency. Our guide on ensuring business continuity and contingency planning, issued last month, provides further detail on this. This guide describes four phases supported by agency Year 2000 program management: initiation, business impact analysis, contingency planning, and testing. Each phase represents a major Year 2000 business continuity planning project activity or segment. Education initiated contingency planning activities in February 1998. According to department officials, Education is committed to developing business continuity and contingency plans for each mission-critical business process and supporting systems. As part of this commitment, Education recently appointed a senior executive to manage the development and testing of continuity and contingency plans for student financial aid operations. The department expects to complete these plans by March 31, 1999. In summary, Mr. Chairman, the Department of Education’s endeavor to make its programs and supporting systems Year 2000 compliant is of urgent priority. Should critical student financial aid systems not be Year 2000 compliant in time, Education’s ability to control the award process could be compromised, with cascading effects reaching schools, students, guaranty agencies, and lenders. While the department has made progress in preparing its systems for the year 2000, initial delays have left it with significant risks—risks that must be effectively managed. This concludes my statement. I would be pleased to respond to any questions that you or other Members of the Subcommittee may have at this time. Year 2000 Computing Crisis: Severity of Problem Calls for Strong Leadership and Effective Partnerships (GAO/T-AIMD-98-278, September 3, 1998). 
Year 2000 Computing Crisis: Strong Leadership and Effective Partnerships Needed to Reduce Likelihood of Adverse Impact (GAO/T-AIMD-98-277, September 2, 1998). Year 2000 Computing Crisis: Strong Leadership and Effective Partnerships Needed to Mitigate Risks (GAO/T-AIMD-98-276, September 1, 1998). Year 2000 Computing Crisis: State Department Needs To Make Fundamental Improvements To Its Year 2000 Program (GAO/AIMD-98-162, August 28, 1998). Year 2000 Computing: EFT 99 Is Not Expected to Affect Year 2000 Remediation Efforts (GAO/AIMD-98-272R, August 28, 1998). Year 2000 Computing Crisis: Avoiding Major Disruptions Will Require Strong Leadership and Effective Partnerships (GAO/T-AIMD-98-267, August 19, 1998). Year 2000 Computing Crisis: Strong Leadership and Partnerships Needed to Address Risk of Major Disruptions (GAO/T-AIMD-98-266, August 17, 1998). Year 2000 Computing Crisis: Strong Leadership and Partnerships Needed to Mitigate Risk of Major Disruptions (GAO/T-AIMD-98-262, August 13, 1998). FAA Systems: Serious Challenges Remain in Resolving Year 2000 and Computer Security Problems (GAO/T-AIMD-98-251, August 6, 1998). Year 2000 Computing Crisis: Business Continuity and Contingency Planning (GAO/AIMD-10.1.19, August 1998). Internal Revenue Service: Impact of the IRS Restructuring and Reform Act on Year 2000 Efforts (GAO/GGD-98-158R, August 4, 1998). Social Security Administration: Subcommittee Questions Concerning Information Technology Challenges Facing the Commissioner (GAO/AIMD-98-235R, July 10, 1998). Year 2000 Computing Crisis: Actions Needed on Electronic Data Exchanges (GAO/AIMD-98-124, July 1, 1998). Defense Computers: Year 2000 Computer Problems Put Navy Operations at Risk (GAO/AIMD-98-150, June 30, 1998). Year 2000 Computing Crisis: A Testing Guide (GAO/AIMD-10.1.21, Exposure Draft, June 1998). Year 2000 Computing Crisis: Testing and Other Challenges Confronting Federal Agencies (GAO/T-AIMD-98-218, June 22, 1998). Year 2000 Computing Crisis: Telecommunications Readiness Critical, Yet Overall Status Largely Unknown (GAO/T-AIMD-98-212, June 16, 1998). GAO Views on Year 2000 Testing Metrics (GAO/AIMD-98-217R, June 16, 1998). IRS’ Year 2000 Efforts: Business Continuity Planning Needed for Potential Year 2000 System Failures (GAO/GGD-98-138, June 15, 1998). Year 2000 Computing Crisis: Actions Must Be Taken Now to Address Slow Pace of Federal Progress (GAO/T-AIMD-98-205, June 10, 1998). Defense Computers: Army Needs to Greatly Strengthen Its Year 2000 Program (GAO/AIMD-98-53, May 29, 1998). Year 2000 Computing Crisis: USDA Faces Tremendous Challenges in Ensuring That Vital Public Services Are Not Disrupted (GAO/T-AIMD-98-167, May 14, 1998). Securities Pricing: Actions Needed for Conversion to Decimals (GAO/T-GGD-98-121, May 8, 1998). Year 2000 Computing Crisis: Continuing Risks of Disruption to Social Security, Medicare, and Treasury Programs (GAO/T-AIMD-98-161, May 7, 1998). IRS’ Year 2000 Efforts: Status and Risks (GAO/T-GGD-98-123, May 7, 1998). Air Traffic Control: FAA Plans to Replace Its Host Computer System Because Future Availability Cannot Be Assured (GAO/AIMD-98-138R, May 1, 1998). Year 2000 Computing Crisis: Potential for Widespread Disruption Calls for Strong Leadership and Partnerships (GAO/AIMD-98-85, April 30, 1998). Defense Computers: Year 2000 Computer Problems Threaten DOD Operations (GAO/AIMD-98-72, April 30, 1998). Department of the Interior: Year 2000 Computing Crisis Presents Risk of Disruption to Key Operations (GAO/T-AIMD-98-149, April 22, 1998). 
Tax Administration: IRS’ Fiscal Year 1999 Budget Request and Fiscal Year 1998 Filing Season (GAO/T-GGD/AIMD-98-114, March 31, 1998). Year 2000 Computing Crisis: Strong Leadership Needed to Avoid Disruption of Essential Services (GAO/T-AIMD-98-117, March 24, 1998). Year 2000 Computing Crisis: Federal Regulatory Efforts to Ensure Financial Institution Systems Are Year 2000 Compliant (GAO/T-AIMD-98-116, March 24, 1998). Year 2000 Computing Crisis: Office of Thrift Supervision’s Efforts to Ensure Thrift Systems Are Year 2000 Compliant (GAO/T-AIMD-98-102, March 18, 1998). Year 2000 Computing Crisis: Strong Leadership and Effective Public/Private Cooperation Needed to Avoid Major Disruptions (GAO/T-AIMD-98-101, March 18, 1998). Post-Hearing Questions on the Federal Deposit Insurance Corporation’s Year 2000 (Y2K) Preparedness (AIMD-98-108R, March 18, 1998). SEC Year 2000 Report: Future Reports Could Provide More Detailed Information (GAO/GGD/AIMD-98-51, March 6, 1998). Year 2000 Readiness: NRC’s Proposed Approach Regarding Nuclear Powerplants (GAO/AIMD-98-90R, March 6, 1998). Year 2000 Computing Crisis: Federal Deposit Insurance Corporation’s Efforts to Ensure Bank Systems Are Year 2000 Compliant (GAO/T-AIMD-98-73, February 10, 1998). Year 2000 Computing Crisis: FAA Must Act Quickly to Prevent Systems Failures (GAO/T-AIMD-98-63, February 4, 1998). FAA Computer Systems: Limited Progress on Year 2000 Issue Increases Risk Dramatically (GAO/AIMD-98-45, January 30, 1998). Defense Computers: Air Force Needs to Strengthen Year 2000 Oversight (GAO/AIMD-98-35, January 16, 1998). Year 2000 Computing Crisis: Actions Needed to Address Credit Union Systems’ Year 2000 Problem (GAO/AIMD-98-48, January 7, 1998). Veterans Health Administration Facility Systems: Some Progress Made In Ensuring Year 2000 Compliance, But Challenges Remain (GAO/AIMD-98-31R, November 7, 1997). Year 2000 Computing Crisis: National Credit Union Administration’s Efforts to Ensure Credit Union Systems Are Year 2000 Compliant (GAO/T-AIMD-98-20, October 22, 1997). Social Security Administration: Significant Progress Made in Year 2000 Effort, But Key Risks Remain (GAO/AIMD-98-6, October 22, 1997). Defense Computers: Technical Support Is Key to Naval Supply Year 2000 Success (GAO/AIMD-98-7R, October 21, 1997). Defense Computers: LSSC Needs to Confront Significant Year 2000 Issues (GAO/AIMD-97-149, September 26, 1997). Veterans Affairs Computer Systems: Action Underway Yet Much Work Remains To Resolve Year 2000 Crisis (GAO/T-AIMD-97-174, September 25, 1997). Year 2000 Computing Crisis: Success Depends Upon Strong Management and Structured Approach (GAO/T-AIMD-97-173, September 25, 1997). Year 2000 Computing Crisis: An Assessment Guide (GAO/AIMD-10.1.14, September 1997). Defense Computers: SSG Needs to Sustain Year 2000 Progress (GAO/AIMD-97-120R, August 19, 1997). Defense Computers: Improvements to DOD Systems Inventory Needed for Year 2000 Effort (GAO/AIMD-97-112, August 13, 1997). Defense Computers: Issues Confronting DLA in Addressing Year 2000 Problems (GAO/AIMD-97-106, August 12, 1997). Defense Computers: DFAS Faces Challenges in Solving the Year 2000 Problem (GAO/AIMD-97-117, August 11, 1997). Year 2000 Computing Crisis: Time Is Running Out for Federal Agencies to Prepare for the New Millennium (GAO/T-AIMD-97-129, July 10, 1997). Veterans Benefits Computer Systems: Uninterrupted Delivery of Benefits Depends on Timely Correction of Year-2000 Problems (GAO/T-AIMD-97-114, June 26, 1997). 
Veterans Benefits Computer Systems: Risks of VBA’s Year-2000 Efforts (GAO/AIMD-97-79, May 30, 1997). Medicare Transaction System: Success Depends Upon Correcting Critical Managerial and Technical Weaknesses (GAO/AIMD-97-78, May 16, 1997). Medicare Transaction System: Serious Managerial and Technical Weaknesses Threaten Modernization (GAO/T-AIMD-97-91, May 16, 1997). Year 2000 Computing Crisis: Risk of Serious Disruption to Essential Government Functions Calls for Agency Action Now (GAO/T-AIMD-97-52, February 27, 1997). Year 2000 Computing Crisis: Strong Leadership Today Needed To Prevent Future Disruption of Government Services (GAO/T-AIMD-97-51, February 24, 1997). High-Risk Series: Information Management and Technology (GAO/HR-97-9, February 1997).
Pursuant to a congressional request, GAO discussed year 2000 (Y2K) computing crisis risks to the Department of Education, focusing on: (1) student financial aid systems; (2) actions the department has taken in recent months to address these risks; and (3) key issues the department must deal with if its systems are to be ready for the century change: testing of systems, exchanging data with internal and external partners, and developing business continuity and contingency plans. GAO noted that: (1) Education faces major risks that Y2K failures could severely disrupt the student financial aid delivery process, including delaying disbursements and application processing; (2) further, because of systems interdependencies, repercussions from Y2K-related problems could be felt throughout the student financial aid community--a network including students, institutions of higher education, financial organizations, and other government agencies; (3) the department was very slow in implementing a comprehensive Y2K program to address these risks--basic awareness and assessment tasks were not completed until recently; (4) Education is now accelerating its program, but with the slow start, it remains in a position of playing catch up; (5) accordingly, the department has major challenges ahead but limited time remaining to adequately deal with them; and (6) therefore, it must also focus on developing appropriate contingency plans to ensure business continuity in the event of key systems failures.
The United States’ nuclear weapons stockpile comprises nine nuclear weapons types, all of which were designed during the Cold War. Two of these systems—the B61 and the W76—are currently being refurbished to extend their useful lives for up to 30 years through NNSA’s Life Extension Program. In May 2008, we reported that, over the past few years, NNSA and DOD have considered a variety of scenarios for the future composition of the nuclear stockpile that would be based on different stockpile sizes and the degree to which the stockpile would incorporate new RRW designs. For example, NNSA and DOD have considered how large the stockpile needs to be in order to maintain a sufficiently robust and responsive manufacturing infrastructure to respond to future global geopolitical events. In addition, NNSA and DOD have considered the number of warheads that will need to be either refurbished or replaced in the coming decades. However, NNSA and DOD have not issued requirements defining the size and composition of the future stockpile. We discussed one effect of this lack of clear stockpile requirements in our May 2008 report on plutonium pit manufacturing. Specifically, we found that in October 2006, NNSA proposed building a new, consolidated plutonium center at an existing DOE site that would be able to manufacture pits at a production capacity of 125 pits per year. However, by December 2007, NNSA stated that instead of building a new, consolidated plutonium center, its preferred action was to upgrade the existing pit production building at LANL to produce up to 80 pits per year. Although DOD officials agreed to support NNSA’s plan, these officials also stated that future changes to stockpile size, military requirements, and risk factors may ultimately lead to a revised, larger rate of production. This uncertainty has delayed NNSA in issuing final plans for its future pit manufacturing capability. Once a decision is made about the size and composition of the stockpile, NNSA should develop accurate estimates of the costs of transforming the nuclear weapons complex. In September 2007, a contractor provided NNSA with a range of cost estimates for over 10 different Complex Transformation alternatives. For example, the contractor estimated that the cost of NNSA’s preferred action would be approximately $79 billion over the period 2007 through 2060. This option was also determined to be the least expensive. In contrast, the contractor’s estimate for a consolidated nuclear production center—another alternative that would consolidate plutonium, uranium, and weapons assembly and disassembly at one site—totaled $80 billion over the same period. Although these estimates differ by only $1 billion over 53 years, they are based on significantly different assumptions. Specifically, NNSA’s preferred action assumes a manufacturing capacity of up to 80 pits per year, and the alternative for a consolidated nuclear production center assumes a capacity of 125 pits per year. In addition, the contractor cautioned that because its cost analysis was not based on any specific conceptual designs for facilities such as the consolidated nuclear production center, it had not developed cost estimates for specific projects. As a result, the contractor stated that its estimates should not be used to predict a budget-level project cost. Historically, NNSA has had difficulty developing realistic, defensible cost estimates, especially for large, complex projects. 
For example, in March 2007, we found that 8 of the 12 major construction projects that DOE and NNSA were managing had exceeded their initial cost estimates. One project, the Highly Enriched Uranium Materials Facility nearing completion at the Y-12 Plant, has exceeded its original cost estimate by over 100 percent, or almost $300 million. We reported that the reasons for this cost increase included poor management and contractor oversight. In addition, NNSA’s cost estimate for constructing the Chemistry and Metallurgy Research Replacement Facility has more than doubled—from $838 million to over $2 billion—since our April 2006 testimony. This revised cost estimate is so uncertain that NNSA did not include any annual cost estimates beyond fiscal year 2009 in its fiscal year 2009 budget request to the Congress. Finally, the preliminary results of our ongoing review of NNSA’s Life Extension Program for this Subcommittee show that NNSA’s cost estimate for refurbishing each B61 nuclear bomb has doubled since 2002. NNSA does not expect to issue a record of decision on Complex Transformation until later this year. As a result, we do not know the ultimate decision that NNSA will make—whether to modernize existing sites in the weapons complex or consolidate operations at new facilities. We expect that once NNSA makes this decision, NNSA will put forward a transformation plan with specific milestones to implement its decision. Without such a plan, NNSA will have no way to evaluate its progress, and the Congress will have no way to hold NNSA accountable. However, over the past decade, we have repeatedly documented problems with NNSA’s process for planning and managing its activities. For example, in a December 2000 report, we found that NNSA needed to improve its planning process so that there were linkages between individual plans across the Stockpile Stewardship Program and that the milestones contained in NNSA’s plans were reflected in contractors’ performance criteria and evaluations. Similarly, in February 2006, we reported problems with how NNSA was managing the implementation of its new approach for assessing and certifying the safety and reliability of the nuclear stockpile. Specifically, we found that NNSA planning documents did not contain clear, consistent milestones or a comprehensive list of the scientific research being conducted across the weapons complex in support of NNSA’s Primary and Secondary Assessment Technologies programs. These programs are responsible for setting the requirements for the computer codes and experimental data needed to assess and certify the safety and reliability of nuclear warheads. We also found that NNSA had not established adequate performance measures to determine the progress of the weapons laboratories in developing and implementing this new methodology. As we noted in July 2003, one of the key practices for successfully transforming an organization is to ensure that top leadership sets the direction, pace, and tone for the transformation. One of the key problems that NNSA has experienced has been its inability to build an organization with clear lines of authority and responsibility. However, we also reported in January 2004 that NNSA, as a result of reorganizations, had shown that it could move from what was often called a “dysfunctional bureaucracy” to an organization with clearer lines of authority and responsibility.
In this regard, we stated in our April 2006 testimony that NNSA’s proposed Office of Transformation needed to be vested with the necessary authority and resources to set priorities, make timely decisions, and move quickly to implement those decisions. It was our view that the Office of Transformation should (1) report directly to the Administrator of NNSA; (2) be given sufficient authority to conduct its studies and implement its recommendations; and (3) be held accountable for creating real change within the weapons complex. In 2006, NNSA created an Office of Transformation to oversee its Complex Transformation efforts. This office has been involved in overseeing early activities associated with Complex Transformation, such as the issuance of the December 2007 draft report on the potential environmental impacts of alternative Complex Transformation actions. However, the Office of Transformation does not report directly to the Administrator of NNSA. Rather, the Office reports to the head of NNSA’s Office of Defense Programs. In this respect, we are concerned that the Office of Transformation may not have sufficient authority to set transformation priorities for all of NNSA, specifically as they affect nuclear nonproliferation programs. Because NNSA’s ultimate decision on the path forward for Complex Transformation has not yet been made, it remains to be seen whether the office has sufficient authority to enforce transformation decisions or whether it will be held accountable within NNSA for these decisions. Madam Chairman, this concludes my prepared statement. I would be happy to respond to any questions that you or Members of the Subcommittee may have at this time. For further information on this testimony, please contact me at (202) 512-3841 or aloisee@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Ryan T. Coles, Assistant Director; Allison Bawden; Jason Holliday; Leland Cogliani; Marc Castellano; and Carol Herrnstadt Shulman made key contributions to this testimony. This is a work of the U.S. government and is not subject to copyright protection in the United States. This published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Over the past several years, a serious effort has begun to comprehensively reevaluate how the United States maintains its nuclear deterrent and what the nation's approach should be for transforming its aging nuclear weapons complex. The National Nuclear Security Administration (NNSA), a separately organized agency within the Department of Energy (DOE), is responsible for overseeing this weapons complex, which comprises three nuclear weapons design laboratories, four production plants, and the Nevada Test Site. In December 2007, NNSA issued a draft report on potential environmental impacts of alternative actions to transform the nuclear weapons complex, which NNSA refers to as Complex Transformation. NNSA's preferred action is to establish a number of "distributed centers of excellence" at sites within the existing nuclear weapons complex, including the Los Alamos National Laboratory for plutonium capabilities, the Y-12 Plant for uranium capabilities, and the Pantex Plant for weapons assembly, disassembly, and high explosives manufacturing. NNSA would continue to operate these facilities to maintain and refurbish the existing nuclear weapons stockpile as it makes the transition to a smaller, more responsive infrastructure. GAO was asked to discuss NNSA's Complex Transformation proposal. This testimony is based on previous GAO work. Transforming the nuclear weapons complex will be a daunting task. In April 2006 testimony before the Subcommittee on Energy and Water Development, House Committee on Appropriations, GAO identified four actions that, in its view, were critical to successfully achieving the transformation of the complex. On the basis of completed and ongoing GAO work on NNSA's management of the nuclear weapons complex, GAO remains concerned about NNSA's and the Department of Defense's (DOD) ability to carefully and fully implement these four actions. For this reason, GAO believes that the Congress must remain vigilant in its oversight of Complex Transformation. Specifically, NNSA and DOD have not established clear, long-term requirements for the nuclear weapons stockpile. While NNSA and DOD have considered a variety of scenarios for the future composition of the nuclear weapons stockpile, no requirements have been issued. It is GAO's view that NNSA will not be able to develop accurate cost estimates or plans for Complex Transformation until stockpile requirements are known. Further, recent GAO work found that the absence of stockpile requirements is affecting NNSA's plans for manufacturing a critical nuclear weapon component. NNSA has had difficulty developing realistic cost estimates for large, complex projects. In September 2007, a contractor provided NNSA with a range of cost estimates for over 10 different Complex Transformation alternatives. However, the contractor stated that (1) its analysis was based on rough order-of-magnitude estimates and (2) NNSA should not use its cost estimates to predict budget-level project costs. In addition, in March 2007 GAO reported that 8 of 12 major construction projects being managed by DOE and NNSA had exceeded their initial cost estimates. NNSA will need to develop a transformation plan with clear, realistic milestones. GAO expects that once NNSA decides the path forward for Complex Transformation later this year, NNSA will put forward such a plan. However, GAO has repeatedly documented problems with NNSA's ability to implement its plans. 
For example, in February 2006 GAO reported problems with the planning documents that NNSA was using to manage the implementation of its new approach for assessing and certifying the safety and reliability of the nuclear stockpile. Successful transformation requires strong leadership. In 2006, NNSA created an Office of Transformation to oversee its Complex Transformation activities. However, GAO is concerned that the Office of Transformation may not have sufficient authority to set transformation priorities for all of NNSA, specifically as they affect nuclear nonproliferation programs.
When borrowers default on single-family mortgages insured by HUD, the Department encourages lenders to work with the borrowers to bring their mortgage payments up to date. If that is not possible, the homes may be sold to third parties, voluntarily conveyed to the lenders, or surrendered to the lenders through foreclosure. When lenders obtain properties, they generally convey them to HUD in exchange for payment of an insurance claim. HUD also takes possession of abandoned properties secured by HUD-held mortgages and protects and maintains these properties, referred to as “custodial” properties, pending acquisition of title. HUD has the largest real estate portfolio and operation in the nation, selling approximately 55,000 properties each year. The Department estimates that at any given time, its inventory averages about 30,000 properties. The properties remain in HUD’s inventory an average of approximately 6 months. As of September 30, 1997, HUD owned 29,898 single-family properties. The inventory included 1,784 custodial properties as of December 12, 1997. Custodial properties remain in inventory an average of 2-1/2 years, but some have been in HUD’s inventory for more than 8 years. HUD manages and sells properties in inventory under its Real Estate Owned program. Each of HUD’s 73 field offices currently managing single-family properties may use one REAM contractor to manage its entire inventory or allocate the properties in its inventory among multiple REAM contractors. Of the three field offices we visited, the Massachusetts and Texas state offices each have one REAM contractor, whereas the Illinois State Office divides its inventory into geographic regions and issues separate contracts for each region; at the time of our review, Illinois had nine REAM contractors with 23 contracts. REAM contracts are awarded by HUD contracting staff in three administrative service centers, using a standard format provided by HUD headquarters. Provisions or clauses may be added to that format to meet the needs of a particular geographic region, but requirements may not be deleted. The standard base contract term is 1 year, plus two 1-year options that HUD can use to extend the contract. Contractors are paid a flat fee for maintaining and managing each property in HUD’s inventory or custody. HUD pays this fee in two installments: under the standard contract, 30 percent is paid when the property is listed for sale, and the remaining 70 percent is paid when the property is sold. However, some field offices have deviated from these standard terms. For example, the Fort Worth office’s contract term is 1 year with three 1-year options, and its contract pays 60 percent of the maintenance and management fee when the property is listed for sale and the remaining 40 percent when the property is sold. Paying the fee in installments is designed to encourage contractors to maintain the properties well and expedite property sales. HUD’s field office staff are responsible for overseeing REAM contractors. Day-to-day contract administration is performed by the staff member in the Single-Family Housing Division who is designated as the government’s technical representative on the contract. At the beginning of fiscal year 1997, the total value of HUD’s active REAM contracts was approximately $165 million.
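To make the two-installment fee structure concrete, the sketch below (in illustrative Python; the $1,000 flat fee is a hypothetical figure, not an amount drawn from the contracts) computes the payments under the standard 30/70 split and the Fort Worth office’s 60/40 variant.

    def installments(flat_fee, listing_share):
        """Split a flat per-property fee into the payment made when the
        property is listed for sale and the payment made when it is sold."""
        at_listing = flat_fee * listing_share
        at_sale = flat_fee - at_listing
        return at_listing, at_sale

    # Standard contract terms: 30 percent at listing, 70 percent at sale.
    print(installments(1000.00, 0.30))  # (300.0, 700.0)
    # Fort Worth variant: 60 percent at listing, 40 percent at sale.
    print(installments(1000.00, 0.60))  # (600.0, 400.0)

Because the larger installment is withheld until sale under the standard terms, a contractor collects most of its fee only when the property actually sells, which is the incentive the installment design is meant to create.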
In addition, in September 1996, HUD entered into pilot contracts with one corporation to test the approach of contracting out all management and marketing functions associated with HUD’s inventory of acquired single-family properties at three field offices—the Maryland and Louisiana state offices and the Sacramento Area Office—that, according to HUD’s Single-Family Property Disposition Director, were already understaffed in relation to the sizes of their inventories. The property disposition pilot contracts were worth an additional $22.4 million. Since 1991, HUD’s Inspector General (IG) has repeatedly identified problems with the administration of property management contracts in some field offices. Among other problems, the IG identified instances in which (1) HUD was being billed and was paying for services that contractors and subcontractors never provided; (2) field staff were not making routine inspections of acquired properties; (3) field staff were deviating from procurement policies and procedures; and (4) HUD’s files were so poorly maintained that it was impossible to document or evaluate contractors’ performance. We found that HUD does not have a system in place for monitoring its field offices’ administration of REAM contracts. In addition, HUD’s field office staff are not consistently providing adequate oversight of the REAM contractors. Key oversight responsibilities that were not always performed by staff at the three HUD field offices we visited included (1) conducting periodic risk-based monitoring and carrying out performance evaluations before extending REAM contracts; (2) maintaining fully documented files on REAM contractors’ performance; (3) inspecting a percentage of properties in the contractors’ inventory; (4) ensuring that the contractors submit appropriate property inspection reports to HUD; and (5) ensuring the preservation and protection of custodial properties. HUD’s property disposition handbook gives headquarters staff the ultimate responsibility for overseeing the administration of REAM contracts. Specifically, the handbook requires regional offices to ensure that field offices are consistently and uniformly monitoring REAM contractors. To ensure that this task is being performed, the guidance requires headquarters staff to review regional offices’ oversight actions through regional reviews. We found, however, that headquarters staff have not been conducting these reviews since HUD reorganized its field office structure in 1995 and eliminated the regional offices. According to HUD Single-Family Property Disposition officials, the regional offices’ oversight function was never absorbed into headquarters after the regional offices were eliminated. Also, after the reorganization, HUD’s guidance was not updated to ensure that REAM contract administration was monitored by headquarters. HUD requires field offices to perform an assessment of a contractor’s risk of unsatisfactory performance, on the basis of such factors as the timeliness with which the contractor carries out duties under the contract, the frequency of complaints about the condition of properties managed by the contractor, and the contractor’s fiscal and subcontracting procedures. At the conclusion of this review, staff are to assign the contractor a risk designation of low, moderate, or high, which determines the frequency of HUD’s future monitoring activities, ranging from monthly for high-risk contractors to semiannually for low-risk contractors. 
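The risk-based scheduling rule just described can be expressed in a few lines of illustrative Python. The monthly and semiannual intervals come from HUD’s guidance as described above; the 90-day interval shown for moderate-risk contractors is our assumption for illustration only, since the guidance excerpted here does not state it.

    from datetime import date, timedelta

    # Intervals between on-site monitoring reviews: monthly for high-risk
    # and semiannual for low-risk contractors, per HUD's guidance; the
    # 90-day interval for moderate risk is an assumed value for illustration.
    REVIEW_INTERVAL_DAYS = {"high": 30, "moderate": 90, "low": 180}

    def next_review_due(last_review, risk):
        """Return the date by which the next on-site monitoring review
        of a contractor should occur, given its assigned risk designation."""
        return last_review + timedelta(days=REVIEW_INTERVAL_DAYS[risk])

    print(next_review_due(date(1997, 1, 15), "low"))   # 1997-07-14
    print(next_review_due(date(1997, 1, 15), "high"))  # 1997-02-14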
We found that compliance with this risk assessment requirement varied among the three field offices we reviewed. Specifically, the Fort Worth office has conducted risk assessment reviews and on-site monitoring as required. On the other hand, the Boston office has not been conducting the risk assessment reviews. As a result, Boston officials (1) lack information about whether the contractor’s accounting, recordkeeping, and subcontracting practices comply with the terms of its contract and (2) have no basis for determining the appropriate frequency of future on-site monitoring. According to a 1996 report by HUD’s IG, the Boston office also did not conduct risk assessments for the prior REAM. Also, although the Chicago office has carried out the risk assessment reviews, it has not completed them as often as required, nor has it monitored the REAM contractors as frequently as their risk designations indicate it should. Specifically, each of the nine REAM contractors under contract with the Chicago office at the time of our review should have had an on-site monitoring review at least every 6 months because the minimum frequency of reviews required by HUD is semiannual (REAM contractors designated moderate- or high-risk are to be reviewed more frequently). However, at the time of our review, five of the nine REAM contractors had not had an on-site monitoring review in over 8 months, and one of these had not had an on-site review in 16 months. The other four REAM contractors had on-site reviews within a week before we began our file review. However, for two of those contractors, there was no evidence that HUD had ever completed an on-site review prior to that time; for the other two contractors, the next most recent reviews had been completed more than 8 months earlier. HUD also requires field office staff to prepare an evaluation of a contractor’s performance every year in the month prior to the contract’s anniversary date, using a standard monitoring guide issued by headquarters. This annual evaluation is ultimately used to make decisions on contract extensions and, if necessary, to act on inadequate performance. However, we found that these evaluations are not always conducted or are not always completed in time to provide useful information for contract renewal decisions. For example, Boston’s field office has evaluated the REAM contractor’s performance only once since the contract was awarded on June 30, 1995, and that evaluation was conducted several weeks after the contract had already been extended beyond the base year. Officials in HUD’s Boston field office told us that performance evaluations were not performed because they did not have the staff resources or travel funds to visit the contractor’s office, located about 37 miles from HUD’s field office. In the one evaluation conducted, HUD cited the contractor for sometimes failing to meet contractual time requirements for removing debris from properties. Furthermore, contrary to HUD’s guidance, Boston field office staff did not send the contractor a copy of the assessment report. According to a Boston HUD official, the staff simply neglected to send the results to the REAM contractor. As illustrated in figure 1, our August 1997 inspection of 24 Massachusetts properties revealed that the debris removal problem still existed.
We found that 17 of the 24 properties contained either interior or exterior debris that had not been removed within the contractual time frame; consequently, prospective buyers were sometimes viewing properties littered with household trash, personal belongings, and other debris. Our work in HUD’s Fort Worth and Chicago offices also found instances of contracts’ being renewed without a current evaluation of the REAM contractor’s performance to justify the extension. In the Fort Worth office, the evaluation of the contractor was conducted in August 1997, after the REAM contract had been extended in July 1997. As a result of the August 1997 evaluation, HUD staff raised the contractor’s risk-of-nonperformance designation from low to moderate. According to a HUD official in Fort Worth, the office did not complete the required annual evaluation of the contractor before extending the contract because HUD headquarters had limited the field office’s travel funds. Also, at the time of our review, the Chicago office had extended 21 of its 23 REAM contracts without having evaluated them with headquarters’ standard monitoring guide in the month prior to their extensions. For example, the Chicago office extended the contract for one of its contractors in March 1997 but did not conduct the annual evaluation until July 1997. At that time, Chicago staff rated the contractor as being high-risk. Had the evaluation been completed earlier, HUD would have been in a better position to determine whether the contract should have been extended. As in Boston and Fort Worth, HUD officials in the Chicago office attributed their untimely evaluations to resource constraints. In addition to the risk-based monitoring and the performance evaluations, HUD property disposition staff in the field offices are required to maintain a file containing any correspondence between HUD and a REAM contractor. This file should contain any instructions given to the contractor, including oral instructions; documentation of any contractor monitoring conducted; and any other documentation that reflects the contractor’s performance. However, we found that the Boston field office does not maintain a REAM file. Boston officials told us that they did not need a separate REAM file because they did not have any performance-related correspondence with the contractor. However, an internal memorandum maintained by HUD’s contracting office indicated significant problems with late or missing inspection reports shortly after the contract was awarded. In addition, immediately following our August 1997 site inspection of properties, HUD and the REAM contractor corresponded about the deficiencies in property conditions we identified. Without fully documented files on contractors’ performance, HUD may have difficulty supporting contract extension decisions and acting on inadequate performance. As a result of our review, the Boston field office established a REAM file that contains information related to the contractor’s performance. Although the Chicago field office maintains files for each of its REAM contracts, we found that the files did not always include the documentation necessary to show that staff had completed monitoring requirements as directed by HUD’s guidance. For example, on-site monitoring reviews for four of the nine REAM contractors, accounting for 12 of the 23 contracts, were not contained in the REAM files maintained by the Real Estate Owned Branch.
Rather, they were provided to us by Real Estate Owned staff subsequent to our file review. For two of the REAM contractors that had multiple contracts with HUD, the reviews provided by Real Estate Owned staff pertained to only one contract area for each of these two contractors; there was no evidence that Real Estate Owned staff had conducted on-site office reviews relating to the contractors’ activities in their other contract areas. Like the Chicago office, the Fort Worth field office maintains a REAM file, but the file did not contain all of the information required by HUD’s guidance. For example, the file did not contain time frames for correcting performance deficiencies. If time frames are not properly documented, it may be difficult for HUD to take appropriate actions to ensure that the deficiencies are corrected. HUD’s guidance does not require field office staff to physically inspect properties managed by REAM contractors. However, HUD recognizes that physical inspections are the best method for monitoring the contractors’ work, and HUD’s guidance suggests that field office staff conduct monthly physical inspections of at least 10 percent of the properties assigned to each contractor in each stage of processing. The guidance also allows the field offices to use contractors, such as fee inspectors and REAM contract monitors, for property inspection services. The guidance suggests increasing the number of physical inspections, as necessary, for high-risk REAM contractors or contractors whose performance is deemed to be unsatisfactory. In addition, the guidance requires that HUD staff prepare a monthly log to reflect the inspections made by field office staff, fee inspectors, or REAM contract monitors in the previous month. Without adequate on-site inspections, HUD cannot be assured that it is receiving the services for which it has paid. On the basis of our review of approximately 50 property files in each location, we found that Boston field office staff had not inspected any properties in their inventory. Boston field office staff told us they do not get out to inspect properties because they do not have the travel funds or staff resources to do so. The Boston field office’s lack of property inspections and inadequate staffing resources were also discussed in a June 1996 HUD IG report that prompted field staff to conduct such inspections in the fall of 1996. However, the Single-Family Housing Director in the Boston office noted that in the middle of fiscal year 1997, the Single-Family Housing Division was forced to combine its asset management and Real Estate Owned functions to accommodate a 50-percent reduction in staff, leaving a small, inexperienced staff to manage the inventory. According to the Boston office Single-Family Housing Director, the staff focused on meeting HUD’s management plan’s goal of selling properties, which did not leave enough resources to conduct property inspections. Therefore, in April 1997 the Single-Family Housing Division contacted HUD’s Administrative Service Center to solicit proposals for contracting out property inspection services. Subsequent to our visit, in December 1997, the Boston field office started using contractors to make property inspections. Chicago officials reported that they inspect 10 percent of their entire inventory every month. 
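As a sketch of the 10-percent guideline described above, the illustrative Python below draws a monthly inspection sample from a contractor’s assigned properties. Random selection and the case-number format are our assumptions for illustration; the guidance sets the sample size but not how the sample is drawn, and it applies the percentage to each stage of processing, which this simplified sketch ignores.

    import random

    def monthly_inspection_sample(inventory, fraction=0.10, seed=None):
        """Select at least one property, and about the given fraction of
        the contractor's assigned properties, for physical inspection
        this month."""
        rng = random.Random(seed)
        k = max(1, round(len(inventory) * fraction))
        return sorted(rng.sample(inventory, k))

    inventory = [f"case-{n:04d}" for n in range(1, 251)]  # hypothetical case numbers
    print(len(monthly_inspection_sample(inventory, seed=1997)))  # 25

A log of each month’s sample, of the kind HUD’s guidance requires, would then give the field office a record against which completed inspections could be verified.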
It was difficult for us to verify, however, that this many inspections were completed because Chicago staff neither file inspection reports in a separate property inspection file for each REAM contractor nor prepare the required report documenting inspections made by field office staff each month. While staff in the Fort Worth office did not prepare the required report of monthly inspections either, the Real Estate Owned Property Management Supervisor in that office maintains a file containing reports on each of the monthly inspections. According to a HUD official in Fort Worth, the staff have a performance standard requiring them or a fee inspector to inspect a minimum of 10 percent of the properties in their individual inventories. For the Chicago and Fort Worth offices, our review of 50 property files maintained by each location indicated that field office staff had, at some time, inspected approximately 22 percent and 10 percent, respectively, of those properties. The REAM contractor’s submission of initial and routine inspection reports is essential for HUD to determine its marketing strategy for the properties and to mitigate potential losses on the properties. For example, the initial inspection reports, along with appraisals, are the primary tools for determining what repairs must be made and whether a property meets certain standards that would allow it to be sold with Federal Housing Administration-insured financing. HUD’s guidance requires a REAM contractor to submit initial inspection reports to the field office within 5 working days of being notified that a property has been assigned, but there is no specific guidance on the submission of routine inspection reports. We found considerable differences among the three field offices we reviewed both in terms of the requirements they placed on REAM contractors for submitting inspection reports and the extent to which the reports were actually submitted to the field offices. For example, the Boston field office has not placed a contractual requirement on its REAM contractor for when initial inspection reports must be submitted to the field office. Of the 42 property files we reviewed in Boston, 18 (43 percent) did not have an initial inspection report. The Chicago field office requires REAM contractors to submit initial inspection reports within 10 calendar days of the assignment of properties to the contractors, but 20 percent of the files that we reviewed in Chicago did not have an initial inspection report. The Fort Worth field office requires REAM contractors to submit initial inspection reports within 10 working days of the notification that a property has been assigned, and all of the property files that we reviewed in Fort Worth contained an initial inspection report. The three field offices we visited also had varying requirements for the submission of routine inspection reports and often did not know whether the routine inspections had been conducted as required. The Massachusetts REAM contract requires that the contractor perform and document routine inspections every 30 days after the initial inspection. Although the contract does not specifically require the contractor to send the inspection reports to HUD, field office staff expect the contractor to submit the inspection reports. According to the contractor, it strives to submit routine inspection reports to HUD no later than 5 days into the month after they are performed.
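The differing deadlines for initial inspection reports described above can be compared with a short illustrative sketch. The office labels are just identifiers for this example, and the working-day calculation ignores federal holidays for simplicity.

    from datetime import date, timedelta

    def add_working_days(start, n):
        """Advance n working days (Monday through Friday; holidays are
        ignored for simplicity) from the start date."""
        d = start
        while n > 0:
            d += timedelta(days=1)
            if d.weekday() < 5:  # Monday=0 ... Friday=4
                n -= 1
        return d

    def initial_report_due(assigned, office):
        """Due date for a contractor's initial inspection report under
        the differing requirements described above."""
        if office == "chicago":     # 10 calendar days from assignment
            return assigned + timedelta(days=10)
        if office == "fort_worth":  # 10 working days from notification
            return add_working_days(assigned, 10)
        return add_working_days(assigned, 5)  # HUD guidance: 5 working days

    print(initial_report_due(date(1997, 8, 1), "chicago"))     # 1997-08-11
    print(initial_report_due(date(1997, 8, 1), "fort_worth"))  # 1997-08-15
    print(initial_report_due(date(1997, 8, 1), "guidance"))    # 1997-08-08

As the example dates show, the Fort Worth rule can allow roughly twice as long as HUD’s 5-working-day guidance before an initial report is due.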
Of the 31 files we reviewed in Boston for properties that had been in inventory long enough to have received a routine monthly inspection, however, 17 (55 percent) did not contain the required monthly inspection reports. Furthermore, inspection reports that were in the files were not always complete, including some that stated a property had problems or damage but did not describe them. The Chicago office requires the contractor to inspect properties every 10 calendar days but to submit only those routine inspection reports that contain negative findings. The Fort Worth office requires contractors to inspect properties on a biweekly schedule but does not require them to submit the inspection reports at all—the routine inspection reports are maintained by the contractors. Since neither the Chicago nor the Fort Worth field office requires the contractors to submit all routine inspection reports, HUD is unable to readily determine whether the contractors are conducting inspections as required. We found instances in all three locations of properties that were not maintained as required by the REAM contracts. During our inspection of approximately 20 properties in each location, we identified properties that (1) were not properly secured, (2) had physical conditions that did not match those that the REAM contractor had reported to HUD, (3) were not properly identified as HUD homes, or (4) had imminent hazards. For instance, of the 66 properties we visited in all three locations, we found that 26, or approximately 39 percent, were not sufficiently secured to prevent access. Failure to properly secure properties can lead to trespassing, vandalism, and properties’ deterioration. For example, in Massachusetts three of the eight unsecured properties had exposed walls in the bathrooms where copper piping had been ripped out, and seven had broken windows; three properties had graffiti, and one contained a syringe. Figure 2 illustrates vandalism at one unsecured property in Massachusetts. In addition, we found that one Massachusetts property had been poorly secured by nailing a large piece of plywood to the door, which prevented the door from closing, and then propping a thin piece of wood against the door from the inside, effectively leaving the house wide open. Moreover, two of the Massachusetts properties were inaccessible to us and the REAM contractor because in one, the locks had been changed and in the other, someone had nailed the door shut. Both conditions were noted in the contractor’s inspection reports prior to our visit. In addition, we found physical conditions that did not match those that the REAM contractor had reported to the three HUD field offices.
Some of the examples we found included (1) a property containing personal possessions, animal feces, and fur, while the contractor’s inspection report indicated that the house was free of debris; (2) a property that had roof leaks and extensive water damage, although the contractor had certified to HUD that the roof had been repaired; (3) a contractor’s inspection report claiming that a property had extensive defective paint surfaces that would cost $2,000 to treat, although the property had almost no painted surfaces because the exterior was aluminum siding and the interior was primarily paneling and tile; (4) a property that had suffered a major fire, although the inspection report did not indicate the problem; and (5) a property that had both extensive water damage in several rooms, some of which apparently resulted from a broken skylight secured by taping a plastic trash bag over it, and bathroom walls that were torn apart by vandals to obtain valuable copper piping, none of which was reported to HUD. Figure 3 illustrates the conditions at two of these properties. If contractors do not accurately report on the condition of properties, HUD may lack vital information on which to make disposition decisions and to address safety hazards. As a result, the government may sell properties for less than they are worth or incur unnecessary holding and maintenance costs because the properties are not marketable. Furthermore, we found that about 38 percent of the properties we visited in Massachusetts had either no HUD signs or signs that were difficult to read. The REAM contract requires contractors to post HUD signs on properties in a conspicuous location. Failure to post appropriate signs can make it difficult for neighbors to determine whom to contact when problems concerning a HUD-owned property arise. We also found that almost 71 percent of the properties we visited in Massachusetts and about 37 percent in Illinois contained imminent hazards. Failure to address imminent hazards endangers would-be buyers as well as neighbors and puts the government at risk of litigation. As illustrated in figure 4, hazards that we observed included broken or rotting stairs, a refrigerator on a back porch with its door intact (a potential entrapment hazard), a broken cellar bulkhead door, household waste, food, and soiled diapers, and numerous properties with paint and solvents in the basement that had not been removed by the contractor. In some cases, the problems that we saw at these properties had been reported to HUD by the contractor, but HUD did not act promptly to address them. The files and properties that we reviewed in Illinois and Texas did not reveal contractor-reported conditions to which HUD had not responded. However, in Massachusetts, we found four instances in which HUD had not acted on problems. In two cases, inspection reports submitted to HUD noted that the front steps to the properties were dangerous, a condition warranting immediate repair by the contractor. Nonetheless, when we inspected the properties about 3 months after the contractor initially reported the problems, the stairs still had not been repaired. We also found the initial inspection report for a Massachusetts property conveyed to HUD in May 1997 that indicated the property had suffered heavy water damage as the result of frozen pipes, yet the insurance form from the lender reported that the property was conveyed undamaged.
Although these documents were in the property file, according to Boston’s Single-Family Housing Director, the property would have been reconveyed to the lender if the HUD staff had been aware of the property’s condition before it went under a sales agreement. We recognize that some of the problems we found may have occurred after a contractor’s last routine inspection. However, we believe that it is unlikely that all of them could have occurred during the time between inspections. In fact, in one instance, a routine inspection report completed by the contractor for one of the Illinois properties indicated that all of the exterior doors were secure, the interior was free of debris, and no emergency repairs were needed. However, we had inspected the property on the previous day and found that it had hazardous stairs, debris in the basement, and an unsecured cellar door through which the entire house was accessible. Also, we found in our review of files and properties in the three locations that the properties were generally in better condition in the locations that monitored the contractors’ performance. For example, in HUD’s Fort Worth office, where field office staff generally perform oversight as suggested by HUD’s guidance, the properties that we visited had few deficiencies. In contrast, in Boston, where many of the key oversight functions were not conducted properly, the general condition of the properties was far worse than that of the properties managed by the Chicago and Fort Worth field offices. We recognize, however, that the condition of the properties is not totally attributable to HUD’s oversight of the contractors. Other factors can contribute to the condition of the properties, including the overall quality of the contractors’ work and the susceptibility of the neighborhood to crime and vandalism. Also among the properties that HUD’s field offices assign to REAM contractors are those in custodial status. A custodial property is a vacant or abandoned property secured by a HUD-held mortgage; HUD takes possession of such properties for the sole purpose of preserving and protecting them until HUD acquires title. REAM contractors receive a monthly fee for each custodial property assigned to them for preservation and protection. Because HUD does not yet own properties that are in custodial status, the contractors are not required to perform all of the services that they must perform on other properties in HUD’s inventory. As with other properties in HUD’s inventory, the responsibilities of a REAM contractor with respect to custodial properties are generally governed by the individual REAM contract. However, HUD’s guidance requires contractors to inspect custodial properties, post a HUD warning sign within 48 hours after being assigned the properties, and initiate action to remove imminent hazards from custodial properties no later than 24 hours after discovering them, although contractors may not remove any personal property. We conducted a limited review of custodial properties in Illinois because it has a high number of properties in custodial status; of the 61 field offices with custodial properties in inventory as of December 1997, the Chicago field office had 167 custodials, or 9 percent of the total inventory of custodials, more than any other field office.
Under the Illinois contract format, REAM contractors are required to perform services at custodial properties such as securing them, completing initial inspections within 10 days of being assigned custodial properties, and conducting routine inspections every 10 days thereafter. If damage is discovered during the initial inspection of a custodial property, the REAM contractor is required to provide photographs of the damage and submit them with the inspection report to HUD. For routine inspections, the contractor is to submit a copy of each negative inspection report to HUD within 24 hours after the inspection is performed, including a narrative description of any damage or condition that could create a health hazard. Our review of nine custodial properties in Illinois revealed that six had been in that status for at least 3 years. We visited these nine properties and found six of them to have seriously deteriorated and/or hazardous conditions. However, for five of these six cases, we found no evidence in the Real Estate Owned files that the contractor responsible for preserving and protecting the properties had notified HUD of their condition, as required by the contract, nor any evidence that a HUD Realty Specialist maintained a file of current information about the properties. For example, one of the properties we visited was completely burned out and too dangerous to enter, but the only inspection report in the Real Estate Owned file for this property was dated in 1994 and did not note any major structural or fire damage. The Real Estate Owned Branch’s file for another of the properties, which had significant water damage, contained no inspection reports and no documentation to show that HUD was aware of the property’s condition. Inside another property, we found the potential health hazard of dead and rotting pigeons along with bird droppings. The most recent inspection report in the Real Estate Owned Branch’s files on this property was dated in 1995. Another property had old meat and dead maggots in the refrigerator; the most recent inspection report in the Real Estate Owned file for this property was dated over 5 months earlier and did not identify the potential health hazard of the spoiling food. Figure 5 illustrates the conditions at two of these properties. HUD is in the process of changing its handling and disposition of single-family properties. These changes are motivated primarily by HUD’s larger effort to downsize the agency and to substantially reform management practices agencywide. HUD envisions that these changes, when implemented, will limit the need for REAM contractors’ services. Nevertheless, it appears that HUD’s property disposition operations will continue to rely on contractors’ services to some extent for the foreseeable future. In addition, there is still uncertainty about how HUD will implement some of the reforms it is planning and the extent to which the reforms will produce a feasible and effective alternative for achieving the goals of HUD’s property disposition process. HUD has been considering changes to its property disposition process as a part of its broader effort to fundamentally revise the agency’s organization and management under the HUD 2020 Management Reform Plan. An integral part of the 2020 Plan is the downsizing of HUD’s workforce from approximately 10,500 to 7,500 employees by the year 2002. Many of these staff reductions will come from single-family housing operations, including Real Estate Owned functions. 
According to HUD’s Single-Family Property Disposition Director, as of December 1997, approximately 475 staff members were supporting Real Estate Owned operations, but by the year 2002, this number is to be reduced to 66 employees. In addition to downsizing, the 2020 Management Reform Plan also identifies and seeks to address flaws in HUD’s current structure for single-family housing operations, including poorly controlled and monitored disposition of properties. As a part of the solution under the 2020 Plan, HUD is consolidating all single-family housing operations from 81 locations across the nation into four single-family homeownership centers (HOCs). The HOCs will carry out the work traditionally performed in HUD’s field offices, including oversight and management of contractors and sales of remaining inventory. According to Single-Family Property Disposition officials, the 66 staff devoted to Real Estate Owned operations under the 2020 Plan will be located in these HOCs. According to these officials, as of December 1997, some single-family housing functions had been transferred to some of the HOC locations, but the transition was still in process and no target date had been set for completing the consolidation. However, these officials said that a Real Estate Owned presence will be maintained in HUD’s field offices as long as necessary to carry out property disposition functions, up until the year 2002, when the Real Estate Owned portion of the downsizing plan is expected to be complete. This presence will be made up of staff who are not among the 66 assigned to Real Estate Owned positions at the HOCs and who choose to remain in their current positions while the changes to property disposition are being implemented. As part of its restructuring of single-family housing operations, HUD is also considering alternative methods for disposing of the Real Estate Owned inventory. According to Single-Family Property Disposition officials, the pursuit of alternative methods is motivated primarily by the significant decrease in Real Estate Owned staff resources, as well as by the increased number of properties in HUD’s inventory. In a June 1997 advance notice of proposed rulemaking in the Federal Register, HUD stated its intent to develop innovative methods for disposing of HUD-owned single-family properties. Specifically, according to the Deputy Assistant Secretary for Single-Family Housing, the Department plans to sell the rights to properties before they enter inventory, thus enabling them to be quickly disposed of once they become available. According to the Single-Family Property Disposition Director, as a result of these sales, HUD anticipates having only a minimal inventory of properties in the future and, therefore, only a limited need for REAM contractors’ services. In September 1997, HUD issued a request for proposals soliciting a financial adviser to help design a specific structure for these sales, which HUD refers to as “privatization sales.” Although the details of the privatization sales concept remain to be developed by the financial adviser, Single-Family Property Disposition officials envision that properties would be pooled on a regional basis and purchased by entities that could use their existing structures to sell the properties in the same way that HUD currently does, through competitive sales to individuals.
Rather than taking possession of a large number of properties at one time, buyers would receive a “pipeline” of newly acquired properties as they come into inventory, at a rate of about 3 or 4 per day. While HUD further develops the privatization sales concept, staff reductions and the transfer of functions to the HOCs are already in progress. According to Single-Family Property Disposition officials, field office staff are still responsible for managing and disposing of the existing inventory of properties, which numbered about 30,000 as of September 1997. According to these officials, until the privatization sales program is successfully implemented, Real Estate Owned staff will be responsible for disposing of the current inventory and any new properties coming into the inventory by using property management and marketing contracts similar to those issued under a recent pilot program, which tests the approach of contracting out all property management and marketing services. Furthermore, even after the privatization sales approach is implemented, there will likely continue to be a relatively small number of properties that HUD does not dispose of through privatization sales. For instance, HUD is considering retaining a percentage of foreclosed properties in inventory to sell to nonprofits and state or local governments. Such properties would be managed and disposed of using contracts similar to those used in the pilot. Under the pilot contracts, a contractor performs both the marketing functions traditionally carried out by HUD staff and the property management functions traditionally obtained through REAM contracts. Although the pilot program allows many of the tasks traditionally performed by HUD staff to be carried out by a contractor, according to a HUD official in one of the pilot locations, Real Estate Owned staff must still monitor the contractor’s performance. According to Single-Family Property Disposition officials, as operations are transferred to the four HOCs, these locations will be responsible for obtaining contracts similar to those under the pilot and for overseeing those contracts. The headquarters Single-Family Property Disposition Division is recommending to the HOCs that they acquire services similar to those under the pilot program to supplement existing staff in field offices with few remaining Real Estate Owned employees. HOCs would have the option of choosing which specific services to obtain under contract, depending on the needs of the field offices in their jurisdictions. Although the HOCs will have ultimate responsibility for overseeing property disposition contractors, staff will likely be designated in the field offices to monitor the contractors and report to the HOCs, to the extent that any Real Estate Owned staff remain in the field office locations. While HUD deserves credit for seeking improvements to its single-family property disposition process, it is not yet clear precisely how the reforms that HUD is pursuing will take shape and to what extent, if at all, they will be better than the existing process at meeting HUD’s property disposition goals of ensuring the highest return to the government on acquired properties, promoting homeownership, and strengthening communities. Furthermore, if the reforms do not work as HUD envisions, the Department will have a difficult time reverting to its current property disposition approach because the downsizing and consolidation of single-family operations are already under way.
As discussed above, the details of HUD’s plans to carry out privatization sales have not yet been formulated. As a result, it is difficult to assess the impact of the reforms on HUD’s property disposition goals. According to HUD staff, the reforms can improve on the current process by reducing HUD’s property disposition costs. They said that this view is supported by a study prepared in September 1997 by Hamilton Securities Advisory Services, Inc., a former HUD contractor. This study analyzed the costs of the current property disposition system and identified several alternatives to the current system. The study noted that although the revenue that HUD obtains on sales of single-family properties has been similar to housing industry standards, its property disposition costs have been “a little higher.” Accordingly, the study evaluated options, such as bulk sales of properties or awarding the right to sell properties to contractors, that could allow HUD to lower its property disposition costs and associated administrative costs. However, as the study noted, to the extent that alternative approaches result in lower returns to HUD because of purchasers’ increased risk and financing costs, savings in property disposition costs could be offset to some degree. Considering these factors, the study projected that bulk sales of properties or awarding the right to sell properties to contractors could achieve annual savings of $43 million or $183 million, respectively, over HUD’s current property disposition process. The study did not assess the potential effects of the reform options on HUD’s ability to promote homeownership or strengthen communities through the single-family property disposition process. Another uncertainty about HUD’s revised process is that it may take longer than anticipated to complete the transition to the privatization sales approach. According to Single-Family Property Disposition officials, HUD expects to publish a proposed rule amending the current property disposition regulations in about March 1998, have a financial adviser hired by April 1998, conduct the first privatization sale in the summer of 1998, and publish the final rule amending the current regulations by September 1998. The first sale would offer the rights to properties that HUD will acquire in fiscal year 1999. According to these officials, if the sale is national in scope, then new properties would stop coming into HUD’s inventory at the end of fiscal year 1998, and about 6 months into fiscal year 1999, only a relatively small inventory would remain. However, most of the details of the privatization sale concept have yet to be determined; for example, HUD does not yet know who will be the potential purchasers for these sales, or the scope of the first sale. Even if the first sale is on a national basis and HUD is able to meet its target dates, a sizable inventory of properties will continue to need management and marketing services until at least the middle of fiscal year 1999. If the first privatization sale is delayed, only partial in scope, or does not work according to plan, HUD’s sizable inventory will need property management and marketing services even further into the future. In any case, contractors are likely to remain involved in the property disposition process to some degree for the foreseeable future, to assist the decreasing field office staff in handling the current inventory and any future inventory of properties not sold through privatization sales.
Furthermore, if the privatization sale concept does not operate as well as hoped, according to Single-Family Property Disposition officials, HUD will rely heavily on contracts similar to those issued under the pilot. In light of this situation, it will continue to be important for HUD to ensure adequate controls over contractors’ activities. HUD’s single-family housing officials recognize that a system for monitoring contractors’ performance will be needed under the new approach; according to these officials, the function of the Real Estate Owned divisions within the HOCs will be almost exclusively to monitor contracts. Although these officials anticipate that a monitoring guide developed in connection with the three pilot contracts will be largely transferable to the HOCs’ monitoring operations, as of February 1998, they had not yet developed specific guidance for the HOCs to use in their monitoring role. Because HUD headquarters has no mechanism for routinely monitoring field offices’ oversight of REAM contractors, it has no assurance that its field offices are consistently and effectively applying HUD’s guidance for overseeing contractors’ performance. Although HUD’s guidance suggests and, in some instances, requires various methods for monitoring REAM contractors’ performance, such as conducting monthly on-site property inspections, maintaining files on contractors’ performance, and providing contractors with written results of performance evaluations, for the three field offices we reviewed, we found that these activities have not consistently been used in a way that assures HUD that REAM contractors are meeting their contractual obligations. As a result, field offices may extend contracts without current information on the quality of the REAM contractors’ past performance; do not consistently receive the timely information they need to make informed marketing decisions for the properties in inventory; and may compensate contractors for services that were not provided in accordance with contract requirements. In addition, we believe that oversight weaknesses at the three locations we visited have contributed to poor conditions at some of the properties in HUD’s inventory, including custodial properties, potentially decreasing the value of these properties and negatively affecting the surrounding neighborhoods. Although this is a transitional period for HUD’s Single-Family Property Disposition operations, with major changes being planned and implemented, there will continue to be a need for contractors to perform property management and/or marketing services into the foreseeable future. Furthermore, whereas property management services currently obtained under both the pilot and REAM contracts are overseen by field offices located in the same general area for which the contractors have responsibility, HUD staff in the future will be responsible for monitoring contractors’ activities throughout the nation from only four locations. Given this situation, it will be even more critical for HUD to ensure that it has effective systems in place to oversee property disposition contractors’ activities.
As Single-Family Property Disposition officials have acknowledged, the uncertainty about the potential impacts of privatization sales on HUD’s property disposition goals will require HUD to carefully monitor the effects of the new process as it is implemented and assess these effects in relation to the results under other possible alternatives, such as issuing management and marketing contracts similar to those under the pilot program. It will be difficult for HUD to revert to its current property disposition approach if its planned reforms do not work as HUD envisions because of the substantial downsizing and consolidation of operations that is already under way. We recommend that, so long as contractors are involved in providing asset management services for properties in HUD’s single-family inventory, the Secretary of Housing and Urban Development establish a process for monitoring the administration of such contracts at field offices and homeownership centers. This process should include controls sufficient to ensure that these field locations consistently implement HUD’s guidance and effectively oversee contractors’ performance. Specifically, these controls should require that (1) field locations complete performance evaluations of contractors (using the standard monitoring guide in HUD’s Property Disposition Handbook) prior to renewing contracts and communicate the results of these evaluations to the contractors in writing in a timely manner; (2) field location program offices maintain files on contractors’ performance; (3) HUD staff or contractors hired to perform monitoring duties conduct monthly on-site inspections of a sample of properties in inventory; (4) contracts contain clear and consistent requirements on when contractors’ routine inspection reports must be submitted to HUD for review; (5) HUD staff ensure that real estate asset management contractors notify HUD of deteriorated or hazardous conditions at custodial properties; and (6) HUD headquarters obtain sufficient information to monitor homeownership centers’ and field offices’ administration of the contracts. We provided a draft copy of this report to HUD for its review and comment. HUD’s Acting General Deputy Assistant Secretary for Housing told us that HUD takes the findings in our report very seriously and will take steps to ensure that properties identified as hazardous to buyers and neighborhood residents will be made safe. He noted that our report was based on a review of properties in three locations, a small sampling of the approximately 30,000 homes in HUD’s inventory. Nevertheless, the Department is reviewing its internal procedures for managing REAM contracts under the new homeownership centers to identify immediate corrective steps and to ensure that any management weaknesses that existed under the previous field structure will not recur in the new organization. As discussed in our report, the Department is also currently assessing alternative contract vehicles and other initiatives for managing and disposing of its inventory. HUD believes that these changes should strengthen management control over the property management and disposition process, result in a substantially reduced real estate inventory, and limit the use of REAM contracts. We conducted our review from July 1997 through February 1998 in accordance with generally accepted government auditing standards. (See app.
I for a discussion of our scope and methodology, including the statistical methodology we used for evaluating oversight of REAM contracts.) As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the date of this letter. At that time, we will send copies of this report to the Secretary of Housing and Urban Development. We will make copies available to others on request. Please call me at (202) 512-7631 if you or your staff have any questions. Major contributors to this report are listed in appendix II. As requested by the Chairman, Subcommittee on Housing and Community Opportunity, House Committee on Banking and Financial Services; the Chairman, Subcommittee on VA, HUD, and Independent Agencies, Senate Committee on Appropriations; and the Chairman, Subcommittee on Financial Institutions and Regulatory Relief, Senate Committee on Banking, Housing and Urban Affairs, we evaluated (1) whether the Department of Housing and Urban Development (HUD) is ensuring that real estate asset management (REAM) contractors meet their contractual obligations and (2) what actions HUD has planned or under way to change its handling and disposition of the single-family properties in its inventory. We obtained most of the information used to determine whether HUD is ensuring that REAM contractors meet their contractual obligations from HUD field offices since they are responsible for administering REAM contracts. Specifically, we performed audit work at the Massachusetts, Texas, and Illinois state offices. We selected Massachusetts because of past problems it had experienced with oversight of REAM contracts. We chose Texas and Illinois on the basis of their geographic locations and relatively large inventories of single-family properties (nationally ranking third and fourth, respectively, in the number of properties). To obtain information on HUD’s policies and procedures for monitoring REAM contracts, we reviewed the HUD Property Disposition Handbook and other relevant documentation. We discussed the implementation of these policies and procedures with single-family housing officials in both headquarters and the three field offices we visited. We also interviewed Administrative Service Center officials, who are responsible for awarding contracts, about contract administration issues. In the selected field offices, we reviewed property files and REAM contract files maintained by the Single-Family Housing Real Estate Owned Branch and other documentation related to oversight of contractors’ performance. We gathered information on oversight of REAM contracts by using an automated data collection tool to compile standardized information from the single-family property files. The sampling methodology we used to select case files for review and an explanation of the statistical precision of the samples we used are described below. To determine how well REAM contractors’ services were being provided, we inspected approximately 20 properties in each location. Using HUD’s inventory listing and information from property inspection reports, for each field office we judgmentally chose two geographically dispersed clusters of properties for inspection. One group of properties was located relatively close to the HUD office, while the second group was located several hours away from the office. In addition, we made site visits to REAM contractors’ offices in each field location to review their property and subcontractor files.
We also discussed contract obligations and contractors' policies and procedures with REAM representatives. To identify what actions HUD has planned or under way to change its handling and disposition of the single-family properties in its inventory, we gathered information on HUD's planned and ongoing efforts from HUD documents and discussions with the Director, Single-Family Property Disposition, other single-family housing officials, and HUD's Office of Inspector General.

This section describes the sampling methodology and statistical precision of the estimates we used in our review of single-family property files. To review documentation on oversight of REAM contracts, we used an automated data collection tool to compile standardized information from a sample of single-family property files at HUD's Illinois, Massachusetts, and Texas state offices. The data collected included dates of property assignments to REAM contractors and dates of property inspections by the contractors, HUD staff, or someone hired by HUD to conduct the inspections. We obtained a property inventory from the Single-Family Accounting Management System for the HUD field office in each location to identify the universe of properties listed for sale. Table I.1 displays the total inventory and properties listed for sale in each location. On the basis of the total number of properties listed for sale and the amount of time needed to review individual property files, we decided to review a minimum of 50 randomly selected files in each location. Although as of July 28, 1997, the Massachusetts State Office's inventory listing showed 57 properties listed for sale, 15 of them were under sales agreements as we conducted our property file review in July and August 1997. Therefore, we reviewed only 42 property files in the Massachusetts State Office because the files for the properties under agreement had been sent to a closing attorney and were unavailable to us.

Since we used a sample (called a probability sample) of property files to develop our estimates from the automated data collection instruments, each estimate has a measurable precision, or sampling error, which may be expressed as a plus/minus figure. A sampling error indicates how closely we can reproduce from a sample the results that we would obtain if we were to take a complete count of the universe using the same measurement methods. By adding the sampling error to and subtracting it from the estimate, we can develop upper and lower bounds for each estimate. This range is called a confidence interval. Sampling errors and confidence intervals are stated at a certain confidence level—in this case, 95 percent. For example, a confidence interval at the 95 percent confidence level means that in 95 out of 100 instances, the sampling procedure we used would produce a confidence interval containing the universe value we are estimating. Table I.2 provides the estimates and confidence intervals from single-family property file reviews in the Illinois and Texas state offices. Since we reviewed the files for all the properties listed for sale in the Massachusetts State Office, there were no sampling errors.

[Table I.2 column headings (table data not recoverable): description (as of our review date); average number of days between property assignment and initial inspection; missing initial inspection reports (percent); average number of days between initial inspection completion and HUD's receipt of the report; number of property files with inspection reports from REAM monitors (percent); number of property files with inspection reports from HUD Realty Specialists (percent).]
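To make the sampling-error arithmetic described above concrete, the following minimal Python sketch computes a 95 percent confidence interval for an estimated proportion from a simple random sample drawn from a finite universe of files, using a normal approximation with a finite population correction. The file counts are hypothetical, not the actual figures behind Table I.2.

```python
import math

def proportion_ci(successes, n, N, z=1.96):
    """95 percent confidence interval for a proportion estimated from a
    simple random sample of n files drawn from a finite universe of N files,
    using a normal approximation with a finite population correction."""
    p = successes / n
    se = math.sqrt(p * (1 - p) / n) * math.sqrt((N - n) / (N - 1))
    return p, max(0.0, p - z * se), min(1.0, p + z * se)

# Hypothetical example: 12 of 50 sampled property files missing an
# initial inspection report, from a universe of 200 listed properties.
p, low, high = proportion_ci(12, 50, 200)
print(f"estimate {p:.0%}, 95% CI [{low:.0%}, {high:.0%}]")
```

The plus/minus figure the report describes corresponds to the z * se term in this sketch.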
Major contributor to this report: John T. McGrail.
Pursuant to a congressional request, GAO reviewed: (1) whether the Department of Housing and Urban Development (HUD) is ensuring that real estate asset management contractors meet their contractual obligations; and (2) what actions HUD has planned or under way to change its handling and disposition of the single-family properties in inventory. GAO noted that: (1) HUD does not have an adequate system in place to assess oversight of real estate asset management contractors, and the three HUD field offices that GAO visited varied greatly in their efforts to monitor these contractors' performance; (2) none of the offices adequately performed all of the functions needed to ensure that the contractors meet their contractual obligations to maintain and protect HUD-owned properties; (3) GAO's physical inspection of properties for which the contractors in each location were responsible identified serious problems, including vandalism, maintenance problems, and safety hazards; (4) these included such things as broken windows, graffiti, leaking roofs, and broken steps; (5) these conditions may decrease the marketability of HUD's properties; decrease the value of surrounding homes; increase HUD's holding costs; and, in some cases, threaten the health and safety of neighbors and potential buyers; (6) in connection with HUD's plans to reduce staff by about 29 percent by the year 2002, HUD's single-family property disposition operations, including the real estate asset management function, are in a period of transition; (7) these changes are closely linked to HUD's agencywide 2020 Management Reform Plan; (8) they include: (a) a reduction in property disposition staff and the consolidation of all field offices' single-family housing operations into four homeownership centers; (b) plans to sell the rights to properties before they are assigned to HUD's property disposition inventory so that they can be quickly disposed of once they become available; and (c) to some degree, the use of contracts similar to a pilot program started in September 1996 to test the approach of contracting out all marketing and management functions associated with acquired properties; and (9) while HUD envisions that these changes will eventually limit the need for real estate asset management contractors' services, there will continue to be properties in need of such services for the foreseeable future, even if on a smaller scale.
VA's mission is to promote the health, welfare, and dignity of all veterans in recognition of their service to the nation by ensuring that they receive medical care, benefits, social support, and lasting memorials. In addition to its central office located in Washington, D.C., VA has field offices located throughout the United States, as well as the U.S. territories and the Philippines. The department's three major components—VHA, the Veterans Benefits Administration (VBA), and the National Cemetery Administration (NCA)—are primarily responsible for carrying out its mission. More specifically, VHA provides health care services, including primary care and specialized care, and it performs research and development to address veterans' needs. VBA provides a variety of benefits to veterans and their families, including disability compensation, educational opportunities, assistance with home ownership, and life insurance. Lastly, NCA provides burial and memorial benefits to veterans and their families.

The use of IT is critically important to VA's efforts to provide benefits and services to veterans. As such, the department relies extensively on IT to meet the day-to-day operational needs of its medical centers, provide veteran-facing systems, and otherwise support the department's mission. According to OI&T data as of October 2016, there were 576 active or in-development systems in VA's inventory of IT systems. These systems are intended to be used for the determination of benefits, benefits claims processing, and access to health records, among other services. VHA is the parent organization for 319 of these systems. Of the 319 systems, 244 were considered mission-related and provide capabilities related to veteran health care delivery. VHA's systems provide, for example, capabilities to support electronic health records that health care providers and other clinical staff use to view patient information in inpatient, outpatient, and long-term care settings, as well as patient admission to hospitals and clinics, and patient care through telehealth. The remaining systems support corporate or non-mission-related IT functions.

For fiscal year 2017, the department's budget request included nearly $4.28 billion for IT. Specifically, VA requested approximately $2.53 billion for sustainment, approximately $1.27 billion for payroll and administration, and approximately $471 million for new systems development or modernization efforts. According to OI&T, of the $471 million requested for VA development and modernization, approximately $166.6 million (about 35 percent) was requested to support VHA development projects such as the Veterans Health Information Systems and Technology Architecture, known as VistA Evolution, and other clinical systems development. In addition, $276.7 million (about 11 percent) of the $2.53 billion in sustainment funding was allocated to VHA-specific projects to support existing systems. The remaining amounts of requested funds support the other VA administrations, as well as overall IT infrastructure that is not necessarily aligned to any single administration. Figure 1 provides the breakdown of VA's proposed IT budget for fiscal year 2017.

Since 2007, VA has been operating a centralized organization in which most key functions intended for effective management of IT are performed by OI&T and led by the Assistant Secretary for Information and Technology/Chief Information Officer (CIO). Figure 2 presents a simplified organizational chart for VA.
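As a quick arithmetic check on the budget shares just cited, a few lines of Python reproduce the approximate percentages; the dollar figures come from the report, and the script itself is purely illustrative.

```python
# Figures (in millions of dollars) from VA's fiscal year 2017 IT budget request.
development_total = 471.0   # new systems development or modernization
vha_development = 166.6     # VHA development projects (e.g., VistA Evolution)
sustainment_total = 2530.0  # sustainment of existing systems
vha_sustainment = 276.7     # VHA-specific sustainment projects

print(f"VHA share of development funding: {vha_development / development_total:.0%}")  # ~35%
print(f"VHA share of sustainment funding: {vha_sustainment / sustainment_total:.0%}")  # ~11%
```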
OI&T has responsibility for managing the majority of VA's IT-related functions. The office provides strategy and technical direction, guidance, and policy related to how IT resources are to be acquired and managed for the department. According to VA, OI&T's mission is to collaborate with its business partners (such as VHA) and provide a seamless, unified veteran experience through the delivery of state-of-the-art technology.

The CIO serves as the head of OI&T and is responsible for providing leadership for the department's IT activities. The CIO reports to the Office of the Secretary of Veterans Affairs through the Deputy Secretary and advises the Secretary regarding the execution of the IT appropriation. In addition, the CIO is expected to serve as the principal advisor to top management officials, such as the Under Secretaries of each of the three administrations, on matters relating to IT management in the department. This official is also tasked with reviewing and approving investments, as well as overseeing the performance of IT programs and evaluating them to determine whether to continue, modify, or terminate them.

Although VA centralized its key IT functions in order to maintain better control over resources, we have previously reported that the office has faced challenges in fully implementing and managing IT under its centralized organizational structure. In addition, independent assessments of the department's efforts in 2013 and 2015 showed that OI&T has had difficulty in preventing IT activities from occurring outside its control. According to the assessments, it has also been challenged in effectively collaborating with the department's various business units and in efficiently and cost-effectively delivering new IT capabilities. Recognizing these challenges, the CIO initiated an effort in January 2016 to transform OI&T's focus and functions. Among other things, the transformation focused on reorganizing the units within OI&T. Beginning in April 2016, VA established five organizational units within OI&T with responsibility for performing and managing specific IT-related functions.

Enterprise Program Management Office. This office began initial operations in April 2016, and is intended to serve as OI&T's portfolio management and project tracking organization. According to OI&T, its goals are to align IT portfolios with the department's strategic objectives; enhance visibility and governance; analyze and report on portfolio performance metrics; ensure the overall health of the IT portfolio; and optimize resources for projects, people, and timelines.
The Enterprise Program Management Office includes the following six functional areas: (1) Intake and Analysis of Alternatives is to work with the VA administrations and other staff offices to develop requirements to meet the needs of veterans, provide analysis of alternative approaches to meeting those requirements, and integrate information security; (2) IT Portfolios is to consolidate programs and projects under five portfolios (Health, Benefits, Cemeteries, Corporate, and Enterprise services); (3) Project Special Forces is to mitigate issues that put projects at risk of failure; (4) Demand Management is responsible for metrics gathering and analysis, development of process tools, human resources, and training; (5) Transition Release and Support is to manage OI&T's integrated calendar supporting VA's Veteran-focused Integration Process; and (6) Application Management is responsible for IT implementation efforts, including testing, design, and data management.

Account Management. This function, led by four account managers, is responsible for managing the IT needs of OI&T's business partners—VA's administrations and staff offices, including VHA. Account managers are to interface directly with their customers to understand their needs, help identify and define the solutions to meet those needs, and represent their interests by reporting directly to the CIO. In this regard, account managers are to submit their customers' IT requirements to the Enterprise Program Management Office, ensure that their business needs are understood by OI&T, and ensure that business solutions are designed to meet their customers' specifications. This function is also tasked with advocating for the customers in the budget process. OI&T intends for this function to address the challenge of effectively collaborating with business units. As of December 2016, all four account managers were in place.

Quality and Compliance. This function is responsible for establishing effective policy governance and standards and ensuring adherence to the policies and standards. In addition, the quality and compliance function is charged with identifying, monitoring, and measuring risks across OI&T.

Data Management Organization. The organization is intended to improve both service delivery and the veteran experience by engaging with data stewards to ensure the accuracy and security of the information collected by VA. The organization is to institute a data governance strategy; engage with VA staff to ensure the accuracy and security of collected data; analyze data sources to form an enterprise data architecture; and establish metrics for data efficiency, access, and value. OI&T also intends for the organization to identify trends in the data collected on each veteran that could improve their health care by providing predictive care and anticipating needs.

Strategic Sourcing. This function is responsible for establishing an approach to fulfilling the department's requirements with vendors that provide solutions to those requirements, managing vendor selection, tracking vendor performance and contract deliverables, and sharing insights on new technologies and capabilities to improve the workforce knowledge base.

The VA Under Secretary for Health is the head of VHA and is supported by the Principal Deputy Under Secretary for Health, four Deputy Under Secretaries for Health, and nine Assistant Deputy Under Secretaries for Health.
Among these, the Deputy Under Secretary for Health for Policy and Services oversees the work of the Assistant Deputy Under Secretary for Health for the Office of Informatics and Information Governance within VHA. The Strategic Investment Management office, a division of the Office of Informatics and Information Governance, was established to support the IT needs of VHA by providing information on health-related information systems that senior managers need to make sound decisions. There are four organizational services within this office: Business Architecture, Investment Governance Services, Open Source Management, and Requirements Development and Management. Among other things, this office advocates for VHA's IT needs within the Planning, Programming, Budgeting, and Execution process and coordinates with VHA business owners and other VA organizations to support, document, analyze, and evaluate clinical and business needs and requirements for IT development.

The Strategic Investment Management office works closely with business owners and program offices within VHA to assist with the IT governance and budgeting processes, IT needs identification, requirements development, and investment oversight. For example, the Strategic Investment Management office works with program offices such as Pharmacy Benefits Management Services, Veterans Access to Care (scheduling and consults), and Community Care. These offices are responsible for key functions and IT systems related to health service delivery:

Pharmacy Benefits Management Services. This program office is responsible for providing organizational guidance on a broad range of pharmacy activities to the 260 pharmacies located in VA's medical centers and outpatient clinics. The office also has operational responsibility for all aspects of the department's seven consolidated mail outpatient pharmacies, with the exception of IT. The Executive Director of this office is responsible for identifying functional needs for medical center pharmacies and consolidated mail outpatient pharmacies and communicating those needs to OI&T for prioritization and planning to acquire pharmacy IT capabilities.

Veterans Access to Care (scheduling and consults). This program office is responsible for standardizing and coordinating system-wide administrative clinic operations and management. Specifically, the Executive Director serves as VHA's business owner and manager in collaboration with OI&T on matters regarding scheduling, including the department's electronic outpatient scheduling system.

Community Care. This program office is responsible for overseeing all VHA community care programs and business processes, such as determining veterans' eligibility to receive health care benefits and purchasing care from non-VA providers. Specifically, it is structured around six functional areas: eligibility, referral and authorization, a tiered network of community providers, care coordination, provider payment, and customer service.

As previously mentioned, an independent assessment recently noted that VHA and OI&T faced a number of challenges in collaborating to execute health IT improvements and in developing new and modernized capabilities. Specifically, the assessment, conducted in response to the Veterans Access, Choice, and Accountability Act of 2014 (Choice Act) and released in September 2015, stated that VHA and OI&T did not collaborate effectively.
The assessment found that VHA and OI&T often did not agree on priorities for executing their strategic plans and have struggled to identify, prioritize, and translate clinical goals and strategic initiatives reflected in the department's overarching planning documents into buildable, testable health IT requirements that resulted in measurable health care outcomes for the veteran. In addition, the report stated that VA's ability to deliver new capabilities for VistA had stalled and that, as a result, the VA health care system was in danger of becoming obsolete.

The Choice Act also established the Commission on Care (the commission). This independent entity evaluated veterans' access to VA health care and assessed how veterans' care should be organized and delivered during the next 20 years. In its final June 2016 report, the commission acknowledged that, although VHA provided health care that was, in many ways, comparable to or better in clinical quality than that generally available in the private sector, the care was inconsistent from facility to facility. According to the commission, health care also could be compromised by poorly functioning operational systems and processes. The commission's recommendations were intended to serve as a foundation for organizational transformation at VA.

We have also issued numerous reports that highlighted challenges facing VA's efforts to improve IT management. For example, in May 2010, we reported that, after spending an estimated $127 million over 9 years on its outpatient scheduling system project, VA had not implemented any of the planned system's capabilities and was essentially starting over by beginning a new initiative to build or purchase another scheduling system. We also noted that VA had not developed a project plan or schedule for the new initiative; department officials stated that VA intended to do so after determining whether to build or purchase the new application. We recommended that the department take six actions to improve key systems development and acquisition processes essential to the second outpatient scheduling system effort. The department generally concurred with our recommendations but has not provided information about its actions to implement four of the six recommendations.

In May 2016, we reported that VA's expenditures for its care in the community programs, the number of veterans for whom VA has purchased care, and the number of claims processed by VHA have all grown considerably in recent years. Due to recent increases in utilization of VA care in the community, the department has had difficulty processing claims in a timely manner. We reported that VA officials and claims processing staff had indicated that IT limitations, manual processes, and staffing challenges delayed claims processing. The department had implemented interim measures to address certain system challenges but did not expect to deploy solutions to address all challenges, including those related to IT, until fiscal year 2018 or later. Further, VA did not have a sound plan for modernizing its claims processing system, which we recommended it develop. The department concurred with this recommendation and stated that it intended to address the recommendation through the planned consolidation of its care in the community programs.

We have also recently reported on VHA's efforts to provide outpatient pharmacy services to approximately 6.7 million veterans.
Specifically, in June 2017, we reported that pharmacists cannot always efficiently view and share necessary patient data among VHA medical sites and cannot transfer prescriptions to other VHA pharmacies or process prescription refills received from other VHA medical sites through the system. As a result, pharmacists do not have the necessary data to efficiently make clinical decisions about prescriptions, which could negatively affect patient safety. In addition, we noted that VA's pharmacy system lacks certain capabilities, such as the capability for exchanging prescriptions with non-VHA providers, and does not provide a perpetual inventory capability. Among other actions, we recommended that VA update its pharmacy system to view and receive complete medication data, assess the impact of interoperability, and implement additional industry practices. VA generally concurred with our recommendations.

VA has established IT management processes that are partially consistent with leading practices. For example, the department has developed multiple IT strategic plans and related documents that identify its goals. However, these plans and documents do not include performance metrics that the department could use to track progress toward achieving its goals. Additionally, although VHA has an IT investment management process that is consistent with leading practices, VA's department-level IT investment board has been inactive and investment selection criteria have not been defined. Further, while VHA has defined a business architecture that identifies its core business functions, measurement of the extent to which those functions are supported by IT investments is incomplete.

Strategic planning is essential to help an organization define what it seeks to accomplish and identify the strategies it will use to achieve desired results. Our research and experience at federal agencies have shown that an agency must align IT goals with its strategic goals as part of an institutionalized set of management capabilities. An IT strategic plan outlines the agency's goals and identifies performance metrics that permit the agency to determine whether IT is making a difference in improving performance. The resulting plan effectively guides modernization efforts by serving as an agency's vision, or road map, and helps align its information resources with its business strategies and investment decisions. OMB has issued guidance for agencies to use in developing and maintaining a strategic plan that describes the agency's technology and information resource goals, defines the level of performance to be achieved, and demonstrates how the goals align with the agency's mission and organizational priorities. VA has also issued a directive that requires IT strategic planning to include outcome-oriented performance measures.

In accordance with leading practices, the department has produced multiple strategic plans, road maps, and supplementary guidance that describe the strategic direction for IT across the department. For example, OI&T has issued the following documents and guidance, which describe, among other things, the strategic goals and objectives, transformation priorities, and the future vision for VA IT.

The Fiscal Year 2013 through 2015 Information Resources Management (IRM) Strategic Plan and an associated Enterprise Roadmap. Together, these documents describe the department's IT strategic goals and objectives.
VA has taken steps to show alignment between the IT strategic goals and objectives and the VA Strategic Plan. For example, the objectives in the IRM Strategic Plan include, among other things, managing the IT portfolio and utilizing performance metrics for informed decision making. In addition, the Enterprise Roadmap describes additional OI&T goals and priorities, as well as select programs that are intended to support those priorities between 2016 and 2018. For example, the roadmap identifies health care modernization as one of VA's key IT investments.

Enterprise Technology Strategic Plan, Fiscal Years 2017 through 2021. OI&T has issued a strategy to achieve VA's IT vision, which is to lead the department as "a world-class organization that provides a seamless, unified veteran experience through the delivery of state-of-the-art technology." It sets priorities that are intended to guide decision making at the department. According to the plan, its priorities are in alignment with the MyVA continuous improvement initiative. Further, the plan describes the current technical environment. It also details a vision for a future IT environment that would utilize new and emerging technologies to improve information availability, information security, reusable shared services, modern applications, and scalable infrastructure.

Multi-Year Programming guidance. OI&T has issued annual guidance for the IT Multi-Year Programming process, which is intended to ensure that the IT appropriation is being directed to those investments that satisfy the most pressing mission requirements of the department. This guidance describes a number of strategic challenges faced by OI&T. For example, the guidance from recent years noted that the retirement of legacy systems and the increasing cost of sustaining those systems were two challenges that should be taken into consideration during the Multi-Year Programming cycle for decisions on IT investments.

While OI&T produced these strategic plans, road maps, and supplementary guidance related to IT, none of the documents includes the specific results-oriented performance metrics that are called for by VA's IT strategic planning directive and leading practices. For example, while the IRM Strategic Plan includes a strategic objective related to aligning investments with mission needs, it does not describe or point to a specific target to be achieved or related performance metrics for measuring progress against such a target.

In addition, VHA has taken steps to define a strategic direction for health IT by issuing its Health Information Strategic Plan for Veterans Health Administration Supporting VA Health Care Version 4.3 (HISP). According to the HISP, this strategy is to inform OI&T's IRM Strategic Plan. The HISP identifies strategic goals and objectives related to health IT within VHA. For example, one strategic goal included in the plan is to enhance health information processes and practices to ensure that VA health systems are efficient and cost effective, and have the capability needed to deliver quality medical care to veterans. According to the plan, two objectives for achieving this goal are to implement IT innovations that support efficiency in business operations, such as digitalization of business processes through the use of sensors or other monitoring and automation systems, and to implement a performance measurement capability to monitor and drive a culture of quality and safety.
However, VHA's HISP does not identify corresponding performance targets and metrics for the strategic goals and objectives it lays out. Further, this lack of performance targets and metrics has been a longstanding issue. For example, a previous version of the plan stated that a workgroup was established in October 2012 to identify performance goals and to create an initial report by May 2013. According to VHA officials, while VHA established a workgroup in October 2012 to identify performance metrics, the workgroup's recommendations were not adopted.

OI&T officials acknowledged that the department's strategic plans and related documents do not contain performance targets and metrics, but said that VA does report outcome-based operational performance metrics for each major IT investment to OMB's IT Dashboard. However, these metrics are not specific to the IT goals and objectives outlined in the IRM Strategic Plan and, thus, do not help report how VA is progressing toward achieving its strategic goals and objectives. Further, according to VHA officials, VHA offices are not staffed to identify, track, and report on IT performance measures. Because VA's IT strategic plans do not identify performance metrics that could be used to track progress toward strategic goals and objectives, VA and VHA lack the ability to accurately track progress toward providing IT systems that address VHA's business needs and support the performance of its mission.

According to leading practices for IT investment management, establishing and following a systematic and organized approach to investment management helps lay the foundation for successful, predictable, and repeatable investment decisions. Critical elements include instituting an IT investment board and ensuring that an organization develops the process by which IT investments are selected, reselected, and integrated with the process of identifying projects for funding. Depending on its size, structure, and culture, an organization may have more than one IT investment board, and each investment board may operate in accordance with its assigned authority and responsibility. In addition, the investment selection process should include structured reviews of IT proposals, the use of predetermined criteria for analyzing and prioritizing proposals, and analysis and documentation of decisions made to fund some proposals and not others.

VA has taken steps to establish a systematic and organized approach to IT investment management. Specifically, the department has integrated its investment management approach with its IT Multi-Year Programming cycle, which is the process used by OI&T to identify and prioritize business needs over a 5-year programming horizon. With the VA budget submission and data collected from the prior Multi-Year Programming cycle as the starting point for the annual process, OI&T uses the list of priorities from VHA, VBA, and NCA to develop an initial IT Program. VA has also instituted multiple levels of investment management, including establishing IT governance and a selection approach in VHA, in addition to department-level IT investment review boards. Within VHA, the administration has developed a governance structure for prioritizing business needs and selecting its IT investments based on those needs. This structure, formally established in November 2015, includes the following components.
Capability management boards: These four boards generally meet monthly to engage with program offices and assess and rank the priority of various business needs by scoring them with weighted criteria related to, for example, how the proposal aligns with VHA mission priorities and the expected benefits, as well as the impact of risk to VHA, the maturity of requirements, the complexity of the issue, and the dependencies between individual investments.

Integration Board: The co-chairs of each of the capability management boards generally meet monthly as the Integration Board to ensure that the prioritized lists submitted by each capability management board are consistent and that dependencies between the proposals are assessed. The Integration Board begins to incorporate cost estimates into the process, develops alternative scenarios for prioritization that anticipate OI&T budget allocations, and recommends a consolidated list of investment priorities to the IT Committee.

IT Committee: This committee is charged with setting VHA's IT strategic direction, overseeing its IT governance and needs prioritization process, and advocating for VHA's IT funding. Further, this committee is part of the National Leadership Council and is responsible for coordinating with the Council's other committees to ensure that IT needs are appropriately supported with funding that is consistent with VHA goals, and it resolves issues in the execution of the budget, including reprogramming, as appropriate. The committee provides a final list of prioritized investments to the National Leadership Council as part of the Multi-Year Programming process.

National Leadership Council and Under Secretary for Health: The National Leadership Council is VHA's advisory body for decision making and is composed of senior VHA leaders, including those within the Office of the Under Secretary for Health. This body is responsible for endorsing the VHA-related IT investment decisions that are submitted to OI&T. According to VHA officials, the administration negotiates with senior executives such as the Deputy Secretary of VA and the CIO in building the budget request that goes to OMB.

VHA's Architecture and Requirements Investment Work Group supports these governance boards by, for example, normalizing and analyzing the submitted IT needs and providing data and cost estimates to help the governance bodies make informed decisions. (See figure 3 for a depiction of VHA's IT governance structure.)

For its part, VA's department-level IT governance is composed of two boards that are assigned the responsibility of combining the business needs from VHA and the other business partners and formulating a final IT budget according to department-wide priorities. According to OI&T's IT Multi-Year Programming guidance, the initial list of programs and their associated funding levels proceeds through these boards for additional review, adjustment, and approval.

IT Leadership Board: According to its charter, VA's highest level IT investment board is responsible for, among other things, aligning IT resources with business needs, managing the projects, and developing and approving the IT budget.

IT Planning, Programming, Budgeting and Execution Board: The charter for this board states that it is to help facilitate the Multi-Year Programming process, monitor budget execution, and make recommendations to the IT Leadership Board regarding overall long-term plans.
According to VA officials, this board also is to make determinations on what projects are eligible for funding with the IT appropriation.

However, the IT Leadership Board has not met since July 2015 and is not currently functioning as the department-level IT investment board. Further, VA has not documented criteria that the board could have used to weigh tradeoffs between investments, determine whether one investment is funded over another, or identify how investments are reselected once they are operational. Because the board has not met, OI&T officials stated that an ad hoc group of senior executives was delegated responsibility for making IT investment decisions for the fiscal years 2017-2021 Multi-Year Programming cycle. However, VA did not document the criteria that this group used to make decisions, nor did the group document its decisions. For example, there was no documentation of the department's decision to not approve VHA's high-priority request for $45.8 million in proposed development funding to improve pharmacy IT capabilities in the fiscal year 2017 cycle.

According to OI&T officials, VA has been working to change its approach to department-level IT governance and investment selection as part of the ongoing transformation that has been occurring since January 2016. Among these changes, OI&T chartered 11 new governance boards by October 2016 that are to focus on various aspects of IT strategy, solutions, and standards. One of these boards—the Portfolio Investment Management Board—has been identified by its charter as the department-level IT investment review board to be responsible for integrating IT investment decisions with VA's mission, strategic plan, budget, and enterprise architecture. While the Portfolio Investment Management Board has been defined as the department-level decision-making body, officials said more time is needed to determine how the board's responsibilities will be carried out in relationship to the other 10 boards, which also are responsible for various aspects of IT projects, planning, and budgets. In addition, the Portfolio Investment Management Board and OI&T have not issued additional guidance or other documentation related to how the new IT governance structure will work to oversee management of IT across the department.

While the transformation of OI&T has the potential to improve the selection of IT investments going forward, the department has not yet documented criteria related to how decisions and tradeoffs will be made or fully demonstrated how the new structure will work. According to OI&T officials, the transformation of IT governance is an evolving process, and they plan to continue to improve the process for selecting IT investments and the budgeting process as the department builds the upcoming fiscal year 2019 through fiscal year 2023 budget submissions. However, without using a department-level board to govern IT investments and criteria for selecting them, the department risks wasting limited resources and funding investments that may not fully support VHA's most important business functions and priorities.

Leading enterprise architecture and investment management practices maintain that enterprise architecture can be used to link the organization's strategic mission value (performance results and outcomes) to its technical resources. As such, organizations should implement a methodology for ensuring that IT investments meet business needs and comply with the architecture.
In addition, the extent to which mission value is actually realized indicates progress toward the desired state defined in the architecture and should be periodically measured and reported.

VHA has employed a methodology to identify its core business functions in its enterprise architecture and has documented guidance for aligning or mapping IT needs and investments to those functions. These activities are performed to ensure that there is a link between what is reviewed during the investment-selection process and the business needs of the organization. Specifically, according to administration officials, the VHA Business Function Framework is the architectural model that describes the core business functions that are necessary to the mission of delivering health care services and supporting the needs of veterans, health care providers, and resource partners. This framework defines a total of 262 core business functions as part of the VHA business architecture. For example, one line of business described by the framework is "Deliver Health Care." Under this line of business, there are 86 supporting functions, such as "Provide Clinical Decision Support" and "Provide Nursing Services," which identify at a high level the core business functions necessary to deliver health care at VA.

According to VHA, the Business Function Framework is primarily used to show how business functions map to new service requests, requirements, and IT systems, the results of which are input into the new service request (NSR) database and the VA Systems Inventory (VASI). For example, VHA maps IT needs and investments (which can include multiple systems) to the Business Function Framework. VHA officials noted that every IT need and system is intended to be mapped to one or more of the defined business functions.

VHA has also taken steps toward measuring and reporting the extent to which mission value is actually realized. Specifically, the administration has mapped core business functions to existing clinical, operational, and outcome measures. According to the VHA Business Architecture team, available performance metrics were aligned to a number of core business functions for the fiscal years 2017 and 2018 reviews, and the results were provided to VHA capability management boards and could be viewed by board members. In instances where a metric indicated poor performance, proposed investments were assessed for their potential to help VHA improve its performance.

Nevertheless, measurement of the extent to which business functions are supported is incomplete. Specifically, VHA has aligned existing metrics with 65 of the 262 core business functions for the fiscal year 2017 Multi-Year Programming cycle. For fiscal year 2018, the team reported that it aligned metrics to 64 of the core business functions. According to VHA officials, the Business Architecture team would like to identify additional operational metrics used by VHA. However, the officials stated that the Business Architecture team is not staffed to identify, track, trace, and report on IT performance metrics and that IT metrics are OI&T's responsibility. Without aligning additional metrics to all core business functions, VHA is not positioned to effectively gauge the extent to which IT systems address its business needs and support the performance of its mission.

VA's IT systems are generally aligned to VHA core business functions, but the administration has unaddressed needs that indicate current IT systems do not fully support the functions.
To have an effective internal control system, an organization should design its information systems to achieve its objectives. The management processes discussed in this report (i.e., strategic planning, investment management, and enterprise architecture) are intended to help ensure that the department's investment decisions for IT systems address VHA's strategic and functional needs.

VASI shows that the vast majority of VHA's 262 core business functions are supported by the department's current IT systems or, according to department officials, do not have a need for system support. However, our review of new service requests, which are requests in the NSR database for identified IT needs submitted by VHA programs and business owners, determined that VHA's core business functions are not fully supported by systems. The NSR database contains needs that have been submitted over time, have not been addressed by an IT system, and thus provide an indication of functions that are not fully supported by systems. In this regard, as of October 2016, VHA had 2,772 requests for IT needs documented in the NSR database since 1998. Of these, approximately 817 were open requests—IT needs identified throughout VHA that had not been met. Further, 316, or about 39 percent, of these open needs are long-standing—they have been open for more than 5 years. Figure 4 provides a breakdown of new service requests as of October 2016.

According to department officials, requests are not of equal weight and vary in level of impact and work effort required. For example, the NSR database consists of requests ranging from the creation or modification of reports to the development of new systems. Nonetheless, these requests represent business needs that have not been met, which means there is functionality that is not being provided. The fact that business functions are not fully supported is further illustrated when reviewing needs associated with three program areas—pharmacy benefits management, scheduling, and community care—which all have open requests that represent long-standing, unmet IT needs. These programs are responsible for key functions and IT systems related to health service delivery.

Pharmacy Benefits Management Services. As of November 2016, the program office tracked more than 280 open requests to meet IT needs, approximately 38 percent of which were identified 5 or more years ago. For example, the office had a request from 2000 for the development of an inpatient pharmacy order interface to share pharmacy order information with external and commercial systems. In addition, the office had a 2013 request related to a project intended to enhance and modernize VistA Evolution Pharmacy. It also had two requests from 2014 related to a project intended to develop the ability to receive inbound electronic prescriptions and a project intended to address known patient safety issues.

Veterans Access to Care (scheduling and consults). As of late September 2016, the program office had more than 20 open requests. Approximately 32 percent of these requests were entered into the NSR database more than 5 years ago. For example, the program office tracked two requests from 2006 related to recommendations made by a VHA Consult Task Force group. The group was created in August 2004 to address disconnects among the consult package, the scheduling package, and the electronic wait list.
In addition, the office continues to track a request made in 2007 for the development of a scheduling application to address deficiencies including wait times, resource management, and user satisfaction in order to improve coordination of patient care. This significant long-standing request remains open after a decade, without plans for when and how an IT solution will be developed to address this business need.

Community Care. The program office, which has been established more recently than pharmacy benefits management or scheduling, was tracking more than 50 open requests as of late September 2016. Approximately 30 percent of these requests were entered into the NSR database more than 5 years ago but were still considered relevant to the community care program. For example, the office tracked a request from 2006 related to an IT solution for flagging emergency care claims. This request had been unaddressed for more than 10 years, and the absence of such a system resulted in labor-intensive and error-prone manual processes. Program officials stated that an IT solution to address this is scheduled for release by the end of 2017.

Multiple factors have contributed to VHA's core business functions not being entirely supported by the department's IT systems. VA spends a significant amount of money on sustaining existing systems, which department officials said has limited the funds available for enhancing or modernizing those systems or acquiring new systems to address VHA's unmet needs. Furthermore, according to department officials, VHA is challenged because the administration has more business needs than available resources and funding. Additionally, weaknesses in the IT strategic planning, investment management, and enterprise architecture processes previously discussed in this report have contributed to a lack of understanding of the extent to which VHA's business functions require additional IT system support to meet the needs and strategic goals of the administration. As a result, the department risks continuing to make investment decisions and tradeoffs that may fail to address gaps in IT support within its resource limits and may hinder the progress VHA is able to make in improving delivery of health care services to veterans.

To VA's credit, the department's IT strategic plans describe a vision and identify goals and objectives related to IT in general and to health IT within VHA. VHA and OI&T also have established a governance structure responsible for prioritizing the administration's business needs and reviewing IT investments for inclusion in the budget. In addition, VHA's core business functions are documented in its enterprise architecture and used to align business needs to IT investments as part of selecting investments. However, VA's partial implementation of effective IT strategic planning, investment management, and enterprise architecture has put the department at risk of being unable to fully support VHA with the information systems it needs to perform its mission of providing high-quality health care to veterans. Weaknesses in key processes leave VA unable to gauge the extent to which it is providing information systems that meet VHA's needs. Specifically, the department has not assigned targets or established metrics for measuring performance toward achieving its strategic planning objectives.
In addition, VA's department-level IT investment management activities have lacked implementation of governance boards, application of selection criteria, and documentation of investment decisions. Also, VHA has aligned metrics with only about one-quarter of the core business functions identified in its enterprise architecture. According to OI&T officials, the ongoing transformation of IT governance is intended to improve the process by which investments are made. However, the results of this transformation have yet to be fully documented and demonstrated. Thus, VA is not well positioned to meet VHA's information system needs.

Not surprisingly, VHA's IT systems fall short of meeting the needs of clinicians and the veterans they are to serve. While the administration's core business functions have been aligned to at least one of the department's current information systems, unaddressed business needs remain and indicate that the functions are not fully supported. Further, within three VHA program areas—pharmacy benefits management, scheduling, and community care—many identified IT needs have been unresolved or unfunded for 5 or more years. Thus, despite identifying and prioritizing needs, VHA's core business functions have not been fully supported by the department's current information systems and may remain unaddressed for a considerable amount of time. Until the department fully implements IT management processes in accordance with leading practices, it will lack assurance that its information systems fully support VHA's core business functions and delivery of health care services to veterans.

To assist VA in improving key IT management processes to ensure that investments support the delivery of health care services, we recommend that the Secretary of Veterans Affairs direct the Under Secretary for Health and the Chief Information Officer to take the following four actions:

Identify performance metrics and associated targets for the goals and objectives in the department's IT strategic plans, including the Information Resources Management strategic plan and the Health Information Strategic Plan, as they relate to the delivery of health IT and the VHA mission.

Ensure that the department-level investment review structure is implemented as planned and that guidance on the IT governance process is documented and identifies criteria for selecting new investments and reselecting investments currently operational at VHA.

Identify additional performance metrics to align with VHA's core business functions, and then use these metrics to determine the extent to which the department's IT systems support performance of VHA's mission.

Ensure that unmet IT needs identified by key program areas—pharmacy benefits management, scheduling, and community care—are addressed appropriately and that related business functions are supported by IT systems to the extent required.

In written comments on a draft of this report (reprinted in appendix II), VA agreed with our four recommendations. The department also provided information on actions it has taken or planned to implement our recommendations, including target completion dates for those actions.
For example, in its comments, VA asserted that it has taken steps that fully addressed our recommendation to ensure that its department-level investment review structure is implemented as planned, and that guidance on the IT governance process is documented and identifies criteria for selecting new investments and reselecting investments that are currently operational at VHA. Specifically, the department noted that it had established a new governance process in October 2016 and implemented it as planned. Further, the department provided, as an attachment to its comments, an updated charter for the Portfolio Investment Management Board (dated March 28, 2017) as additional evidence of the board's process for evaluating IT investments. In our follow-up on the department's implementation of our recommendations, we will assess whether the actions noted are fully responsive to this recommendation.

The department also discussed planned actions for addressing our recommendation related to identifying performance metrics and targets for the goals and objectives in VA's IT strategic plans. Specifically, the department described its intention to develop or revise and maintain performance metrics that align with strategic and health IT goals and objectives. VA also outlined steps the department intends to take in response to our recommendation that it identify additional metrics to align with VHA's core business functions and then use these metrics to determine the extent to which the department's IT systems support VHA's mission. These steps include developing a set of core metrics to provide continuous input into investment portfolio decisions and establishing a methodology for ensuring that IT investments are aligned to business needs and that expected outcomes are defined prior to making the investments.

Further, in response to our recommendation that it ensure that unmet IT needs for the pharmacy benefits management, scheduling, and community care program areas are addressed appropriately, the department stated that VHA leadership has recently reviewed all outstanding requests from these program areas to confirm their validity. In addition, the department stated that it plans to include the outstanding needs of these key program areas in its VHA IT Requirements Governance Process during fiscal year 2018 to ensure the needs are addressed in this multi-year planning review. According to VA, its actions in response to our recommendations are expected to be completed by the end of fiscal year 2018. If the department ensures that these and other activities it identified are appropriately documented and effectively implemented, then VA should be better informed to make IT investment decisions that improve the delivery of health care services to veterans.

We are sending copies of this report to the appropriate congressional committees, the Secretary of Veterans Affairs, the Under Secretary for Health, the Chief Information Officer, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions on matters discussed in this report, please contact me at (202) 512-9286 or pownerd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III.
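The capability management boards described earlier rank business needs by scoring them against weighted criteria. As a minimal Python sketch of how such weighted scoring can work in principle (the criteria names, weights, ratings, and need names below are hypothetical, not VHA's actual scheme):

```python
# Hypothetical weighted-criteria scoring for prioritizing IT business needs.
# Criteria and weights are illustrative only; weights sum to 1.0.
WEIGHTS = {
    "mission_alignment": 0.30,
    "expected_benefit": 0.25,
    "risk_if_unfunded": 0.20,
    "requirements_maturity": 0.15,
    "complexity": 0.10,  # here, lower complexity earns a higher rating
}

def weighted_score(ratings):
    """ratings: criterion -> rating on a 1-5 scale; returns the weighted total."""
    return sum(WEIGHTS[c] * ratings[c] for c in WEIGHTS)

needs = {
    "pharmacy order interface": {"mission_alignment": 5, "expected_benefit": 4,
                                 "risk_if_unfunded": 4, "requirements_maturity": 3,
                                 "complexity": 2},
    "report modification": {"mission_alignment": 2, "expected_benefit": 2,
                            "risk_if_unfunded": 1, "requirements_maturity": 5,
                            "complexity": 5},
}

# Rank the proposals from highest to lowest weighted score.
for name, ratings in sorted(needs.items(), key=lambda kv: weighted_score(kv[1]), reverse=True):
    print(f"{name}: {weighted_score(ratings):.2f}")
```

Documenting the weights and the resulting scores is what produces the audit trail of selection decisions that the report finds missing at the department level.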
The objectives of this study were to determine the extent to which the Department of Veterans Affairs' (VA) (1) information technology (IT) management processes are consistent with leading practices and (2) current IT systems support the Veterans Health Administration's (VHA) core business functions. To address the first objective, we compared VA's IT management processes for IT strategic planning, investment management, and enterprise architecture to leading practices that federal statutes, prior GAO reports, and the Office of Management and Budget (OMB) have identified to assist organizations with improving these processes. This comparison focused on the specific aspects of the processes that are intended to ensure that IT investments meet the business needs of the VHA organization. For example:

IT strategic planning: We identified the strategic plans and related planning guidance issued by the Office of Information and Technology (OI&T) and VHA that focused on IT systems and health care IT at VA. We reviewed the department's assertions in these plans for how IT strategic goals align with the goals of the VA Strategic Plan. We then compared the contents of the plans to leading practices identified from federal statutes, prior GAO reports, guidance from OMB related to IT strategic planning, and a relevant VA directive. In particular, we determined whether VA had taken steps to include strategic goals and objectives that define the levels of performance to be achieved as they relate to ensuring that IT supports the mission needs of the department and VHA, and whether it had established related metrics that are specific, verifiable, and measurable.

IT investment management: We analyzed charters and meeting minutes establishing and demonstrating the implementation of governance structures responsible for IT investments at VHA and the department level. We then compared the existing governance structure to critical processes and activities related to governance described in GAO's IT Investment Management framework. We also analyzed department documentation and guidance related to how business needs are identified and prioritized by VHA and selected by OI&T to be part of the budget for IT investments. We examined results of this process for the fiscal year 2017 budget formulation process and compared our analysis to critical processes related to investment selection described in GAO's framework. In addition, we interviewed officials familiar with the VHA prioritization process and with OI&T investment management and budget formulation processes to clarify department policies and guidance.

Enterprise architecture: We analyzed department documentation and interviewed cognizant officials about the steps taken to ensure that IT investments support the department's business needs and compared our findings to key elements described in GAO's Enterprise Architecture Management Maturity Framework and IT Investment Management framework. Further, we compared the number of metrics that VHA had aligned to business functions to the list of all business functions identified in the enterprise architecture to determine the extent to which the functions have associated metrics available to inform the investment management process.

To address the second objective, we examined department data to understand how VA might demonstrate that its IT systems are designed to meet its objectives.
First, we analyzed the VHA Business Function Framework (Version 2.11), which documents the VHA functional operations within the business architecture, to compile a list of all core business functions that VHA has determined are necessary to deliver health care. This framework provides the basis by which the department shows relationships between various components of the enterprise architecture and is used to help view, organize, and prioritize VHA's business activities. We then compared this list of core business functions to data in the VA Systems Inventory (VASI) database, which identifies VA's current inventory of IT systems and how they are mapped to the VHA core business functions. VHA officials noted that VASI is the authoritative source for business function mapping. We assessed the reliability of data from VASI and determined that the data were reliable for the purposes of our reporting objectives. For any core functions initially not aligned to a current IT system, we reconciled the differences with cognizant VA officials. Five functions (from a total of 262) could not be reconciled. We determined that this number, which represented less than 2 percent of the total number of functions, was not significant to our findings. While the results of this alignment demonstrated a relationship between many current IT systems and VHA's core business functions, they did not provide insight into how well the functions are being supported by those IT systems. We then analyzed data from VHA's new service request (NSR) database, which captures information related to business needs, such as IT enhancements, submitted throughout the department. We analyzed data from the NSR database to identify the number of requests in the database, when requests were entered, and the number of requests that remain open. Our analysis allowed us to describe the number of open requests, but it could not provide insight into the depth of work required for the requests themselves or the weight the business owners assigned to each open need, because the NSR database does not include data on the importance, level of impact, and work effort required to address each request. We found the VA data from VASI and the NSR database to be sufficiently reliable for the purposes of our reporting objectives and used the data as evidence to support our findings, conclusions, and recommendations. For each data set, we reviewed documentation related to the databases, such as the data dictionary; tested the data sets for duplicate records and missing data in key fields; and examined the relationships between data elements. We also interviewed department officials about data reliability and internal control procedures for the databases and interviewed knowledgeable officials about the results of our findings. We conducted additional analyses of three programs related to health service delivery on which we have previously reported—Pharmacy Benefits Management Services, Veterans Access to Care (scheduling and consults), and Community Care. Our review of NSRs for these program offices included verifying the open NSRs assigned to each office and interviewing cognizant VHA officials regarding the IT systems used by the three programs, the needs identification and management process (to understand the extent to which VHA business needs are being addressed), and the extent to which current systems support VHA core business functions in their respective areas.
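To make this NSR analysis concrete, the sketch below shows the kind of tally and reliability checks described above, assuming a flat extract of the NSR database. It is illustrative only: the file name and column names (request_id, program_office, status, date_entered) are hypothetical stand-ins, since the actual schema is internal to VA.

```python
# Illustrative sketch only. The NSR database is internal to VA; the file
# name and column names used here are hypothetical stand-ins for whatever
# fields the real extract uses.
import pandas as pd

nsr = pd.read_csv("nsr_extract.csv", parse_dates=["date_entered"])

# Reliability checks of the kind described above: duplicate records and
# missing data in key fields.
assert nsr["request_id"].is_unique, "duplicate request records found"
print(nsr[["program_office", "status", "date_entered"]].isna().sum())

# Tally open requests per program office and flag long-standing ones
# (open for more than 5 years as of the October 2016 review cutoff).
cutoff = pd.Timestamp("2016-10-01")
open_reqs = nsr[nsr["status"] == "open"].copy()
open_reqs["age_years"] = (cutoff - open_reqs["date_entered"]).dt.days / 365.25
summary = open_reqs.groupby("program_office").agg(
    open_count=("request_id", "size"),
    open_over_5_years=("age_years", lambda s: int((s > 5).sum())),
)
print(summary)
```

A tally like this reproduces the counts reported in the Highlights (for example, the number of requests open more than 5 years) but, as noted above, says nothing about the importance or work effort behind each request, which the database does not capture.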
The results of this analysis are not generalizable to all functional areas, but provide insight into the extent of IT support for the three specific programs. We conducted this performance audit from December 2015 to June 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. David A. Powner, (202) 512-9286 or pownerd@gao.gov. In addition to the contact named above, Mark Bird (Assistant Director), Jennifer Stavros-Turner (Analyst in Charge), Chris Businsky, Rebecca Eyler, Jacqueline Mai, Dwayne Staten, and Charles Youman made key contributions to this report.
VHA, an administration within VA, provides a broad range of primary care, specialized care, and related medical and social support services to veterans. In doing so, VHA operates one of the nation's largest health care systems through 168 VA medical centers and more than 1,000 outpatient facilities. The administration managed total budget resources of nearly $91 billion in fiscal year 2016. Based on interest in VHA's ability to oversee its health care system and provide timely care, GAO reviewed IT management at VHA. Specifically, GAO determined the extent to which VA's (1) IT management processes are consistent with leading practices and (2) current IT systems support VHA's core business functions. To do so, GAO analyzed documentation and interviewed officials about VA's approach to IT management processes related to strategic planning, investment management, and enterprise architecture, and compared VA's processes to leading practices. In addition, GAO reviewed data related to VA's IT systems and VHA's IT business needs. GAO further reviewed IT needs from three key VHA program areas. The Department of Veterans Affairs (VA) has established information technology (IT) management processes that are partially consistent with leading practices. VA has issued strategic plans that identify goals and objectives related to health IT; established investment review boards at the department level and within the Veterans Health Administration (VHA) that are responsible for selecting IT investments aligned to VHA priorities; and documented VHA's core business functions within an enterprise architecture. However, the IT strategic plans do not include performance measures and targets for their defined objectives, VA's department-level IT investment board has been inactive and its investment selection guidance lacks criteria, and the department has not fully identified metrics aligned to core business functions to inform investment decisions. Until VA improves these processes, it risks having IT systems that may not fully support VHA's mission. IT systems at VA are generally aligned to core business functions defined by VHA; however, among new service requests, which identify unmet needs of business owners, 817 of a total of 2,772 IT needs identified for VHA since 1998 had not been met as of October 2016. About 39 percent of these open requests had been open for more than 5 years. GAO's review of the business needs identified in three key program areas—Pharmacy Benefits Management, Veterans Access to Care, and Community Care—showed a number of long-standing needs. According to VA officials, the need to balance resources for IT needs across the department is one reason that business needs have remained unresolved. Until VA prioritizes resources to address these needs, VHA's programs may not be well supported by IT systems capable of delivering health care services consistent with its objectives. GAO is recommending that VA address the deficiencies identified with IT strategic planning, investment management, and enterprise architecture; and ensure that the three programs' IT needs are addressed. VA agreed with GAO's recommendations and described actions planned to address them by the end of fiscal year 2018.
To carry out its missions, DOE relies on contractors for the management, operation, maintenance, and support of its facilities. DOE headquarters and its field offices oversee 34 major contractors at DOE sites throughout the country. The activities that these contractors conduct serve a variety of DOE missions, such as managing environmental cleanup, including the safe treatment, storage, and final disposal of radioactive wastes; developing energy technologies for transportation systems, efficient building systems, and utilities; and maintaining the safety, security, and reliability of the U.S. nuclear weapons stockpile. In support of these activities, contractors’ staff travel domestically and internationally to collaborate with officials in DOE programs, other federal programs, industry, academia, and foreign countries. All of these trips add up to hundreds of millions of dollars spent on airfare, hotels, meals, and other direct travel expenses. DOE contracts spell out the allowable costs that contractors can charge for travel expenses. Although these contracts vary, the five contractors that we reviewed are generally allowed to provide for employees the actual and reasonable costs for lodging and transportation and a maximum daily amount for meals. Air travel is to be via coach or the lowest discount fare available. However, airfare discounts available to federal government employees are generally not available to contractors. Concerned with the cost of travel in its programs, DOE included travel cost reductions in its 1995 Strategic Alignment and Downsizing Initiative. This initiative aimed to reduce Department-wide funding by $1.7 billion over a 5-year period beginning in fiscal year 1996. The initiative targeted a $175-million cost saving for travel over the same period. This saving would be achieved by maintaining travel costs at a level $35 million below the fiscal year 1995 level. DOE’s fiscal year 1995 travel cost was $307 million, of which $261 million was for contractor travel and $46 million was for federal travel. DOE anticipated a $30 million saving each year from contractor travel and a $5 million annual saving from federal travel. According to DOE officials in the Office of the Chief Financial Officer, these reduction levels represented amounts that the Department believed to be reasonable and achievable savings goals. Travel costs incurred by DOE contractors were reduced in fiscal year 1996, but since then these costs have been increasing. Thirty-four DOE contractors reported that during the fiscal year 1996-98 period, they spent over $700 million on direct travel costs. Annual contractor travel costs were reduced to about $223 million in fiscal year 1996 but increased to about $241 million in fiscal year 1997 and to about $249 million in fiscal year 1998. More than half of the reported travel was incurred by five contractors at DOE’s Oak Ridge, Sandia, Los Alamos, and Livermore facilities. The details on the cost of travel and the number of trips reported by each of the 34 contractors are contained in appendix I. The increase in DOE contractor travel costs since fiscal year 1996 is more dramatic when contrasted with other variables, such as the contractors’ overall funding and staffing. For example, at the same time that travel costs were increasing, funding for contractors was decreasing. Specifically, travel costs increased 12 percent from fiscal year 1996 to fiscal year 1998, while overall funding to contractors decreased by about 1 percent. 
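These two trends compound. As a rough check using the reported percentages (so the result is approximate):

\[
\frac{\text{travel cost per dollar of funding, FY 1998}}{\text{travel cost per dollar of funding, FY 1996}} \approx \frac{1.12}{0.99} \approx 1.13,
\]

that is, travel's share of contractor funding rose by roughly 13 percent over the period, consistent with the per-$1,000 figures that follow ($18.32 / $16.24 ≈ 1.13).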
As a result, travel costs took a larger portion of the contractors' funding. For each $1,000 of contractor funding, the average amount needed for travel rose from $16.24 in fiscal year 1996 to $18.32 in fiscal year 1998. Similarly, while the number of trips taken remained fairly stable for this period, the number of contractor staff at the facilities decreased about 10 percent, increasing the average number of trips per person. Figure 1 illustrates the trends in travel costs, funding, staffing, and the number of trips over the past 3 fiscal years. The primary domestic and foreign destinations of DOE contractors were Washington, D.C., and Russia, respectively. Most of the travel conducted by contractors—96 percent—was to domestic locations. Trips to Washington, D.C., accounted for about 11 percent of all domestic trips. For fiscal year 1998 alone, 34 DOE contractor sites reported making over 20,000 trips to Washington, D.C., costing at least $20 million. More than percent of these trips were taken by five contractors. For example, Sandia National Laboratory reported taking over 4,500 trips to Washington, D.C., in fiscal year 1998, or the equivalent of about 87 trips each week. Albuquerque, New Mexico, which is the destination for such sites as Sandia and the DOE Albuquerque Operations Office, was the second most frequent domestic destination, accounting for 8 percent of the domestic trips taken. The remaining top destinations were Oakland/San Francisco, California; Las Vegas, Nevada; and Los Alamos, New Mexico. For foreign travel—accounting for 4 percent of the trips—contractors listed Russia as the top destination. From fiscal year 1996 to fiscal year 1998, DOE contractors took 3,829 trips to Russia, or about 15 percent of all foreign trips. The second most frequent foreign destination was the United Kingdom, which accounted for 6 percent of all foreign trips. The remaining top foreign destinations were Germany, France, and Japan. Costs are increasing for both domestic and foreign travel, but the greatest percentage increase is occurring in foreign travel. Although foreign travel represents only 4 percent of the trips, it represents 11 percent of the travel cost. From fiscal year 1996 to fiscal year 1998, foreign travel costs increased by about 53 percent. More frequent trips to Russia have significantly contributed to this increase. The number of trips to Russia increased 107 percent from fiscal year 1996 to fiscal year 1998, and the cost of these trips more than tripled, from about $2.2 million in fiscal year 1996 to about $6.7 million in fiscal year 1998. According to contractor officials, one reason for the increase in foreign travel, particularly to Russia, was a greater emphasis on nuclear nonproliferation work abroad. DOE contractors reported that most travel to domestic and foreign locations was for business purposes, that is, travel for purposes related to the mission of the facilities. This category accounted for about 70 percent of all travel for fiscal years 1996 through 1998. The next most frequent travel category was attending conferences. The remaining trips were for training, recruitment, and other purposes. Figure 2 provides information on the major travel categories reported by DOE contractor sites. The largest category—business—covered a wide variety of activities.
Our review of travel documentation showed that employees take trips to meet with DOE officials, perform field tests, conduct various reviews and inspections related to warhead components, or perform other activities directly related to accomplishing the contractor's mission. Some trips categorized as business had dual purposes, such as to attend a conference and to conduct meetings with industry. Although it was generally difficult to determine the reasonableness of such trips, we identified some business trips that were not directly related to or needed for accomplishing the facility's mission. For example, Los Alamos National Laboratory funded a number of trips for its employees to obtain a master of business administration degree, many of which were categorized as business trips. In fiscal year 1998, 24 laboratory employees were enrolled in courses held at the University of New Mexico's main campus in Albuquerque—about 100 miles from Los Alamos. These employees made at least 380 trips to attend class, incurring various expenses, including the cost of overnight hotel stays, rental cars, and meals. For example, one laboratory employee made 38 trips to Albuquerque in fiscal year 1998, spending $5,321. We brought this practice to the attention of DOE officials, who subsequently determined that the cost of travel and per diem while attending these classes is not justified and that, in the future, such costs will not be allowable under the contract. Attending conferences was the second most frequent travel category. For the 3-year period from fiscal year 1996 through fiscal year 1998, DOE contractors reported making 56,205 trips to conferences—about 15 percent of the categorized trips—costing about $59 million. However, this figure may be understated, since we found that, for at least one contractor, some conference trips were categorized as business trips. The DOE Inspector General has raised concerns about the large number of attendees at individual conferences. In a December 1998 report, the DOE Inspector General concluded that some conferences were attended by many DOE contractor participants. The report cited a May 1997 particle accelerator conference in Vancouver, British Columbia, that was attended by 520 DOE contractor employees (as well as 5 DOE employees), resulting in travel costs of about $1 million. In another case, 176 DOE and DOE contractor participants attended a January 1996 human genome conference in Santa Fe, New Mexico. The Inspector General also reported that, contrary to government policy, DOE had no internal procedures to minimize the number of conference attendees. In response to the Inspector General's report, DOE issued requirements and responsibilities for conference management on March 22, 1999. Among other things, the requirements are intended to better ensure that the number of DOE and contractor employees attending conferences is minimized. DOE is aware of the high costs being incurred for travel and has developed cost-reduction goals to help limit these costs. A substantial amount of these reductions was projected to come from the contractors. However, DOE has had limited success: although it surpassed its goal in fiscal year 1996, it did not reach its annual goals in subsequent years because it did not achieve the travel cost savings that it anticipated from its contractors.
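In concrete terms, the contractor portion of the annual goal and the reported costs imply the following (a rough check, since the reported figures are rounded):

\[
\text{annual contractor travel target} \approx \$261\ \text{million} - \$30\ \text{million} = \$231\ \text{million}.
\]

Against this target, fiscal year 1996 costs of about $223 million were roughly $8 million under, while fiscal year 1997 (about $241 million) and fiscal year 1998 (about $249 million) were roughly $10 million and $18 million over, respectively.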
To increase cost reductions in contractor travel, DOE and the contractors will have to take additional actions. These could include reducing the number of trips taken by contractor employees, obtaining lower airfares for contractors, and adopting best contractor practices for other allowable costs. DOE and the contractors have taken actions to reduce travel costs. In implementing its travel cost-savings initiative, DOE first set an overall cost-savings target and then allowed its contractors to establish their own cost-savings measures. According to DOE officials, the Department basically established an overall target—a reduction of about $35 million below the travel costs for fiscal year 1995—and conveyed to each contractor the specific targets necessary to achieve the total reduction. However, DOE did not establish measures to enforce these targets, nor was it prescriptive as to how these cost reductions were to be achieved. DOE contractors reported to us that they initiated a number of efforts to reduce travel costs. These activities included greater use of videoconferencing to reduce the number of trips and efforts to reduce the costs of airline tickets. For example, some contractors made block purchases of discount airline tickets, increased the use of Saturday night stays for travelers when feasible, and negotiated discounts on airfares. Furthermore, the contractors consolidated travel services and negotiated discounts on hotel rooms. However, while all five contractors that we visited were undertaking some efforts to reduce travel costs, none could provide us with an overall strategy or plan to achieve the initiative's travel cost-savings targets. Instead, the level and type of effort varied by contractor. For example, one contractor reported that as travel costs neared the target, contractor officials directed programs to limit their travel so that the target would not be exceeded. Officials for another contractor told us that they basically do not follow the targets. The contractors have not done their part to meet the target of a $30 million annual reduction in their travel costs. They met the target in the first year, achieving a $39 million, or 15-percent, reduction. Since then, however, contractor travel costs have risen each year, and by fiscal year 1998, travel costs were only $16 million—6 percent—below the levels for fiscal year 1995. Nevertheless, DOE is on track to meet its overall cost-savings goals, but only because DOE's federal travel costs have been reduced significantly beyond the expected $5 million annually. Federal travel costs have been reduced each year and for fiscal year 1998 represent a $15 million, or 32-percent, reduction from the level for fiscal year 1995. Figure 3 shows the amount of travel cost savings in both DOE federal and contractor travel, as compared with the expected savings targets. DOE contractors need to contribute a larger share of travel cost savings in order for DOE to meet its overall travel cost-reduction targets over the next 2 years. Cost-savings opportunities could result from improvements in travel management—the overall management of travel and trips taken—and travel cost control—the reduction of costs incurred while on travel. Although contractors are taking some cost-reduction actions in these areas, additional efforts are needed to reduce the number of trips and expand best practices for controlling travel costs. The quickest and potentially easiest way to reduce travel costs is to reduce the number of trips taken.
During fiscal years 1996-98, even though the number of contractor staff dropped and some contractors reported that they increased the use of video- and teleconferencing, the number of trips taken by DOE contractors was not reduced. The number of trips was approximately the same for each of the 3 years—about 200,000—according to the data that we obtained from contractors that were able to provide the number of trips for that period. Furthermore, some individuals take many trips. Some contractor employees have taken up to 52 trips in a single year, have been on travel status for over 200 days in a year, and have incurred travel costs as high as $96,000. Reducing the number of trips requires effective overall travel management. Yet there are few contractor management controls over the number of trips taken, which may help explain the contractors' overall lack of success in reducing trips. None of the facilities that we visited had established managerial controls over the extent of travel or set cost targets. In most instances, travel expenses were absorbed into a large program budget, limited only by each specific program's availability of funds. Although some managers whom we talked with said that they do review proposed travel to ensure that it has a programmatic purpose or limit attendance at conferences, they generally rely on their staffs to take trips only when necessary. In fact, the program managers responsible for approving travel told us that they were unaware of DOE's cost-reduction targets and therefore did not make specific efforts to reduce travel to meet them. Despite the contractors' reliance on their employees to limit the number of trips they take, individual travelers stated that they have little control over their travel. They said that much of their travel is dictated by the needs of the organizations providing the funding for their programs. Many staff whom we talked with stated that they had to take trips, particularly to Washington, D.C., that they felt were unnecessary. For example, one senior official from Lawrence Livermore told us that despite alternative options available, such as videoconferencing, he felt compelled to travel to Washington, D.C., 15 times in the past year to attend program meetings or risk a reduction in program funding. Another frequent traveler said that DOE officials ask him to travel to meetings in case technical questions arise; if no such questions are asked, he returns home having accomplished little. In most cases, travelers felt that they had to attend these meetings because they view DOE as their customer and the sponsoring program in Washington wanted their attendance. Contractor staff added that DOE often requires them to travel so that DOE staff do not have to, thus reducing DOE's travel costs while at the same time increasing contractors' travel costs. In the area of cost control, the biggest single element of travel costs is airfare. For example, about one-half of the travel cost incurred by the contractor at Oak Ridge was for the purchase of airline tickets. In contrast, airfare costs for DOE federal employee travel are much lower—about 35 percent of travel costs. A major reason for this difference is the airfare discounts that the federal government obtains for federal employees. The General Services Administration negotiates and contracts for discount airfares with airlines and generally obtains discounted, unrestricted fares.
These discounts, however, are not available to federal contractors, and the cost difference can be substantial. For example, a typical coach-fare flight from San Francisco to Washington, D.C., in September 1998 cost about $200 for a federal employee but about $1,300, on average, for a Lawrence Livermore employee. Efforts to get lower airfare rates have met with limited success. In the past, DOE contacted airlines and requested that they extend their federal discounts to DOE contractors. However, only one airline responded to DOE's request, and its proposal proved infeasible. The General Services Administration is currently considering plans to solicit proposals from airline carriers in 2000 for government contractor airfare rates. However, General Services Administration officials are not optimistic that, if such a solicitation is made, the airlines will respond favorably to it. We noted that contractors have had some success in this area. They are negotiating discounts directly with the airlines and have been successful in getting reductions from full-fare rates. Nevertheless, contractors could take additional actions to reduce the airfare costs they are incurring. The most significant action is obtaining nonrefundable tickets. A nonrefundable ticket is a ticket for which the purchase price will not be returned if the trip is canceled; however, the ticket can be exchanged for another for a small additional charge. Nonrefundable tickets are generally less expensive, and although the savings will depend on the individual circumstances—such as destination, ticket availability, ticket class, and the number of days the ticket is purchased in advance—they can be substantial. An internal audit report at Pacific Northwest National Laboratory found that the savings on nonrefundable tickets were typically around 50 percent. Specific examples that we identified also showed significant savings. For example, at Livermore one employee purchased a $1,602 refundable airline ticket to attend a conference, while another employee purchased a $414 nonrefundable ticket the next day to the same conference. In another instance, an employee purchased a $473 refundable ticket, also to attend a conference, while another employee purchased a nonrefundable ticket a week later to go to the same conference for $255. However, the usage of nonrefundable tickets varied greatly among contractors. Livermore's travel data showed that about 75 percent of the tickets purchased by travelers were nonrefundable, and Sandia estimated that about 65 percent of its tickets were purchased on a nonrefundable basis. In contrast, Los Alamos estimated its nonrefundable ticket usage at less than 5 percent. The contractors' travel management staff said that contractor employees are responsible for selecting the flights and tickets that they want to use and that the contractor encourages, but does not require, the use of nonrefundable tickets. They added that employees often do not like to use nonrefundable tickets because their travel plans frequently change or are canceled. Controlling other allowable travel costs that contractor employees incur could further reduce travel expenses. Consistent with its contract with DOE, each contractor has its own allowable rates or criteria for costs that its employees incur for hotels, meals, rental cars, and other incidental expenses.
However, in certain instances, costs allowed by some contractors are more generous than those allowed by others, as illustrated below:

- The contractors at Oak Ridge and Pacific Northwest National Laboratory use federal per diem rates as a general standard for allowable hotel costs. However, Lawrence Livermore and Los Alamos do not and, instead, allow hotel rates that are deemed reasonable. These allowable rates can be significantly higher than the federal lodging rates. We found instances where Lawrence Livermore allowed hotel costs of $284 per night in Washington, D.C.; $218 per night in Orlando, Florida; $303 per night in Monterey, California; and $176 per night in Las Vegas, Nevada. In each instance, the federal hotel rate, which other contractors follow, would have been over 50 percent less.

- Both Lawrence Livermore and Los Alamos allow actual daily meal costs of up to $46 on any domestic trip. In contrast, the meal costs allowed by other contractors were capped at the rate under federal travel regulations ($30 to $42 per day, depending on location) or, in the case of Oak Ridge, at $35 per day. Although not all Livermore and Los Alamos employees use the full $46 allowance, we did note instances where the full $46 meal allowance was charged every day.

- One contractor established a policy that allows employees to stay in higher-priced hotels when attending conferences but does not then allow the travelers rental cars. However, we saw instances where other contractors allowed their employees to stay in higher-priced hotels for conferences and to obtain rental cars. We noted one instance in which two employees from one contractor both went to the same conference in Atlanta, stayed in a hotel costing up to $158 per night, and obtained rental cars.

While some of these cost savings, taken individually, may not be substantial, they could add up to considerable savings taken together. For example, a reduction of just $100 on the average trip to Washington, D.C. (contractors reported over 20,000 such trips in fiscal year 1998 alone), would amount to total yearly savings of over $2 million. Other, more fundamental changes in allowable costs could result in greater travel savings. At least one contractor has established a policy for other allowable costs that resulted in lower rates than the federal per diem. The contractor at Pacific Northwest National Laboratory charged DOE the lower of the actual travel costs incurred by its employees or the federal per diem rate and shared with DOE any cost savings that it obtained below federal per diem rates. During fiscal year 1998, the contractor continued to follow this policy even though this savings incentive program was not included in its contract with DOE. However, according to contractor officials, the fiscal year 1999 contract with DOE likewise does not provide for this program, and it is therefore not being continued. DOE spends millions of dollars on the costs associated with management and operating contractor employees assigned temporarily or permanently to Washington, D.C. In fiscal year 1997, over 800 contractor employees were assigned to Washington, costing $76 million for the employees' salaries, living allowances, relocation costs, and other related expenses. DOE's Office of Inspector General raised concerns about the Department's awareness of, and control over, these assignments, and DOE has taken actions to reduce the number of employees on assignment and plans further reductions.
However, a concern remains about the payments that contractors are making to employees on long-term temporary assignments for their increased tax-related costs. Contractor employees are often assigned to Washington on a temporary—either short-term or long-term (more than 1 year)—or permanent basis to provide technical expertise associated with the stated mission of the employee's home facility. DOE requires that contractor staff assigned to Washington not provide administrative or management support or perform functions reserved for federal employees. Currently, some contractor employees have been in Washington for over 5 years, and at least 14 contractor employees have been there for over 10 years. The costs of employees on assignment are paid by specific DOE programs or, in some cases, are a general administrative expense paid by DOE under the various contracts. The costs to DOE for a contractor employee assigned to work in Washington can be significant, ranging from $5,000 per month to $29,000 per month (or as much as $348,000 per year). The costs for assignments include not only the employee's salary and benefits and applicable contractor charges but also expenses for moving the employee to Washington and various living allowances provided for the employee while on assignment. The living allowances are provided to offset the expenses that employees incur during an assignment. Each contractor has its own formula or methodology for determining this compensation, but it is generally tied to per diem rates for the Washington area. Under these formulas, the compensation can total $50,000 annually or more. Appendix III provides details on the additional compensation provided for employees on assignment for the five contractors we visited. Concerns about the cost and number of employees on assignment to Washington have been raised by DOE's Office of Inspector General. In December 1997, the Inspector General reported that DOE was spending at least $76 million annually for field contractor support in Washington, D.C. Furthermore, although the Department was required to maintain an inventory of these employees, it was unaware of the number of contractor personnel in Washington. The Inspector General identified over 800 field contractor employees in Washington—almost twice the number listed in the DOE inventory. Moreover, the Inspector General determined that, contrary to DOE requirements, many contractor employees were providing support and administrative services. DOE has since taken actions to reduce the number of, and improve its controls over, contractor personnel assigned to Washington. The Department has established a policy limiting the use of field contractor employees in Washington and has reduced the number of contractor employees on assignment there. According to a January 1999 DOE report to the House Committee on Appropriations, the Department reduced the number of employees on assignment to Washington by 235 as of January 1998 through attrition, reductions, and reassignments, and by an additional 59 as of January 1999. According to DOE, this brings the level of contractor assignments down to 379. DOE expects to reduce the number of contractor employees assigned to Washington by another 10 percent by the end of fiscal year 1999. DOE is also drafting an order that revises the requirements for the use and management of contractor employees.
Although DOE is addressing the issue of contractor employees in Washington, D.C., and reducing their number, concerns still remain about the living allowances that employees receive during their assignments. At four of the five facilities we visited, the contractors have a two-tiered living allowance that pays a higher amount for employees on temporary assignments of 1 year or longer. This is because the living expenses provided for employees become taxable when the assignment is longer than 1 year; consequently, the contractors provide higher additional compensation to offset the tax liability. For example, Los Alamos provides its employees on assignments to Washington, D.C., for longer than 1 year with (1) a basic living allowance of 80 percent of the federal lodging rate for the Washington area and (2) an additional 40 percent of the basic allowance, for a total of about $4,200 per month in fiscal year 1998 (that is, 112 percent of the monthly federal lodging rate). Only one contractor we visited—Battelle at DOE's Richland, Washington, location—did not follow this practice. However, Battelle officials said that they are currently requesting that DOE approve a revised living allowance that would include a higher rate for employees on assignments that last longer than 1 year. The allowability of these additional payments, however, is unclear. A DOE Notice provides requirements on headquarters' use of contractor employees. A specific objective of the notice is to establish limitations on payments to employees whose assignments exceed 1 year. The notice states that, for any assignment that exceeds 365 days, payments to the affected employee for any additional tax burden caused by the long-term assignment are unallowable in accordance with the Department's acquisition regulations. However, the cited acquisition regulations relate to reimbursed relocation costs for permanent changes of duty—not long-term assignments. DOE's Office of General Counsel recognizes that the Department does not have a consistent and well-articulated position on allowing contractors to pay for employees' additional taxes caused by long-term assignments. According to an Office of General Counsel official, there are valid arguments on both sides of the issue. Much depends on (1) the interpretation of contract provisions, or the absence of such provisions, that would make the payments allowable or unallowable and (2) whether, after a certain period of time, a temporary assignment becomes tantamount to a permanent relocation, such that the relocation rules should apply. According to the General Counsel official, the issue of long-term assignments of contractor employees to headquarters has top-level attention and concern within the Department and is being closely monitored by DOE management. The lack of substantial travel cost reductions from contractors stems largely from a lack of overall travel management by DOE and its contractors. In this regard, DOE has set targets for its contractors to achieve but has not enforced them or otherwise ensured that its overall contractor travel cost-savings target was met. For their part, DOE contractors were aware of the targets, but many did not translate this awareness into an overall strategy or plan to achieve lower travel costs. Furthermore, consistent practices for reducing costs have not been put into place.
In our view, it is difficult to justify why some contractors allow their staff to stay in high-priced hotels, purchase higher-priced airline tickets, or charge higher meal costs when others take stronger actions to minimize such costs. Similarly, the payments that contractors make to employees on long-term temporary assignments for tax-related costs are being implemented inconsistently, and the allowability of such costs has not been resolved. A number of relatively simple ways are available to achieve substantial cost reductions. However, a commitment—by both DOE and the contractors—will be required to reduce the number of trips, reduce the cost of airfares, and reduce other allowable travel costs. This means that DOE, as an organization, needs to make clear what cost reductions are expected, contractors need to improve both their travel management and travel cost control, and DOE program areas will have to lessen their travel demands on contractor staff. Furthermore, achieving cost reductions will require that DOE develop clear policies and guidance on the travel-related costs it will deem allowable. To reduce contractor travel costs, consistent with DOE's cost-reduction targets for travel, we recommend that the Secretary of Energy set travel cost targets for each contractor and require that contractors not exceed these targets. The target amounts should be conveyed to both the contractors and DOE program areas to ensure a combined commitment to achieving the cost reductions. Furthermore, to implement more consistent travel cost reimbursement practices, the Secretary should establish clear DOE policy on allowable costs—both travel costs and the reimbursement of tax-related costs—and, when new contracts are let, incorporate the policy into the contracts. DOE agreed that there are additional opportunities to achieve travel cost savings and generally concurred with the report's recommendations. With regard to the first recommendation, DOE stated that it will establish travel cost targets, in collaboration with program offices and contractors, to ensure a combined commitment to cost reductions. Furthermore, DOE will promote alternatives to travel and will heighten headquarters and field managers' awareness of the cost of contractor travel to headquarters. However, DOE did not specifically agree to require that contractors not exceed its travel cost targets. In our view, firm targets are necessary to provide DOE with the control needed to ensure that travel costs are effectively managed and that its savings objectives are achieved; consequently, we continue to recommend that DOE require that its contractors not exceed the targets that it establishes. In commenting on the second recommendation—to establish clear DOE policy on allowable travel costs—DOE said it will evaluate the merits of establishing standard rates, such as federal per diem rates, for the reimbursement of contractor travel. DOE also agreed to determine the appropriate treatment of the tax consequences of extended temporary assignments and promulgate departmental guidance, which will be incorporated into new contracts. The complete text of DOE's comments is included as appendix IV. To determine the amount of travel incurred by DOE contractors, the primary destinations of this travel, and the purposes of the travel, we collected data from the 34 management and operating contractors identified in DOE's Strategic Alignment and Downsizing Initiative.
We requested data on the cost of travel and the number of trips taken in fiscal years 1996 to 1998, as well as the most frequent travel destinations in each of those fiscal years. We also requested information on the purpose of travel, as well as each contractor's staffing and funding levels during fiscal years 1996 to 1998. We did not independently verify the data that the contractors provided. We also compared the data with other information available at the five sites we visited—the Oak Ridge National Laboratory and Y-12 Plant in Oak Ridge, Tennessee; the Hanford Reservation in Richland, Washington; the Lawrence Livermore National Laboratory in Livermore, California; the Los Alamos National Laboratory in Los Alamos, New Mexico; and the Sandia National Laboratories in Albuquerque, New Mexico. Such information included contractors' internal self-assessments of their data and travel systems and internal audit reports. In general, we found that the data were reliable for the purpose for which they were used. At each of the facilities we visited, we reviewed pertinent contracts, regulations, and guidance that detailed the controls over travel costs and the allowability of such costs. We obtained and reviewed internal audit reports on travel costs and the propriety of these costs. We also judgmentally selected contractor employees' travel vouchers for review to determine whether the costs for airfare, hotels, rental cars, and other expenses were appropriate. We met with travel officials at each facility and discussed with them the management of travel costs and the efforts being taken to reduce these costs. Finally, we interviewed travelers, supervisors, and managers to obtain their perspectives on the amount of and need for the travel taken and on methods for reducing travel costs. We obtained and reviewed documentation from DOE on its Strategic Alignment and Downsizing Initiative and its plans for achieving the initiative's cost-savings goals. We discussed with officials from the Office of the Chief Financial Officer and the Office of Management and Administration the Department's efforts to reduce contractor travel costs and obtained their viewpoints on the contractors' control of travel and efforts to meet the current cost-reduction targets. To examine the travel and other costs associated with contractor employees in Washington, D.C., we obtained information at each facility that we visited on the rationale and procedures for approving and conducting off-site assignments and obtained listings of the individuals on such assignments. We also obtained information on the additional compensation provided for employees while on assignment. Furthermore, we discussed contractor assignments to Washington, D.C., with DOE's Office of General Counsel and with DOE officials in the Office of Management and Administration who are responsible for maintaining the inventory of assignees and for developing management controls for contractor assignments. We obtained and reviewed relevant DOE documents and Inspector General audit reports that addressed the costs of, and controls over, contractor assignments. We conducted our review from August 1998 through March 1999 in accordance with generally accepted government auditing standards. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this letter.
At that time, we will provide copies of this report to Senator Ted Stevens, Chairman, Senate Committee on Appropriations; Senator Robert C. Byrd, Ranking Minority Member, Senate Committee on Appropriations; the Honorable Bill Richardson, Secretary of Energy; and the Honorable Jacob J. Lew, Director, Office of Management and Budget. We will make copies available to others on request.

[Appendix I table: travel costs, number of trips, and travel cost per $1,000 of funding for each of the 34 DOE management and operating contractors, including subcontractor travel reported to GAO.]

[Appendix III table: living allowances and related compensation that the five contractors reviewed provide for employees on assignment to Washington, D.C., including the higher allowances paid on assignments greater than 1 year to cover additional tax liabilities.]
Pursuant to a congressional request, GAO provided information on the Department of Energy's (DOE) contractor travel costs and DOE's efforts to reduce these costs, focusing on the: (1) travel costs incurred by DOE contractors and their primary destinations during fiscal years (FY) 1996 through 1998; (2) purpose of this travel; and (3) success that DOE has had in reducing contractor travel costs and additional actions available to reduce these costs further. GAO noted that: (1) travel costs incurred by DOE contractors were reduced from $261 million in FY 1995 to $223 million in FY 1996; (2) since then, travel costs have increased--to $249 million by FY 1998--even though funding to the contractors during this period had been decreasing; (3) about 96 percent of the contractors' travel was to domestic locations, the most frequent of these being Washington, D.C., and the sites of DOE's major laboratories and test facilities: Albuquerque, New Mexico; Oakland/San Francisco, California; Las Vegas, Nevada; and Los Alamos, New Mexico; (4) the most frequent foreign destinations were Russia, the United Kingdom, Germany, France, and Japan; (5) the purpose of most travel was reported as being for business reasons, that is, travel for purposes related to the mission of the facilities; (6) this category included trips to attend meetings or perform research; (7) GAO identified trips that were miscategorized or were of questionable value to DOE; (8) for example, business trips included travel to obtain advanced degrees; (9) the second most frequently cited travel purpose was attending conferences; (10) according to DOE's Inspector General, the large number of conference attendees is a concern; (11) the Inspector General identified hundreds of DOE contractor staff attending a 1997 conference in Vancouver, British Columbia, resulting in travel costs of about $1 million; (12) DOE's success in reducing contractor travel costs has been limited; (13) although contractor travel costs have increased since FY 1996, they have remained below the FY 1995 level--the level that DOE established as a baseline for calculating contractor travel cost savings; (14) only in FY 1996 did DOE attain the expected $30 million savings in contractor travel, by achieving a $38 million reduction in that year; (15) contractors did not continue to achieve such savings because DOE did not enforce its cost-reduction targets and some contractors did not have an overall strategy or plan to achieve lower travel costs; (16) DOE spends millions of dollars on travel and other costs for contractor employees on temporary or permanent assignment to Washington, D.C.; (17) DOE has reduced the number of contractor employees in Washington and is planning further reductions; and (18) however, concerns exist over the additional compensation that contractors are providing for employees on long-term temporary assignments to cover the tax liabilities on their living allowances.
Corporate credit unions occupy a unique niche among financial institutions. They are nonprofit financial cooperatives that are owned by natural person credit unions (that is, credit unions whose members are individuals) and provide lending, investment, and other financial services to these credit unions. For example, corporates offer loans to member credit unions, which in turn use these loans to meet the loan demands of their individual members. However, corporates are not the only financial institutions that provide products and services to credit unions. For example, some credit unions may also obtain loans from Federal Reserve Banks or Federal Home Loan Banks. Additionally, corporates offer credit unions investment products and investment advice, but credit unions can also obtain these services from broker-dealers or investment firms. Finally, corporates also offer automated settlement, securities safekeeping, data processing, accounting, and electronic payment services, which are similar to the correspondent services that large commercial banks have traditionally provided to smaller banks. With an emphasis on safety and liquidity, corporates seek to provide their members with higher returns on their deposits and lower costs on products and services than can be obtained individually elsewhere. However, corporates' limited ability to generate profits—as nonprofit institutions, owned and controlled by their primary customers—constrains their ability to build a financial cushion against adverse financial conditions or unexpected losses. Since 2000, corporates have experienced deposit inflows from natural person credit unions that increased corporates' assets and shares. Corporates act as a "liquidity sponge" for the underlying natural person credit union system, and the cyclical rise and fall of corporates' assets and shares (deposits) are rooted in the deposit flows of the natural person credit unions. Thus, these inflows and outflows of deposits, which are beyond corporates' control, affect their measures of financial strength—such as profitability and capital ratios. As we discuss later in the report, this has exacerbated the stress on their financial condition. Since 1992, the number of corporates in the corporate network has decreased, with assets more concentrated in larger institutions. (See fig. 1 for an illustration of the network's geographic distribution.) Mainly as a result of mergers, corporates have decreased in number from 44 at the end of 1992 to 30 as of December 31, 2003, excluding U.S. Central. On average, corporates also have become larger, with the median asset size (excluding U.S. Central) increasing from $450.6 million in 1992 to $1.2 billion at the end of 2003. However, the corporate network still encompasses small and large institutions, ranging in size from $7.3 million in assets to $25 billion, as of December 31, 2003. In addition, asset concentration in the network has become more pronounced since 1992. Excluding U.S. Central, at the end of 1992 the three largest corporates accounted for approximately 42 percent of the corporates' total assets. By the end of 2003, these corporates accounted for roughly half of corporates' total assets, and the largest corporate accounted for about one-third. As shown in figure 2, the credit union industry is organized into three closely connected groupings. At the "top" or retail level, as of December 31, 2003, there were 9,488 credit unions that served roughly 82 million individual customers.
In the middle are the 30 corporates, which serve credit unions by investing the cash they have not lent out and by providing loans and other financial services to the credit unions. Finally, on the “bottom” or wholesale level is U.S. Central, which provides corporates a range of products and services similar to those that corporates provide to credit unions. Since their inception, corporates’ primary functions have been to accept deposits and make loans to their members. Today, corporates also provide investments and other financial services to credit unions, and over time they have broadened the types of products and services they offer. Most corporates offer electronic services, such as the Automated Clearing House (ACH); correspondent services, such as settlements with the Federal Reserve and other financial institutions; check services, including collection and settlement of money orders and traveler’s checks; credit card settlement; and education and training. In addition, corporates now offer or plan to introduce new products and services such as online training, electronic bill payment, Internet banking, asset/liability management (ALM), and brokerage services. For more detailed information on the products and services corporates offered or planned to offer, see appendix IV. While the first credit union in the United States started in 1909, the first corporate did not start operations until 1968. Many corporates grew out of the various state credit union leagues and initially served only single states or regions. Over time, corporates were granted national fields of membership that allowed them to expand the number of credit unions they served. While corporate credit union membership can be national, corporates can have either a state or federal charter. As of December 31, 2003, 18 of the 31 corporates, including U.S. Central, were state-chartered. In terms of oversight, NCUA has authority for supervision and examination of federally chartered corporates. Under the dual-chartering system, the supervisory authorities of states that have state-chartered corporates are primarily responsible for supervision of these institutions. However, since all corporates provide deposit, liquidity, and correspondent services to federally insured credit unions, NCUA also has regulatory authority over state-chartered corporates and assesses the risks that federally insured and noninsured state-chartered corporates present to the National Credit Union Share Insurance Fund (NCUSIF). This assessment, which is essentially an examination of the corporates’ operations, is performed jointly with state supervisory authorities during their examinations of state-chartered corporates. Part 704 of NCUA’s regulations, together with the relevant provisions of the Federal Credit Union Act of 1934, constitutes the primary federal regulatory framework for both state- and federally chartered corporates. NCUA first issued Part 704 in 1982. Since our previous report on corporates in 1991, NCUA has made significant revisions to Part 704, in 1998 and 2002, relating to risks, capital, investments, and other areas covered by this report. Like other financial institutions, corporates face a challenging business environment that affects their financial condition and is characterized by increasing competition, changing product and service offerings, and rapid technological advances.
Moreover, recent pressure from a low-interest-rate environment and rapid growth in assets has put additional stress on the corporate network’s profitability and capital ratios. While net income levels have grown since 2000, corporates’ profitability was lower in 2003 than in 1993. Because rapid asset growth depresses profitability, it limits corporates’ ability to generate sufficient retained earnings—the primary component of their capital. Although overall capital levels have been rising, corporates have been relying more on less permanent (and relatively weaker) forms of capital. Additionally, rapid asset growth and the relatively slower growth in retained earnings have put pressure on corporates’ capital ratios, which could be a cause for concern since capital ratios are an important indicator of financial strength. Growth and changes in corporate investments, such as recent shifts of more of the corporates’ investment portfolios into potentially higher yielding and more volatile securities, may increase interest-rate risk if the investments are not managed properly. In particular, the percentage of corporate investments in obligations of U.S. Central has declined while the percentages of corporates’ investments in privately issued mortgage-related and asset-backed securities have increased. Corporates appear to be managing risk by shifting toward more variable-rate and shorter-term securities, providing a potentially better match for the relatively short-term nature of their members’ deposits. However, a regulatory change effective in 2003 allowed certain corporates to purchase securities with lower credit quality (more credit risk), raising implications for NCUA oversight since this activity may lead to increased risk if it is not managed properly. Corporate credit unions are operating in a challenging business environment characterized by increased competition, pressure to increase returns on their investments in a low-interest-rate environment, and the need to invest in technology and personnel to meet the demands of their credit union members for new and more sophisticated products and services. To obtain the corporates’ views on their business environment, we distributed a questionnaire to the entire network and achieved a 100 percent response rate. The corporates reported that they faced competition from outside the corporate network from entities such as banks, broker-dealers, the Federal Reserve System, and Federal Home Loan Banks. About 87 percent of the corporates reported that they also faced competition from other corporates despite the cooperative nature of the network. In addition, in recent years (since 2000), corporates have received a large inflow of deposits from their natural person credit union members, which had increasing amounts of unloaned funds because of the “flight to safety” that occurred in the wake of the stock market downturn. These inflows increased corporates’ assets, pressuring them to ensure that they received sufficient returns when investing these funds to maintain adequate capital levels and fund operations. However, over the last several years, low interest rates have reduced the returns that corporates could obtain on their investments, which has put stress on their overall profitability. Finally, the corporates stated that they faced a rapidly changing marketplace, particularly related to the increased demands from credit unions for more sophisticated products and services such as electronic banking.
The strategies corporates have employed to respond to their challenging business environment can have positive or negative impacts on their overall financial condition. For example, over time, corporates have increasingly invested in securities such as privately issued mortgage-related and asset-backed securities and less so in obligations of U.S. Central, suggesting that they are seeking to enhance the yields on their investments. As corporates shift their investments into potentially higher-yielding securities, the network could face increased risks if individual corporates do not have adequate infrastructure in place to manage the risks associated with their investments. Increasing competitive pressures may have encouraged consolidation, through mergers within the network, as corporates sought to achieve economies of scale. Consolidation is likely to continue, as 7 of the 30 corporates responding to our questionnaire stated that they were likely to merge or would consider merging in the next 2 years. Industry observers have noted that mergers are an effective strategy to attain the economies of scale necessary to afford investments in technology and skilled personnel; however, if poorly implemented, mergers can impair operating performance. The recent and expected consolidation activities within the network could affect the financial condition of the acquiring corporate, as well as the corporate network. Finally, based on the responses to our questionnaire, corporates reported that they have been forming strategic alliances with other corporates to provide member credit unions with sophisticated products and services such as online banking and business lending services. Industry observers have viewed these alliances as an effective approach to meet the demands of members while distributing the costs among several corporates. However, as corporates move into new areas to meet the demands of their members, they need to maintain sufficient retained earnings and capital levels. Despite generally rising net income levels since 1995, the profitability of corporates has declined recently due to the low-interest-rate environment and large inflows of deposits from natural person credit unions. More specifically, as shown in figure 3, while corporates’ net income has fluctuated since 1992, it has grown overall since 1995. While profitability generally remained within ranges prevalent in the industry since the mid-1990s, it was lower at the end of 2003 than at the end of 1993. Also, as shown in figure 3, profitability—the net income corporates realize on their assets—was relatively stable in the mid-1990s but has been trending downward since 2001. Effectively, the recent lower-interest-rate environment has narrowed the difference between what corporates earn on their investments and what they pay to their members. (Appendix V provides more details on corporates’ income and operating expenses.) Profitability is an important indicator of financial condition, as it is a key determinant of the sufficiency of a corporate’s retained earnings. Retained earnings are the primary component of a corporate’s capital, representing that corporate’s financial strength and its ability to withstand adverse financial events. The recent downward trend in corporates’ profitability has slowed growth in their retained earnings and capital compared with their assets.
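To make the relationship concrete, profitability here is essentially a return-on-assets (ROA) calculation; the formula is standard, and the dollar figures below are illustrative assumptions rather than reported values:

\[
\text{ROA} = \frac{\text{net income}}{\text{average assets}}
\]

Under this arithmetic, a network earning an assumed $200 million of net income on $40 billion of average assets posts an ROA of 0.50 percent; if deposit inflows push average assets to $60 billion while net income stays flat, ROA falls to about 0.33 percent. This is the mechanism by which rapid, largely uncontrollable asset growth depresses profitability even when income itself is not shrinking.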
The overall level of capital at corporates has steadily increased since 1998, in part due to regulatory changes that allowed corporates to use other, less permanent (or relatively weaker) forms of capital in addition to retained earnings. Corporates have been increasingly relying on these relatively weaker forms of capital. However, since 2000 capital ratios have declined as growth in assets outpaced growth in capital. The increasing reliance on less permanent forms of capital and corporates’ generally constrained ability to build capital in periods of stress raises a potential concern about the financial strength of the corporate network. As shown in figure 4, the overall level of capital at corporates has steadily increased since 1998. This is due in part to regulatory changes that allowed corporates to use other, less permanent (or relatively weaker) forms of capital in addition to retained earnings. Beginning in 1998, Part 704 of NCUA regulations expanded the definition of regulatory capital by defining capital as the sum of reserves and undivided earnings (that is, retained earnings) and permitted corporates to include two other, less permanent forms—paid-in capital and membership capital. More specifically, reserves and undivided earnings include all forms of retained earnings, including regular or statutory reserves and any other appropriations designated by management or regulatory authorities. NCUA currently defines “core capital” for corporates in Part 704 as retained earnings plus paid-in capital. Retained earnings, which are internally generated, are the most permanent form and the primary component of corporates’ capital. Both paid-in capital and membership capital, which are from external sources, are less permanent forms of capital, suggesting they provide a relatively weaker cushion against adverse financial events. Prior to July 1, 2003, paid-in capital was defined as a member deposit account with an initial maturity of at least 20 years. However, NCUA now requires paid-in capital to be a more permanent form of capital (a perpetual dividend account), available to cover losses that exceed reserves and undivided earnings. NCUA had noted that, due to its high cost, paid-in capital would be used by corporates as a bridge during short periods of stress, such as rapid growth, and should not be used for long periods. While NCUA’s redefinition of paid-in capital has increased the relative permanence of this form of capital, membership capital represents funds contributed by members that have either an adjustable balance with a required notice of withdrawal of at least 3 years or are term certificates with a minimum term of 3 years. As such, membership capital is probably best thought of as a form of subordinated debt, which can protect the insurance fund in the event of a corporate failure. As shown in figure 4, corporate capital rose from $2.9 billion in 1998 to $5 billion at the end of 2003. Retained earnings accounted for 41 percent of total capital in 1998 but declined to around 36 percent of total capital at the end of 2003. Paid-in capital increased from around 6 percent of total capital in 1998 to around 10 percent in 2003. Membership capital shares have consistently represented the largest percentage of capital, typically around 50 percent, and have been steadily accounting for a greater percentage of capital since 2000. 
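Applying the reported percentages to the year-end 2003 total gives a rough decomposition (the dollar amounts are our approximations derived from the figures above, not separately reported values):

\[
\$5.0\text{ billion} \times 0.36 \approx \$1.8\text{ billion (retained earnings)}, \qquad
\$5.0\text{ billion} \times 0.10 \approx \$0.5\text{ billion (paid-in capital)},
\]
\[
\text{remainder} \approx \$2.7\text{ billion (membership capital)}.
\]

On this arithmetic, roughly $3.2 billion of the $5 billion total, or about two-thirds, came from the two externally sourced, less permanent forms of capital.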
Thus, while the capital of corporates continues to rise, corporates have increasingly relied on less permanent (that is, relatively weaker) forms of capital. While this is a method corporates can use to increase capital during periods of rapid growth in assets, it does raise concerns about the ability of the network to withstand financial shocks, especially in light of the increasingly challenging business environment corporates face. While the total capital of corporates has steadily increased since the late 1990s, capital ratios have declined since 2000 as growth in assets outpaced growth in capital. NCUA currently specifies three capital ratios: the capital ratio, which includes all forms of capital relative to moving daily average net assets (DANA); the core capital ratio, which includes core capital (retained earnings plus paid-in capital) relative to moving DANA; and the retained earnings ratio, which includes reserves plus undivided earnings relative to moving DANA. As depicted in figure 5, these capital ratios were lower in 2003 than in 1998 despite generally rising capital levels. As assets have increased, corporates have been unable to generate sufficient capital to maintain their capital ratios. In particular, after peaking in 2000, capital ratios declined as the corporates’ asset base—which inversely affects the capital ratio—increased by more than 80 percent during the same period. Despite recent declines, at the end of 2003 the capital and retained earnings ratios remained in excess of their current respective regulatory requirements of 4 percent and 2 percent. Given corporates’ role in serving their members, their generally low earnings continue to present a challenge and a potential weakness—corporates generally rely on retained earnings to build permanent capital—and could strain the corporate network in the future. As a result, corporates’ capital ratios, although above current regulatory requirements for safety and soundness purposes, are vulnerable to erosion from factors such as rapid inflows of deposits that corporates may not be able to control. Although assets have grown through the recent influx of deposits, corporates have continued to allocate them almost exclusively to investments (rather than to other assets such as cash, loans, or fixed assets). With this growth, the percentage of corporates’ investments in obligations of U.S. Central has declined somewhat, particularly for the largest corporates. In response to the low-interest-rate environment, corporates have moved relatively more of their investments into potentially higher yielding—and more volatile—securities. The largest corporates also appear to be managing interest-rate risk by shifting toward more variable-rate and shorter-term securities, providing a potentially better match for the relatively short-term nature of their members’ deposits. A regulatory change effective in 2003 also allowed certain corporates to purchase securities with lower credit quality, but few have used this investment authority. It is not clear, however, to what extent corporates might use this investment flexibility in the future, raising implications for NCUA oversight since this activity may lead to increased credit risk if it is not managed properly. Corporates’ investments have grown with the recent inflows of deposits from natural person credit unions. Investments, which include asset-backed securities, commercial debt obligations, mortgage-related issues, and U.S.
government obligations, represent the vast majority of corporates’ assets—usually 90 percent or more (see fig. 6). At the end of 1992, total investments of corporates stood at $41.1 billion; at the end of 2003, they were reported at $65.3 billion. Since 2000, total investments of corporates have grown by 84 percent. Since 1992, corporates’ investments in U.S. Central obligations have typically accounted for approximately one-half of their total investments, the largest single investment category. The generally high proportion of investments in U.S. Central obligations reflects the “pass-through” nature of many corporates. Historically, U.S. Central has functioned as a conduit between corporates and the capital markets. Despite growth in the overall amount of corporates’ investments in U.S. Central obligations, they declined as a percentage of corporates’ total investments from 1997 to 2003. For example, they went from $15.6 billion (53 percent) at the end of 1997 to $29.2 billion (45 percent) at the end of 2003. This decline indicates that the largest corporates are investing their funds directly, rather than through U.S. Central. As shown in figure 7, in general, the largest corporates have held smaller percentages of their investments in U.S. Central obligations than smaller corporates. As investment management has increased in complexity, smaller corporates may not have had the resources necessary to develop and maintain investment capabilities internally, and U.S. Central thus was able to provide smaller corporates with these services by leveraging the efficiencies gained through its economies of scale. Despite the recent decline in the percentage of corporates’ investments in U.S. Central obligations, U.S. Central still provides substantial investment services—suggesting that the health of U.S. Central remains critically important for its members and their associated natural person credit unions. In response to the low-interest-rate environment, corporates have moved relatively more of their investments into potentially higher yielding—and more volatile—securities. In particular, corporates have increased their relative holdings of privately issued mortgage-related and asset-backed securities, which may offer higher yields for corporates relative to other investments such as government-guaranteed obligations. As illustrated in table 1, the percentage of investments in privately issued mortgage-related securities increased from 0.9 percent of total investments in 1997 to 14.1 percent in 2003. Asset-backed securities also increased relative to total investments (from 19.5 percent in 1997 to 24.7 percent in 2003). With the potentially higher yields, the corporates are also potentially increasing risk—notably interest-rate risk. This shift highlights the importance of risk monitoring and management by the corporates and NCUA. However, corporates also have shifted the composition of their investment portfolios toward more variable-rate and shorter-term securities, a strategy that tends to reduce adverse exposure to changing interest rates and thus reduces interest-rate risk. While 41.7 percent of corporates’ asset-backed securities were classified as fixed-rate at the end of 1997, 18 percent were so classified at the end of 2003. 
Since corporates’ call reports do not include weighted-average life data—the expected time that the principal portion of a security will remain outstanding (the standard calculation is sketched at the end of this discussion)—we reviewed materials from the three largest corporate credit unions, which showed that these institutions tended to hold securities with relatively short weighted-average lives, most being less than 3 years. As a result, while corporates have moved to securities that may entail additional investment risk, the largest corporates in the network appear to be managing interest-rate risk by shifting toward more variable-rate and shorter-term securities, providing a potentially better match for the relatively short-term nature of their members’ deposits. Under the revised Part 704, some corporates have been allowed to invest in lower-rated securities (down to BBB rated), which might lead to increased credit risk if these investments were not managed properly. Investments with lower credit quality tend to provide higher yields but can also expose investors to an increased likelihood that promised cash flows will not be paid. While “moving down the credit curve” (that is, investing in lower credit quality securities) potentially exposes a corporate to increased credit risk, such a strategy might not increase the overall risk for a corporate making such investments, provided the additional risk is managed appropriately. According to NCUA, this regulatory change gave corporates added flexibility with which to diversify their portfolios and reduce investment concentration. In particular, these securities could be used in an attempt to limit credit risk by lowering concentrations in certain industries or geographical areas and creating a more diversified portfolio. Also, lower-rated securities could be purchased because they carried a particularly attractive return for their credit rating or provided a good mix of credit risk and interest-rate risk given the other holdings of a corporate. According to NCUA and corporate officials, the ability to hold such lower-rated securities in their portfolios (as opposed to having to sell a security immediately if it were downgraded) might provide these institutions more flexibility in disposing of an investment that suffered a rating downgrade. Corporates would be able to hold the investment in an effort to limit realized losses rather than being forced to promptly liquidate it. Based on our review of information provided by the three corporates that have the authority to invest in these securities, as well as discussions with their officials and risk management staff, corporates have made few or no such investments. Further, officials at the three institutions indicated that they did not plan to use their authority to purchase BBB rated securities. However, it is not clear to what extent corporates will take advantage of this investment flexibility in the future, which has implications for NCUA oversight that we discuss later in this report. If corporates were to hold or invest in BBB rated securities to a greater extent, these investments might create additional risks to the corporate network if not managed properly. In general, like other financial institutions, a corporate’s vulnerability to risk depends on its overall portfolio and the amount of capital backing it. Some have suggested that corporates tend to be relatively thinly capitalized compared with other financial institutions, which may raise concerns over potential additional exposure to risk.
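As a point of reference for the weighted-average-life figures cited earlier in this section, the standard calculation (our formulation of the conventional definition) weights each principal repayment by the time at which it occurs:

\[
\text{WAL} = \frac{\sum_i t_i \, P_i}{\sum_i P_i}
\]

where \(P_i\) is the principal repaid at time \(t_i\). For instance, a security that returns half of a $100 principal at year 1 and half at year 3 has a WAL of (1 × 50 + 3 × 50) / 100 = 2 years, comfortably inside the 3-year range the largest corporates reported holding.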
Consistent with these capitalization concerns, the Department of the Treasury has cautioned that allowing corporates to invest in BBB rated securities could weaken the safety and soundness of the corporate network because the amount of capital held in the corporates might not be commensurate with the risks associated with these lower credit quality investments. NCUA has made numerous changes over the last several years to strengthen its oversight of corporates but faces challenges in such areas as conducting networkwide assessments, obtaining and using technical staff resources, developing merger guidance for corporates, and assuring the quality of corporates’ internal control structures. Specifically, NCUA established a separate office dedicated to the oversight of corporates and revised its corporate regulation (Part 704) to improve corporates’ management of credit, interest-rate, and liquidity risks. NCUA also adopted a risk-focused supervision and examination approach, and trained or hired a limited number of specialists to help oversee increasingly complex operations at corporates. However, NCUA has not put in place a system to track the resolution of deficiencies or evaluate trends in examination data and therefore may not be able to anticipate emerging issues within the network. Further, NCUA has not systematically considered certain operational risks, such as weak information system controls, when assigning specialists to examinations, which may have led NCUA to overlook certain problems or to fail to ensure that problems were corrected in a timely manner. While continued consolidation of the corporate network appears likely, NCUA has not developed merger guidance specific to corporates, and its examiner guidance has not ensured that merger proposals were assessed consistently. This inadequate guidance has increased the risk that resulting decisions may not be in the best interests of corporates or their members, or may negatively affect the safety and soundness of corporates. Also, as corporates have invested in more complex technologies and added more sophisticated products and services, the importance of NCUA’s oversight of corporates’ internal controls has increased. However, corporates are not subject to the internal control reporting requirements imposed on other financial institutions of similar size that help to ensure safety and soundness, as defined under the Federal Deposit Insurance Corporation Improvement Act of 1991 (FDICIA). This raises a question about whether NCUA has the necessary information to assess corporates’ internal controls. Since 1991, NCUA has strengthened its oversight of corporates by reorganizing staff, revising regulations, and changing its examination and supervisory focus. NCUA established the Office of Corporate Credit Unions (OCCU) in 1994, partly in response to problems with selected investments at U.S. Central. Within this new office, NCUA centralized its supervision of corporates and increased the number of examiners dedicated to supervision and examination of corporates. According to NCUA officials, prior to this change, NCUA examiners lacked adequate training and expertise to examine activities undertaken by corporates, since they spent most of their time examining natural person credit unions, whose operations generally are less complex than those of corporates. Further, in 1992, NCUA had 12 examiners dedicated to the oversight of 44 corporates and U.S. Central.
As of June 2004, NCUA had 22 examiners, plus three information systems specialists and one payments system specialist, to help oversee the 30 corporates and U.S. Central. NCUA also revised its corporate regulation (Part 704) in 1998 to increase measurement and monitoring of interest-rate, liquidity, and credit risk within the corporate network. The revisions to Part 704 were in response to the failure of Capital Corporate Federal Credit Union in January 1995 and to recommendations by GAO and others that NCUA improve its oversight of corporates. The 1998 revisions required corporates to measure and report on the impact of interest-rate and liquidity changes on their net economic value. Corporates also were required to change the methods used to calculate their investment concentration limits—moving from a calculation that used an asset base to one based on core capital (reserves and undivided earnings plus paid-in capital). Corporates could use this method to improve their management of credit risk by matching the risks associated with investment concentrations with capital, which protects corporates if investment risks lead to losses. NCUA also implemented a risk-focused supervision and examination approach in 1999 to concentrate its resources on the high-risk areas within corporate operations. Similar to the examination approach taken by other financial institution regulators, the risk-focused approach is intended, in part, to better employ examiner resources and improve examination results by emphasizing the areas of greatest risk. Under this approach, examiners have greater discretion to identify areas that require their attention and allocate their time accordingly. Further, examiners can determine when and where to employ the assistance of specialists with skills tailored to the activities of the institution as its operations become more complex. According to NCUA officials, OCCU also began to promote examiners who had experience in investments and asset/liability management to the position of capital market specialist. As of August 2004, OCCU had five capital market specialists. Additionally, NCUA’s Office of Strategic Program Support and Planning (OSPSP) had three investment specialists with private-sector financial market experience who could assist OCCU’s capital markets specialists. For example, OSPSP investment specialists participate in selected examinations of corporates that have expanded investment authorities. NCUA’s risk-focused approach has helped it identify weaknesses in corporates’ operations and require corrective actions at corporates; however, we found that NCUA did not methodically aggregate and track the resolution of deficiencies or systematically conduct trend analyses to identify recurring or networkwide issues. We have reported that sound risk-focused examination practices rely on the regulator’s ability to maintain an awareness of industrywide risk. Other depository institution regulators, such as the Office of the Comptroller of the Currency, the Board of Governors of the Federal Reserve System, and the Office of Thrift Supervision, reported that they have mechanisms in place to conduct some degree of industrywide assessment of their depository institutions. Further, the Federal Deposit Insurance Corporation (FDIC) tracks and analyzes trends in examination findings and their resolution in several ways. For example, after each examination, FDIC reviews, analyzes, and enters findings and their resolution into various databases.
In addition, FDIC gathers information on its institutions’ internal controls to report on local, regional, and national trends in bank performance and to identify activities, products, and risks that affect banks and the banking industry. Based on our review of about 100 risk-focused examinations for all corporates and U.S. Central from January 2001 through December 2003, NCUA examiners had identified deficiencies—most frequently in the areas of asset/liability management, investments, management, funds transfer, and information systems—but we could not always determine whether corporates had resolved these deficiencies. NCUA had established time frames for correcting deficiencies and procedures for corporates to take actions to address them. According to NCUA, corporates must prepare plans that specify the action needed and identify the corporate official responsible for implementing the plan. Further, NCUA reported that examiners typically verify the resolution of deficiencies during an examination or on-site supervision, actions that examiners were expected to document in the examination workpapers. Examiners assigned to subsequent examinations also were to review the deficiencies from the last examination report to see what corrective action had been implemented. NCUA reported that, as a matter of practice and depending on the severity of the deficiency, resolutions might be noted in the examination report or in the workpapers. However, after reviewing these examination reports and other NCUA oversight documents, we were unable to consistently determine whether the deficiencies NCUA had identified for individual corporates had been resolved. The executive summaries included in some examination reports noted that deficiencies from the previous year had been addressed, but this practice was not standard for all of the examination reports we reviewed. For example, only 14 of the 38 examination reports we reviewed discussed the status of deficiencies and whether they were resolved. Moreover, the corporate examiners’ guide did not stipulate that examiners should document the resolution of prior deficiencies when preparing the final examination report. NCUA officials told us that the examiner-in-charge tracked the status of deficiencies at individual institutions and reported this information in monthly examiner reports. While these reports documented the status of deficiencies, the information was not included or consolidated in monthly reports prepared for the OCCU Director or in quarterly reports to NCUA’s Board. As a result, NCUA management may have been unaware of issues related to the resolution of examination deficiencies, as the following examples show. In our review of one corporate’s examinations, we noted that its information system disaster recovery site did not meet NCUA requirements (for site location and a separate power system) for at least 3 years. The examination documentation we reviewed did not include a deficiency finding detailing the weaknesses of the recovery site. After further review, we found that the disaster recovery site had been located at the chief executive officer’s home for at least 6 years before the examination report detailed the need to replace it. At another corporate, NCUA acknowledged in the examination that the institution had not addressed information systems deficiencies related to information security for 3 years.
However, the prior year’s examination made no mention of recurring problems with information systems at this corporate. In a third case, NCUA issued a deficiency finding in the area of accounting and financial reporting for a corporate after it had submitted inaccurate data in its 5310 call reports for 13 months, exposing the corporate to financial and reputation risk. NCUA management believed that its existing examination processes and available information (such as call reports, examiner reports on corporates, internal monthly management and quarterly reports, and staff’s institutional knowledge) provided it with sufficient information to assess the adequacy and timeliness of corporates’ corrective actions. For example, NCUA officials stated that OCCU management reviews all examination reports prior to issuance, including any noted deficiencies. In the regulator’s view, this practice provides an additional layer of oversight and evaluation. Additionally, OCCU emphasized that its monthly management reports serve as a key supervision tool to assess issues, trends, and corrective action at individual corporates. Despite OCCU’s practices for coordinating and overseeing individual examinations, these practices were informal (that is, we did not identify guidance or formal operating procedures) and appeared to operate independently of one another. Additionally, these processes and practices did not constitute a system that would aggregate the number and type of deficiencies occurring at all corporates. According to NCUA officials, their current practices kept them abreast of potential overall issues affecting the network without the need for a separate system to catalogue the deficiencies. For example, OCCU has trained three corporate program specialists to support field examiners; these specialists track issues and trends in their assigned corporates and meet periodically with OCCU management to discuss issues and trends across the corporate system. NCUA officials also noted that the examination review process had identified a number of issues and trends, such as the need to address Bank Secrecy Act-related issues. However, NCUA officials also said that, at the request of their corporate program specialists, they were developing a database of deficiencies identified in examinations to better track their resolution. They did not specify the planned completion date for this database. NCUA’s current system has relied on interaction among the different offices, examiners, and specialists involved in oversight of corporates. A tracking system may have helped NCUA to identify, anticipate, or otherwise address some of the information system weaknesses we noted above. More specifically, without such a system for tracking examination findings and their resolution, NCUA’s ability to identify the extent and duration of a problem at an individual corporate is limited, which may prevent the timely resolution of deficiencies. Similarly, the lack of a tracking system that aggregates deficiencies diminishes NCUA’s ability to readily identify networkwide problems, assist examiners-in-charge in developing examination plans, and devise strategies to address issues before they become a significant safety and soundness concern.
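To illustrate what such a tracking system might capture, the sketch below is a minimal, purely hypothetical record layout and networkwide roll-up; the field names and structure are our assumptions, not NCUA’s design or the database it said it was developing.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class ExamDeficiency:
    """One examination finding, tracked from identification to resolution."""
    corporate: str                   # institution where the finding was cited
    area: str                        # e.g., "information systems", "ALM", "funds transfer"
    identified: date                 # examination date when first cited
    description: str
    resolved: Optional[date] = None  # None while the finding remains open

def open_findings_by_area(findings):
    """Aggregate unresolved findings networkwide -- the kind of view that
    would flag recurring or pervasive issues across corporates."""
    counts = {}
    for f in findings:
        if f.resolved is None:
            counts[f.area] = counts.get(f.area, 0) + 1
    return counts
```

Even a roll-up this simple would show, for example, an information systems finding left open across several consecutive examinations, the pattern described in the disaster recovery site example above.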
NCUA has not systematically considered corporates’ risk management (both quality and capacity) when allocating resources and scheduling specialists for examinations. Federal depository institution regulators, under the auspices of the Federal Financial Institutions Examination Council (FFIEC)—of which NCUA is a member—have issued guidance on how to systematically determine when to assign specialists to an examination. In addition, FDIC and the Office of the Comptroller of the Currency (OCC) have issued specific guidance on the frequency with which specialty examinations should be conducted. For example, OCC’s guidance requires that information systems examinations be conducted consistently, at least every 12 to 18 months, for community banks with assets of less than $1 billion, with a minimum objective of assessing the quantity of transaction risk and the quality of risk management, including staff capacity and skills. By contrast, NCUA has not established a minimum level of involvement of specialists in examinations. Since NCUA had identified a number of information systems problems at the corporates, and since we were concerned that NCUA had relatively few specialists in this area, we reviewed FFIEC’s Information Systems Examination Handbook, which provides a process by which regulators can determine when and where to employ information systems specialists. We used this handbook to assess how NCUA deployed examiners relative to the best practices and guidance the handbook exemplifies. According to the handbook, information systems examiners must judge the risk posed by the quantity of transactions and the quality of the institution’s risk management. Assessing aggregate risk allows examiners to weigh the relative importance of both factors for a given institution and to direct the activities and resources of the regulator’s supervisory strategy. Under this approach, a smaller corporate with a low volume of transactions and weak risk management could pose a risk to the network equal to that of a large corporate with a high volume of transactions and a strong risk management program (a stylized rendering of this judgment follows below). We reviewed examination-planning documents for all 31 corporates to determine how NCUA evaluated these risks when determining the frequency with which specialists would be assigned to examinations. We found that NCUA documented its assessment of operations risk in these planning documents but did not explicitly discuss the quality of risk management for various functions and operations when determining whether specialists should be assigned to examinations. For example, in some cases the examination-planning documents provided only a single line stating that an information systems specialist was not needed on the next examination and did not document the reason for this assessment. Not only were the planning documents unclear about NCUA’s decision process for assigning specialists to examinations; it also was not clear to us whether NCUA routinely or consistently considered various operational weaknesses at these corporates when assigning specialists. An external review of OCCU in 2002 concluded that NCUA’s complement of two information systems specialists and one payment system specialist did not appear sufficient to adequately oversee the corporate network and that OCCU should consider hiring additional specialists in these areas. NCUA believes it has sufficient specialists to examine the 31 corporates; however, it has made this determination without fully assessing the corporates’ business environment and networkwide challenges.
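The sketch below renders that two-dimensional judgment in code. The two inputs reflect the FFIEC handbook’s approach as described above; the numeric scoring scale and thresholds are our illustrative assumptions, not the handbook’s.

```python
def aggregate_risk(transaction_volume: str, risk_management: str) -> str:
    """Combine the quantity of transaction risk with the quality of risk
    management into a single aggregate rating."""
    volume = {"low": 1, "moderate": 2, "high": 3}[transaction_volume]
    management = {"strong": 1, "acceptable": 2, "weak": 3}[risk_management]
    score = volume * management  # weak management amplifies any volume of risk
    if score >= 6:
        return "high"
    if score >= 3:
        return "moderate"
    return "low"

# A small corporate with weak controls ranks alongside a large corporate
# with strong controls, the equivalence the handbook's approach implies:
assert aggregate_risk("low", "weak") == aggregate_risk("high", "strong")
```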
NCUA tended to assign specialists to the examinations of larger corporates or corporates implementing newer systems, and less often to examinations of smaller corporates or of established systems at large corporates. According to NCUA, specialists had limited or no involvement in examinations at the 12 smallest corporates (those with assets of less than $1 billion) from 2001 to 2003. During the same period, specialists were involved annually in the examination of the eight corporates with assets above $2.6 billion. While this approach appears reasonable, the limited involvement of specialists under such circumstances may have contributed to important information system weaknesses at corporates going unidentified by NCUA or not being promptly corrected. For example, U.S. Central’s ACH software had deficiencies that led to a system failure that delayed payments to customers of 2,200 natural person credit unions for nearly 2 days. According to NCUA, information systems specialists had reviewed the system’s performance in prior examinations. However, because the ACH software was mature and U.S. Central staff was monitoring its performance, NCUA did not consider it a high-risk component of U.S. Central’s operations and had not reviewed it recently. As a result, weaknesses in its backup procedures and routine maintenance, insufficient capacity (noted in prior examinations but not satisfactorily resolved), and other deficiencies that resulted in the outage were not corrected. NCUA has stated that it is reviewing its procedures to determine whether such systems should receive a minimum level of review. NCUA has also acknowledged that the ACH delay resulted in financial loss and increased reputation risk to the corporate network. NCUA also faces other obstacles to conducting more systematic evaluations of risk—both in assuring that corporates have the capacity for managing risks and in ensuring that, as a regulator, it has the staff to assess the quality and operations of corporates’ risk management functions. As noted previously, corporates have been operating in a challenging investment environment, with additional authorities to make lower-rated investments. Consequently, the quality of risk management at corporates has grown in importance. While the results of our review of the risk management function at the three largest corporates suggested that these corporates were taking appropriate steps to assess and mitigate their risks, these corporates had a relatively small number of staff in their risk management functions. More specifically, the largest corporates and U.S. Central were using sophisticated financial models to assess and manage interest-rate, credit, and liquidity risks. But a rating agency and an external auditor have expressed concerns about small staff sizes and their effect on these corporates’ continued ability to evaluate and manage risks and to undertake succession planning should key staff leave. The loss of any such staff at a corporate could hamper its ability to undertake the sophisticated analyses needed to evaluate risks. The thinness of corporates’ risk management staffs indicates that NCUA should routinely assess corporates’ investment risk. However, as we noted earlier, NCUA also has a limited number of specialists to conduct comprehensive evaluations of risk management at corporates.
Given that the risk-focused approach allows judgment in assigning resources to the areas of greatest concern, the thinness of corporates’ risk management staffs, in combination with the limited number of specialists at NCUA, suggests that continued attention to corporates’ investment strategies may help to ensure that corporates are adequately performing their risk management functions. This may require NCUA to reassess its staffing levels and consider the costs and benefits of adding examiners or specialists to adequately monitor and oversee the growing complexity of corporates’ operations. As part of its regulatory authority to ensure the safety and soundness of corporates, NCUA reviews and approves corporate merger applications. Officials from some corporates, NCUA, and trade organizations indicated that consolidation in the network—as a result of mergers—would likely continue over the next several years. However, with more mergers likely, NCUA has not developed specific guidance for corporates preparing merger proposal packages. In contrast, NCUA has issued guidance for natural person credit unions that provides step-by-step instructions for completing the merger process, and NCUA refers corporates to this guidance. NCUA has recognized that this guidance may be insufficient for corporates. In its guidance to examiners, who are responsible for evaluating merger proposal packages, NCUA has suggested that capital ratios unique to corporates, defined in Part 704 of NCUA’s Rules and Regulations, were more appropriate than the probable asset share ratio applicable to natural person credit unions. However, in the guidance on mergers available to corporates on its Web site, NCUA has not indicated that corporates need to include this information. In our review of five merger packages recently approved by NCUA, we found that three were initially submitted without the corporate capital ratios defined in Part 704. These packages required revision or additional analysis by the corporate and NCUA before they could be approved, slowing the approval process. Other regulators, such as OCC, have provided banks applying for mergers with detailed guidance that lists the specific data needed for evaluation and describes the regulator’s merger review process. OCC has stated that this approach is intended to avoid misunderstandings and unnecessary delays in the approval process. NCUA officials told us they considered several factors when approving corporate mergers, such as consolidated budgets and plans for converting and consolidating information systems, that are not discussed in the natural person credit union guidance. However, only one of the five merger proposals we analyzed was submitted with this additional information. Other corporates’ proposals required revisions or were approved without the additional information being provided. One corporate stated that it believed the merger process could be improved and made less cumbersome if NCUA provided clearer or more specific guidance for corporates. Finally, NCUA’s guidance did not explicitly discuss how the effects of competition should be considered when approving corporate mergers, which may become more of an issue as the network continues to consolidate and corporates increasingly compete with each other or with other financial institutions.
As corporates react to a competitive environment by investing in technology and offering more products and services, NCUA’s oversight of internal controls at corporates becomes even more critical. However, corporates with assets over $500 million are not required to report on the effectiveness of their internal controls for financial reporting. Under FDICIA and its implementing regulations, banks and thrifts with assets over $500 million are required to prepare an annual management report that contains:

- a statement of management’s responsibility for preparing the institution’s annual financial statements, for establishing and maintaining an adequate internal control structure and procedures for financial reporting, and for complying with designated laws and regulations relating to safety and soundness; and

- management’s assessment of the effectiveness of the institution’s internal control structure and procedures for financial reporting as of the end of the fiscal year, and of the institution’s compliance with the designated safety and soundness laws and regulations during the fiscal year.

Additionally, the institution’s independent accountants are required to attest to management’s assertions concerning the effectiveness of the institution’s internal control structure and procedures for financial reporting. The institution’s management report and the accountant’s attestation report must be filed with the institution’s primary federal regulator and any appropriate state depository institution supervisor, and must be available for public inspection. These reports allow depository institution regulators to gain increased assurance about the reliability of financial reporting. The reporting requirement for banks and thrifts under FDICIA is similar to the reporting requirement included in the Sarbanes-Oxley Act of 2002. Under Sarbanes-Oxley, public companies are required to establish and maintain adequate internal control structures and procedures for financial reporting. In addition, a company’s auditor is required to attest to, and report on, the assessment made by company management of the effectiveness of internal controls. As a result of FDICIA and Sarbanes-Oxley, reports on management’s assessment of the effectiveness of internal controls over financial reporting, and the independent auditor’s attestation on management’s assessment, have become a normal business practice for financial institutions and many companies. While NCUA has issued a letter to corporates indicating that selected provisions of the Sarbanes-Oxley Act of 2002, including the provision on internal control reporting standards, may be appropriate to consider, NCUA has not mandated that corporates adopt this standard. Given that other depository institutions of similar size are required by FDICIA to adhere to these internal control reporting requirements to help ensure safety and soundness, NCUA’s lack of such a requirement for corporates raises the question of whether NCUA has the necessary information to adequately assess corporates’ internal controls. This assessment has become more important as corporates’ operations have grown in complexity due to their changing investment strategies, investments in technology, and introduction of new products and services. Increased competition both inside and outside of the credit union system has challenged corporates to explore new technologies and introduce more products and services to retain their members.
Increased competition and large fluctuations in deposit inflows and outflows, in combination with low interest rates, have created potential stress on the financial condition of corporates and U.S. Central. While corporates’ assets have increased rapidly, their ability to increase earnings has remained constrained. As a result, corporates have increased their investments in privately issued mortgage-related and asset-backed securities, which can increase returns but require more sophisticated analysis and monitoring. The change in corporates’ investment profile is another indication of the growing complexity of their operations. Because some corporates are now allowed to invest in lower-rated securities (although few have done so), increased risks could enter the system if these investments are not managed properly. These changing operating and investment environments increase corporates’ potential vulnerability to financial stresses and require continued attention from corporates and their regulator, NCUA, to risk-assessment and monitoring strategies. NCUA has made strides in strengthening its oversight of corporates, particularly with the adoption of a risk-focused approach, certain regulatory changes, and the hiring or training of specialists in information and payment systems and capital markets. We believe these actions have helped NCUA to more effectively oversee corporates. However, based on the issues we identified, we believe NCUA should do more to anticipate and address emerging network issues. In particular, a tracking system used in conjunction with other measures, such as information from its management and call reports, could provide timely and significant information that would help ensure that NCUA’s risk-focused approach addresses individual as well as networkwide risks. The relatively small number of specialists, during a time of increased competition and growing complexity in corporate operations, raises additional concerns, since NCUA has not systematically incorporated specialists into the planning of risk-focused examinations or tracked recurring or pervasive issues throughout the network. We believe this makes it difficult for NCUA to determine the number and type of specialists needed or to anticipate problems in order to adequately monitor and oversee the corporate network. Further, with continued consolidation in the network, NCUA’s guidance is inadequate to ensure that examiners consistently evaluate proposed corporate mergers. Without sufficient guidance for corporates and examiners, NCUA lacks assurance that decisions on corporate mergers are consistently made using appropriate criteria and information, or that these decisions are made in the best interests of members and NCUSIF. We believe that corporates and NCUA examiners would benefit from better guidance, since consolidation through mergers is likely to continue. The growing complexity of corporates’ operations and the products they have introduced also raises important concerns about whether NCUA can ensure that corporates’ internal controls, which are central to monitoring operations and risk management, are properly assessed and monitored. However, NCUA has not required corporates to follow the same internal control reporting requirements (defined under FDICIA) as other financial institutions that face similar risks.
Finally, the changing profile of the industry introduces both greater opportunities and greater challenges for NCUA, as the regulator of these institutions, to achieve a balance that ensures the network’s ability to introduce beneficial changes and properly manage its risks. To promote a more systematic and consistent approach in NCUA’s oversight of corporates and to ensure they are safely providing financial services to natural person credit unions, we recommend that the Chairman of the National Credit Union Administration take the following five actions:

- Establish a process and structure to ensure more systematic involvement of specialists in identifying and addressing problems and in developing and consistently applying policies, and reassess whether there are sufficient specialists to oversee corporates.

- Track and analyze examination deficiencies on a networkwide basis to identify and track recurring and pervasive issues throughout the network and to ensure that corporates take required corrective actions.

- Pay increased attention to oversight of corporates’ risk management functions to ensure corporates have sufficient capacity and skills to monitor and manage their risks.

- Provide specific guidance to corporates for merger proposal packages to ensure they provide sufficient and relevant information, and improve guidance to examiners to ensure that merger proposals are reviewed consistently and meet the goals of serving members while not placing NCUSIF at undue risk.

- Require corporates with assets of $500 million or more to be subject to the internal control reporting requirements of the Federal Deposit Insurance Corporation Improvement Act of 1991 to ensure that corporates are held to the same standards as other financial institutions that face similar risks.

We requested comments on a draft of this report from the Chairman of the National Credit Union Administration. We received written comments from NCUA that are summarized below and reprinted in appendix VIII. In addition, we received technical comments from NCUA that we incorporated into the report as appropriate. NCUA stated that it concurred with most of the assessments and conclusions contained in the report and plans to take actions to implement all but part of one of our recommendations. Specifically, NCUA concurred with the report’s assessment that corporates are operating in an increasingly challenging and competitive environment. In its comments on a draft of this report, NCUA stated that its changes to the corporate rule, made in response to the dynamic financial marketplace, functioned as intended, permitting the corporates’ balance sheets to expand and contract, sometimes rapidly, depending on liquidity levels in credit unions, while not compromising safety and soundness. NCUA agreed that the influx of deposits, combined with decreasing interest rates, had strained profitability and resulted in lower capital ratios. However, NCUA did not agree with the report’s assessment that paid-in capital and membership capital are “weaker forms of capital.” NCUA restated its requirements for these two forms of capital and noted that, as stated in the report, both paid-in capital and membership shares are available to cover losses that exceed retained earnings, are not insured by NCUSIF, and cannot be pledged against borrowings.
While we agree with NCUA's statements, as further discussed in the report, we remain concerned that both forms of capital come from external sources and are less permanent than retained earnings, therefore providing a relatively weaker cushion against adverse financial events.

In commenting on corporates' investments, NCUA believed that the slight potential increase in credit risk exposure due to the 2002 rule change permitting corporates to purchase securities with lower credit quality is more than offset by the rule's decrease in exposure to credit concentration risk. Additionally, NCUA is of the opinion that the rule's "modest expansion" of permissible investment-grade securities, combined with its reduction in credit concentration limits, results in a stronger corporate network, with corporate management better positioned to compete, within prudent safety and soundness thresholds, than under the previous rule. NCUA also pointed out in its comments that, as of June 30, 2004, 97 percent of the network's rated long-term securities were rated AAA. Based on the high quality and diversification of the network's investments, NCUA believes credit risk is minimal. NCUA stated that it has addressed controlling interest-rate risk in the corporate rule and that, in its assessment, the interest-rate risk of the network's investment portfolio is minimal. We acknowledged in our report that corporates have made few or no investments in BBB rated securities and that they indicated they did not plan to use their authority to purchase such investments. However, it is not clear to what extent corporates will take advantage of this investment flexibility in the future, which has implications for NCUA's oversight, especially given the thinness of risk-management staff at corporates. Further, we share Treasury's concerns that allowing corporates to invest in BBB rated securities could weaken the safety and soundness of the corporate network because the amount of capital held in the corporates might not be commensurate with the risks associated with these lower credit quality investments.

While NCUA concurred with the report's recommendation on the need to provide corporates with specific merger guidance to facilitate the regulatory review process, NCUA did not concur with the report's conclusion that improved guidance to examiners is needed to ensure mergers meet the goals of serving members while not placing NCUSIF at undue risk. NCUA stated in its comments that it has adequate procedures in place and that every corporate merger package prepared by OCCU is reviewed by NCUA's Office of General Counsel prior to being presented to the NCUA Board for action. As stated in the report, NCUA officials told us that they considered several factors when approving corporate mergers, such as consolidated budgets and conversion plans for information systems, that NCUA has not discussed in its guidance for natural person credit unions. However, we found that only one of the five merger proposals we analyzed was submitted with this additional information and, therefore, we do not believe that NCUA's guidance to examiners was sufficient to ensure that examiners consistently evaluate corporate mergers. As stated in the report's conclusions, without sufficient guidance, NCUA lacks assurance that decisions on corporate mergers are consistently made using appropriate criteria and information or that these decisions are made in the best interests of members and NCUSIF.
While clear criteria and consistency in review are important, improving examiner guidance for mergers is also necessary to help protect against forbearance on the part of NCUA.

As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from its issuance. At that time, we will send copies of the report to the Chairman of the Senate Committee on Banking, Housing, and Urban Affairs; the Chairman and Ranking Minority Member of the House Committee on Financial Services; and interested congressional committees. We also will send copies to the National Credit Union Administration and make copies available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me at (202) 512-8678 or hillmanr@gao.gov or Debra R. Johnson at (202) 512-9603 or johnsond@gao.gov. Key contributors are acknowledged in appendix IX.

Our report objectives were to (1) assess the changes in financial condition of corporate credit unions (corporates) since 1992 and (2) assess the National Credit Union Administration's (NCUA) supervision and oversight of corporates, particularly with regard to how it identifies and addresses safety and soundness issues in the industry. To assess the changes in the financial condition of corporates since 1992, we analyzed corporate credit union call report data, which include balance sheet and income statement data for corporates. Our analysis, based on Forms 5300 and 5310 data supplied by NCUA, included calculating descriptive statistics and key financial ratios and describing trends in financial performance and the structure of the industry. The information included Form 5300 data from the end of 1992 through the end of 1996 and monthly Form 5310 data from January 1997 through December 2003. Our analysis relied upon selected balance sheet and income statement data, such as assets, shares, investments, capital, net economic value (NEV), and various income measures and ratios that are commonly used to assess the financial condition of financial institutions. The transition in 1997 from Form 5300 (still used by natural person credit unions) to Form 5310, which is specifically designed for corporates, entailed numerous changes in reporting. Furthermore, significant regulatory changes, effective in 1998, also resulted in numerous changes to the information reported on Form 5310 for 1998. Overall, these changes resulted in the deletion of some items from the financial reports and the addition of others. Consequently, in some cases the data were not comparable across time. For example, NEV, which is a measure of interest-rate risk, was added to Form 5310 in 1998; thus, we were only able to conduct analysis on this measure from 1998 to 2003. In our prior report on natural person credit unions, we reviewed NCUA's procedures for verifying the accuracy of the Form 5300 database and found that the data were verified on an annual basis, either during the credit union's examination or through off-site supervision. We determined that the data were sufficiently reliable for the purposes of this report. We also performed a data reliability assessment on data from January 1997 through December 2003 for Form 5310, which involved electronic testing of the data and obtaining information from NCUA on its data verification procedures.
We found that the data were verified for accuracy on a monthly basis and determined that the data were sufficiently reliable for the purposes of this report. To augment our analysis and obtain a more comprehensive assessment of corporates' financial condition and risks, we reviewed internal corporate credit union financial analysis reports from selected corporates, independent evaluations of corporate risk controls and models, and external studies of the industry from major rating agencies, such as Fitch, Moody's, and Standard and Poor's. We also met with selected NCUA examiners and risk management staff at corporates to better assess how corporates were managing their risks. In addition, we reviewed internal documents and analyses dealing with risk monitoring and control from several corporates in order to assess how well these corporates could assess and manage risk.

To assess how NCUA's supervision of corporates identifies and addresses safety and soundness issues, we conducted a review of key legislative and regulatory changes affecting corporates since 1992. We reviewed NCUA documentation on its risk-focused program, including NCUA examination reports, their corresponding 3-year plans, and the Office of Corporate Credit Unions (OCCU) management reports for all 31 corporates for 2001-2003. We conducted interviews with OCCU management and with OCCU examiners-in-charge for 10 corporates. In addition, we visited seven corporates. We developed a structured questionnaire for all 31 corporates to solicit their views on the challenges that individual institutions and the collective corporate network faced. We reviewed past GAO and U.S. Department of the Treasury reports on corporates and NCUA, internal reviews of OCCU, and an external review of OCCU performed by an outside auditing firm. We also contacted officials from the Federal Deposit Insurance Corporation (FDIC), the Office of Thrift Supervision (OTS), the Office of the Comptroller of the Currency (OCC), and the Board of Governors of the Federal Reserve System. Lastly, we interviewed trade association officials.

As part of our legislative review, we reviewed the Federal Credit Union Act to determine the legislative authority for corporates and NCUA's Part 704, which is the primary regulation governing corporates. Specifically, we reviewed the Federal Register for all changes made to Part 704 since 1992 to understand the rationale behind these changes. We also obtained summaries from NCUA, which provided its rationale for the changes and brief descriptions of the changes to specific sections of Part 704. To assess NCUA's documentation for its risk-focused program, we reviewed NCUA's Corporate Examiner's Guide, which describes the policies and procedures under which examiners are to implement the risk-focused program. The guide describes procedures for off-site monitoring, on-site examinations, information required in an examination report, and coordination with state supervisory authorities for corporates that have a state charter. Also, as part of our assessment of NCUA's risk-focused examination program, we reviewed about 100 examinations for the 31 currently operating corporates, corresponding 3-year plans, and OCCU monthly management and quarterly reports for the period January 2001 through December 2003. For the review of examinations, we developed a data collection instrument (DCI) to collect 3 years' worth of information for each of the 31 corporates.
The DCI enabled us to aggregate examination findings that appeared in a large number of corporates over the time period reviewed and that could represent potential networkwide issues because of their prevalence or persistence. Examples of findings identified by NCUA in the various examination areas included errors or problems associated with 5310 reporting, accounting procedures, asset/liability management, Bank Secrecy Act compliance, contingency planning, corporate governance, credit analysis, funds transfer, information systems, interest-rate risk, investment, lending, and management. The 3-year plans included information on the last examination and financial profiles, such as daily average net assets (DANA), capital ratios, and net economic value (NEV). These plans also contained the supervision type of the corporate, supervision plans, Corporate Risk Information System (CRIS) ratings, and, in some cases, requests for information system, payment system, or capital market specialists for the next examination or supervision contact. The OCCU monthly management reports covered areas such as OCCU's administration news, trends in corporates, significant problem case corporates, other significant program issues, miscellaneous corporate information, information on internal or external affairs, board action items, and the next month's calendar. The quarterly reports provided a brief update of events since the previous report, a summary of corporate network trends, the current status of e-commerce in corporates, specific discussions of 20 percent to 50 percent of individual corporates, and the future outlook for corporates and OCCU during the next quarter and beyond.

We met with OCCU management to follow up on questions generated from our review of the examinations, 3-year plans, and OCCU monthly management and quarterly reports. We also selected a judgmental sample of 10 corporates from which to gather additional information about NCUA oversight. These corporates were selected based on asset size, geographic location, charter type, level of expanded investment authority, and significant findings in the examinations. We obtained NCUA's most recent examiner workpapers for these 10 corporates to review how NCUA supported its findings. We also met with the examiner-in-charge and, when possible, capital market specialists for the 10 corporates to better understand their approach to examining the corporate credit union and the support and rationale for some examination findings. In addition, we visited 7 of the 10 corporates to observe and discuss their operations, risk management practices, and interactions with NCUA. We selected these 7 based on their asset size, geographic location, type of charter (state or federal), and whether they had expanded investment authorities. We interviewed senior management and some board and supervisory committee members. We asked structured questions of officials from various departments within the corporates, including investments, risk management, accounting, internal audit, external audit, information systems, and product support. We obtained policies and procedures for various areas within the corporates, including investments, lending, and risk management. We also obtained documentation packages, which were submitted to the asset/liability committees of some of the corporates we visited, to review investments and their impact on risk within the corporates.
We also observed corporates' physical environment to determine the types of safeguards that were in place, particularly for information technology.

We developed a structured questionnaire to collect information from the corporate network that focused on corporates' perspectives about various components of the industry. We pretested the questionnaire with one of the largest corporates and, based on numerous meaningful observations about our original version, made refinements. We administered the structured questionnaire to the entire population of active corporates (as of December 31, 2003) as shown in appendix II. Appendix III includes a copy of our structured questionnaire, and appendix IV includes responses to the majority of questions in the questionnaire. The Association of Corporate Credit Unions (ACCU) oversaw the distribution of our structured questionnaire to its 30 corporate members. We administered the questionnaire to the one corporate that is not an ACCU member. The questionnaires were sent by e-mail at the end of March 2004. We received all responses to our questionnaire by mid-May 2004 and achieved a 100 percent response rate. We conducted follow-up telephone interviews with numerous corporates to obtain clarification on some of their responses. Our questionnaire covered the following areas: products and services that corporates offer to their natural person credit union members, corporates' stakes in Credit Union Service Organizations (CUSO), corporate investments with U.S. Central, regulatory changes and their impacts on corporates' operations, the effects of the risk-focused approach on corporates, corporates' fields of membership, challenges corporates face and their responses to these challenges, and corporates' immediate and future merger plans. We analyzed the results by summarizing responses or providing simple statistics (for example, range, median, and average) for most of the quantitative questions. Specifically, we conducted quantitative analysis on questions 1, 3, 4, 5, 7, 7a, 11, 12, 13, 14, 17, and 21. We performed content analysis on most of the responses to the qualitative questions. Specifically, we conducted content analysis on questions 8, 9, 10, 16, 19, 20, 21a, and 22. The results of our analysis for most of the questions are presented in appendix IV.

To gain a better understanding of the challenges and problems NCUA has faced in overseeing corporates, we reviewed past GAO and U.S. Department of the Treasury reports on corporates and NCUA. These reports also provided recommendations for NCUA to improve its oversight. Additionally, we reviewed internal NCUA reviews of OCCU. These reviews are conducted about every 3 years by OCCU's Director and staff from outside OCCU, who review OCCU's operations and suggest improvements. Similarly, NCUA has contracted with an outside party to review OCCU's operations, and this party also has provided recommendations on improvements in OCCU management and oversight. OCCU's last external review was completed in 2002. We interviewed officials from the Department of the Treasury and academia who had studied corporates. To obtain information on the experiences of other depository institution regulators with the risk-focused examination and supervision approach, we obtained written responses from officials at FDIC, OTS, OCC, and the Board of Governors of the Federal Reserve System.
Finally, to obtain perspectives on the business environment confronting the corporate network and corporates' responses to a changing environment, we interviewed trade association officials from ACCU, the National Association of Federal Credit Unions, including its board of directors, and the National Association of State Credit Union Supervisors. We conducted our work from December 2003 to September 2004 in Alexandria, Virginia; Washington, D.C.; and other U.S. cities in accordance with generally accepted government auditing standards.

We distributed the following questionnaire to the entire network of corporates in the United States, including both federally and state-chartered institutions, and achieved a 100 percent response rate. (Appendix II lists the 31 corporates active as of December 31, 2003, and whether they are federally or state-chartered.) The questionnaire has three sections: products and services, regulatory changes, and challenges facing corporates. The first section addresses the types of products and services offered by corporates to their members, the issues they faced regarding competition, the types of investment authorities corporates had or sought, and the extent of their investments with U.S. Central Corporate Credit Union. The second section addresses various regulatory issues, such as what regulatory changes affected the institution, a description of the corporate's field of membership (for example, whether it had a national field of membership), and its perception of NCUA's risk-focused supervisory approach. Finally, the questionnaire solicits the opinions of corporate managers on the future issues the corporate credit union industry faces. Appendix IV contains selected responses to the questionnaire.

The U.S. General Accounting Office (GAO), an independent agency of the U.S. Congress, has been asked to review the National Credit Union Administration's (NCUA) oversight of corporate credit unions on behalf of the Senate Banking Committee. As part of this review, we are collecting information from the corporate credit union network. We are sending this questionnaire, via the Association of Corporate Credit Unions, to each of the 31 corporate credit unions in order to reflect corporate credit unions' perspective in our study. The questionnaire asks substantive questions and is largely fact-based, for the purpose of allowing GAO to accurately describe the current makeup of the corporate credit union network. In addition, the survey asks for the opinion of the management of your corporate credit union on the future issues that face the industry. Those questions are included to ensure that the issues covered in our report reflect the perspective of the corporate network. The questionnaire should be answered by the official (or officials) most familiar with the corporate credit union's operations. Specifically, our review is focused on examining the following three questions: (1) How has the financial condition and function of corporate credit unions changed over time, and what have been the effects of that change; (2) To what extent does NCUA's supervision of corporate credit unions identify and address safety and soundness issues in the industry; and (3) What challenges do corporate credit unions face, and what actions are they taking to address these challenges. Your responses and all company information you provide will be treated in a manner that protects your privacy and that of the corporate credit union.
Responses will be reported in aggregate and therefore will not be used in any way that would identify you or your corporate credit union. Please return your completed questionnaire to GAO by April 15, 2004. The response should be submitted electronically as a Microsoft Word file attachment sent to May Lee at Leem@gao.gov, or, if you like, we can make other arrangements for your submission. Please call May Lee at (415) 904-2182 to make such arrangements. If you have any questions about this survey or the GAO study, please contact José R. Peña, Analyst-in-Charge, at (415) 904-2268 or e-mail him at penajr@gao.gov. Thank you for your participation.

I. Products, Services, and Investments

1) Please complete the following table describing the products and services that your corporate credit union offers. [For each product or service listed, respondents indicated (Yes or No) whether it was offered and whether they planned to do so in the next five years or next two years. The listed items included D. Education and Training (Marketing; Online Training; Rates/Events Updates/News); E. E-Services (ACH Services; Cash Concentration; Check Imaging; Electronic Bill Payment; Funds/Wire Transfers (In & Out); Internet Banking; Website Design and Management); F. Funds Management/Financial Services (Account Management & Statements; Amortizing Certificates (ACP); Asset/Liability Management; Brokerage Services; CDs; Corporate Checking Accounts; Derivatives Hedging; Dividend Earning Accounts; Investment Advisory; Loans/Lending; Members' Business Accounts; Members' Capital Accounts; Money Market Accounts; Open-End and Term Credits; Overnight/Cash Management Accounts; Savings Bonds; Share Certificates (all kinds)); G. Miscellaneous (Cash Letter Credit; CU Service Settlement; CUNA Service Group Settlement; CUNA Mutual Services; Reverse Purchase Program; Securities Safekeeping); and H. Other (Specify).]

2) How are your member capital accounts structured?

3) By size of assets, how many of your members are NPCUs, CCUs, or Others? (Enter the number of your corporate members in each cell; if none, enter "0". Size categories included >$10M but <$100M and >$100M but <$1 billion.)

4) What percentage of your various lines of business are from the following natural person (or corporate) credit union member size categories? (Enter percent by size for each line of business. Percentages should total 100%.)

5) Has your natural person (or corporate) credit union members' use of services increased (I), stayed the same (S), or decreased (D) since CY1992, as broken out by member size? (Select a letter for each cell in the table: I = increased use of the service; S = same use of the service as before 1992; D = decreased use of the service; NA = not applicable.)

6) What do you think are the reasons for any changes in your members' use of your services?

7) How many CUSOs does your corporate credit union have a stake in? (Check one: 0 (skip to question 8); 1; 2; or 3 or more.)

7a) If 1 or more, please complete the following table and continue to question 7b.

7b) How do the services that these CUSOs provide differ from the services the corporate credit union provides?

8) In what ways does your corporate credit union face competition from outside the corporate credit union network? (If no external competition, skip question 8a.)
8a) What are the sources of that competition, and which products and services are most affected?

8b) To what extent do you expect competition to continue to increase in the next several years?

9) In what ways have you felt competition from other corporate credit unions increase in the last five years?

10) To what extent do you expect competition from other corporate credit unions to continue to increase in the next several years?

11) What investment authorities do you currently have? (Check all that apply: Base; Base Plus; Part I; Part II; Part III; Part IV; Part V; Other (specify).)

12) Do you plan on asking for more authorities in the next two years? If yes, which ones? (Check all that apply: Base; Base Plus; Part I; Part II; Part III; Part IV; Part V; Other (specify).)

13) What percent of your investments are made through U.S. Central? (Enter percent of investments with U.S. Central.)

14) Which of the following categories best describes your planned use of U.S. Central for investment purposes in the next two years? (Check one: To do less with U.S. Central; to do the same with U.S. Central; to do more with U.S. Central; or do not currently invest in U.S. Central and do not plan to do so in the next two years.)

15) Which regulatory changes, including changes to Part 704, have had the most impact on your corporate credit union?

15a) Please describe how those changes have impacted your corporate credit union's operations.

16) Please describe the effect of NCUA's more risk-based approach to corporate credit union examination on your corporate credit union.

17) Do you have a national field of membership? (Check one: Yes or No.)

18) How many NPCU members are within your corporate field of membership in each of the following states? (If other than "0", enter the actual number of member NPCUs in each state.)

19) What are the primary challenges facing your corporate credit union as it attempts to maintain and/or grow its membership in the next two years?

20) What is your corporate credit union doing to address these challenges?

21) What are your corporate credit union's current merger plans? (Check one: In discussions to merge with another corporate credit union; likely to merge with another corporate credit union in the next two years; will consider a merger with another corporate credit union in the next two years; will not consider a merger in the next two years; or uncertain.)

21a) If you checked box 1, 2, or 3 in the previous question (#21), please describe the factors that contribute to your corporate credit union's decision to consider a merger.

22) What are the three major challenges currently facing the corporate credit union industry as a whole?

23) Please indicate the name, title, and phone number of the person(s) mainly responsible for filling out this questionnaire.

24) Please indicate the name, title, and phone number of the person GAO staff should contact if GAO has follow-up questions.

25) Would you like to discuss any of the above questions in further depth, or anything else related to our review of the corporate credit union industry, with GAO? (Check one: Yes or No.) If yes, please provide the name, title, and phone number of the person to contact to schedule a follow-up conversation.

Thank you for completing the survey. (Save your completed survey as an MS Word document and send it as an attachment to May Lee at Leem@gao.gov.)

As noted in appendix III, we distributed a questionnaire to the entire network of corporates.
This appendix provides responses to the majority of questions posed in the questionnaire (see questions 1, 3-5, 7-14, 16-17, and 19-22). This information was analyzed in the aggregate to prevent specific responses from being associated with an individual institution. The information included in this appendix is based on the responses of 31 corporates, unless otherwise indicated.

[Question 1: Products and services corporate credit unions offer. For each product or service, corporates indicated whether it was offered and whether they planned to do so in the next five years or next two years. Listed items included ATM Cards (Issuing), ATM Settlements/Terminal Driving, and Debit Cards (Issuing); B. Check Services (Check Loss Reduction; Money Order Settlement; Share Draft Processing; Statement Prep and Mailing; Travelers Check Settlement); C. Correspondence Services (Coin and Currency; Federal Reserve Settlement; Foreign Check Collection; Foreign Currency Conversion); D. Education and Training (Rates/Events Updates/News); E. E-Services (Electronic Bill Payment; Funds/Wire Transfers (In & Out); Website Design and Management); F. Funds Management/Financial Services (Account Management & Statements; Amortizing Certificates (ACP); Corporate Checking Accounts; Members' Business Accounts; Members' Capital Accounts; Money Market Accounts; Open-End and Term Credits; Overnight/Cash Management Accounts; Share Certificates (all kinds)); and G. Miscellaneous (Cash Letter Credit; CU Service Settlement; CUNA Service Group Settlement; CUNA Mutual Services; Reverse Purchase Program; Securities Safekeeping). Under H. Other, respondents wrote in services including ATM/Pay Card Services, CDs, Consulting Services, Fraud Protection Products, Payment and Technology Products, Share Draft Electronic Image Exchange, and Transit Return Processing. Responses displayed are as reported by corporates.]

[Question 5: For each service, corporates reported whether members' use had increased (I), stayed the same (S), or decreased (D) since calendar year 1992, or whether the service was not applicable (NA). Across services, reported counts (of 31 respondents) ranged from I = 5 (16.13 percent) to I = 25 (80.65 percent), with increased use the most common response for nearly every service.]
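The aggregate figures in this appendix are simple response frequencies over the 31 responding corporates. As a minimal sketch of the tabulation involved, the following Python fragment derives count-and-percentage entries of the form "I = 25 (80.65%)" from a hypothetical set of coded responses; the data and variable names are our own illustration, not GAO's actual processing:

```python
from collections import Counter

# Hypothetical coded responses for one service from the 31 corporates:
# I = increased, S = stayed the same, D = decreased, NA = not applicable.
responses = ["I"] * 25 + ["S"] * 2 + ["D"] * 1 + ["NA"] * 3

counts = Counter(responses)
total = len(responses)  # 31 respondents

# Print each category as "code = count (percent of respondents)".
for code in ("I", "S", "D", "NA"):
    n = counts.get(code, 0)
    print(f"{code} = {n} ({n / total:.2%})")
```

Each percentage is the category count divided by the 31 respondents, which is why a count of 25 appears as 80.65 percent.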
[Question 7a: Corporates reported ownership stakes in Credit Union Service Organizations (CUSO) including CU eArchive Solutions (CUeas); CU National Item Capture (CUNIC); CUFSLP (item processing for share drafts/checks); Carolina CU Services, Inc.; Credit Union Direct Lending (CUDL); CSC II, Inc.; CU Business Partners (CUBP); CU Investment Solutions, Inc.; CU West Mortgage (CUWM); Member Trade Advisory Services, LLC; Member Trade Financial Services, LLC; Mid-States Investment Solutions, Inc.; Open Financial Solutions, Inc.; and Wisconsin CD Shared Service Center. The table described each CUSO's services (for example, merchant services and entry into loans, deposits, payroll, and commercial credit analysis, and ALM educational, analytical, and advisory services) and indicated the total number of corporate credit unions reporting ownership stakes in each CUSO.]

[Question 10: 6.45 percent of respondents indicated that competition from other corporates will remain about the same.]

[Question 11: Current investment authorities held by corporate credit unions. Two corporates offered loan participations through a waiver from NCUA; in addition, one of these two respondents has "derivatives vendor status," as approved by NCUA.]

[Question 12: Corporate credit unions' plans to seek additional expanded authorities in the next two years.]

[Question 13: Percentage of corporate credit unions' investments made through U.S. Central. Percent calculations did not include U.S. Central itself; therefore, the base is 30 rather than 31 corporate credit unions.]

[Question 14: Planned use of U.S. Central over the next two years (continue the present level of investment, invest less, invest more, or do not currently invest with U.S. Central and do not plan to do so). The total exceeds 100 percent because one corporate credit union responded to two response categories and was included in both.]

[Question 16: Effects of NCUA's risk-based approach (positive response, for example, to targeted exams; no discernible impact; or concerns about the targeted exam approach). The column total is greater than 100 percent because two respondents expressed concerns in addition to their other comments: one had a positive reaction and the other experienced no discernible impact.]

[Question 17: Whether corporates currently have a national field of membership. Of those that do not, Kentucky Corporate Federal Credit Union has a regional field of membership, and LICU Corporate Federal Credit Union facilitates payment and payroll processing for a league of IBM credit unions.]

[Question 21a: Contributing factors in considering mergers included benefits to the corporate and its members, increases in membership, asset quality, and other factors.]

The corporate credit union network has consolidated since 1992, with asset concentration rising moderately. As corporates' investments have grown, their composition has changed, with relatively more emphasis on privately issued mortgage-related and asset-backed securities and a shift toward more variable-rate investments. Interest-related income and expense ratios, like net income ratios, have declined recently. In recent years, natural person credit unions have invested less in corporates.

Since 1992, the corporate system has consolidated, a change primarily driven by mergers. This consolidation trend has resulted in a moderate increase in asset concentration. For more detailed, year-by-year information, see the table and figures below. As noted earlier, investments, which are the vast majority of corporates' assets, have grown since 1992, but recently the percentage of corporates' investments in U.S. Central has declined somewhat and corporates have moved relatively more of their investments into privately issued mortgage-related and asset-backed securities.
We made this determination using call reports and other data (for more information on our methodology, see app. I). Since there were significant changes to NCUA's call reports in 1997, in the transition from Form 5300 to Form 5310, some account codes were not available before 1997 and thus could not be disaggregated. In general, corporates' investments in mortgage-backed securities (including mortgage pass-throughs, collateralized mortgage obligations, and real estate mortgage investment conduits) as a percentage of total investments declined from the mid-1990s through 1998 in the wake of the Cap Corp failure, which was largely driven by ineffective interest-rate risk management for collateralized mortgage obligations. Since 1998, however, corporates steadily have been increasing their investments in mortgage-backed securities. In addition, corporates have been shifting more of their investments in mortgage-related and asset-backed securities to variable-rate securities, a move that tends to lessen interest-rate risk. In particular, while 41.7 percent of corporates' asset-backed securities were classified as fixed-rate at the end of 1997, by the end of 2003 this proportion stood at 18.0 percent. The trend in collateralized mortgage obligations and real estate mortgage investment conduits (REMIC) has been similar, with a relatively greater proportion now classified as variable rate. Table 3 offers additional details of corporates' investments in U.S. Central, privately issued mortgage-related securities, and asset-backed securities from 1997 through 2003.

Concurrent with the recent low-interest-rate environment, corporates' interest-related income and expenses, relative to average assets, have declined, as illustrated in figure 10. Net interest income, total noninterest income, and operating expense ratios cycled from 1993 through 2003, generally expanding from 1995 through 2000 and then contracting through 2003. Recently, net interest income and operating expense ratios were lower and total noninterest income ratios were higher, suggesting that fee income has become more important for corporates. Net interest income as a percentage of average assets is often referred to as net interest margin. A corporate can maximize its net interest margin by effectively allocating resources among earning and nonearning assets, maintaining low levels of nonperforming assets, providing adequate funding through the lowest-cost mix of funds, and maintaining a strong capital position. In a volatile interest-rate environment, large changes in a corporate's net interest margin are associated with high interest-rate risk exposures and weak risk management. Net interest income, which is interest income minus interest expense, is normally the primary source of income for a corporate and a key indicator of earnings performance and stability. Interest income consists of interest earned on loans and investments; the major contributor to interest income within a corporate is normally the investment portfolio. Interest expense consists of the corporate's cost of funding operations, or simply its "cost of funds," and is realized through dividends on shares, share certificates, and member capital accounts and through interest on borrowings (for example, loans and commercial paper).
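Expressed as a formula, the net interest margin described above is the ratio of net interest income to average assets. The dollar figures in the worked example below are hypothetical, chosen only to illustrate the arithmetic:

```latex
\text{net interest margin}
  = \frac{\text{interest income} - \text{interest expense}}{\text{average assets}},
\qquad \text{e.g.,} \quad
\frac{\$50\text{M} - \$40\text{M}}{\$2{,}000\text{M}} = 0.50\%.
```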
As illustrated in figure 11, the spread between interest income and interest expense has narrowed significantly since 2000. Natural person credit unions' investments in corporates, which include membership capital, paid-in capital, and other investments, actually were lower at the end of 2003 than at the end of 1998, both in amount ($37.8 billion versus $29.1 billion) and as a percentage of investments (30.4 percent versus 18.1 percent). The smallest natural person credit unions (those with assets of less than $100 million) consistently invested more in corporates, as a percentage of their total investments, from 1998 through 2003. It is important to note that this measure does not include cash on deposit in corporates, since these data were not disaggregated from deposits in other financial institutions in the Form 5310 report until 2003. At the end of 2003, natural person credit unions reported $26.2 billion in cash on deposit at corporates, which represented over three-quarters of natural person credit unions' total cash on deposit. Corporates held $55.3 billion, or 26.9 percent, of the total amount of natural person credit unions' cash on hand, cash on deposit, and investments at the end of 2003, with the smallest natural person credit unions (those with assets of less than $100 million) holding around 34 percent and the largest (those with assets in excess of $1 billion) holding around 23 percent of their totals in corporates. While it cannot be confirmed with the available data, the growth in natural person credit unions' loans, coupled with the possibility that natural person credit unions have become more willing to invest their funds directly rather than through corporates, may have resulted in relatively fewer funds flowing from natural person credit unions into corporates.

U.S. Central Credit Union (U.S. Central) is a nonprofit cooperative that is owned by corporates, and it serves these member-owners much as corporates serve their natural person credit union members. Trends in U.S. Central's balance sheet and income statement suggest that its financial condition has been similar to that of other corporates, with greater profitability and slightly higher capital ratios. The balance sheet of U.S. Central grew overall from 1992 through 2003. However, as with the corporates, the dynamics of its asset and share growth have varied as the use of U.S. Central by its member-owners has varied. Investments, the vast majority of U.S. Central's assets, have mirrored the general growth pattern of its assets, declining through the early to mid-1990s and rising thereafter. Recently, U.S. Central has moved relatively more of its investments into privately issued mortgage-related securities. Overall, total assets and shares of U.S. Central have grown since 1992; after generally declining in the early to mid-1990s, they stood at $35 billion and $30.7 billion, respectively, by the end of 2003 (see fig. 12). U.S. Central's balance sheet is primarily influenced by the balance sheet dynamics of its underlying corporate member-owners, which have varied since 1992. As noted earlier, corporates saw their assets and shares decline in the early to mid-1990s but then rebound; corporates' assets and shares both grew by over 80 percent from 2000 through 2003. While displaying a similar cyclical trend from 1992 through 2003, U.S. Central did not experience the same degree of growth from 2000, as assets grew by around 54 percent and shares grew by around 57 percent.
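The growth rates cited above are cumulative percentage changes over the period. Read that way, the reported end-2003 asset level and 54 percent growth figure imply a rough back-calculation of the 2000 level; this is our illustrative arithmetic, not a figure reported by U.S. Central:

```latex
g_{2000\text{--}2003} = \frac{V_{2003}}{V_{2000}} - 1
\quad\Longrightarrow\quad
V_{2000} = \frac{V_{2003}}{1 + g}
\approx \frac{\$35\ \text{billion}}{1.54} \approx \$22.7\ \text{billion}.
```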
As with corporates, investments generally represent over 90 percent of U.S. Central's assets, as illustrated in figure 13. Investments in accounts at U.S. Central, including overnight accounts, term certificates, structured products, and membership shares, are important to many corporates, especially the smaller ones. As of the end of 2003, 17 corporates, or 57 percent of all corporates, had at least 70 percent of their total investments in accounts at U.S. Central, and 4 had over 60 percent. With the recent investment environment characterized by historically low interest rates, U.S. Central's members may have increased their utilization of U.S. Central's economies of scale to help increase the spreads between the rates they offered their customers and the rates they earned on their investments. In general, it seems sensible for corporates, especially the smaller ones, to be able to rely on the services of U.S. Central given its economies of scale. This reliance, however, adds more weight to the need for U.S. Central to be a safe and sound investment.

As U.S. Central's total investments have grown, the composition of these investments has changed, particularly with increases in investments in mortgage-related securities since 1997 (see table 4). U.S. Central's investments in privately issued mortgage-related securities increased from 3.4 percent of its total investments in 1997 to 24.7 percent in 2003. Overall, U.S. Central's mortgage-related investments, including government and agency mortgage-related issues, rose from 10.5 percent of its total investments to 32.7 percent over this period. As with corporates, asset-backed securities have consistently been an important investment type for U.S. Central. Holding variable-rate investments and securities with shorter weighted average lives tends to result in relatively lower interest-rate risk. U.S. Central, like the corporates, tends to have significant holdings of mortgage-related issues and asset-backed securities (80 percent of its portfolio was in such investments at the end of 2003) but holds most of these in the form of variable-rate and shorter weighted average life issues. Holdings of variable-rate asset-backed and privately issued mortgage-related securities accounted for 67 percent of all investments at the end of 2003. According to its 2003 annual report, at the end of the year, mortgage-related and asset-backed securities in U.S. Central's portfolio had weighted average lives of approximately 2.8 years and 3 years, respectively, and approximately 83 percent of interest-earning assets were set to reprice or mature within 1 year. Table 5 details selected investments of U.S. Central from 1997 through 2003.

Since 1998, U.S. Central's capital generally has been rising. Total capital, as defined in Part 704, rose from $1.2 billion in 1998 to $2 billion at the end of 2003. Figure 14 illustrates the growth in U.S. Central's total capital. Retained earnings and membership capital have grown overall, but paid-in capital has remained constant since 1999. Since 1998, undivided earnings (a component of retained earnings) have provided the fastest growth, increasing 61 percent, while membership capital, the largest component, has grown 37 percent. At the end of 2003, membership capital accounted for 58 percent, or $1.1 billion, of U.S. Central's total capital. Despite recent asset growth, U.S. Central's capital ratios have remained relatively stable, as shown in figure 15.
After peaking in 2000, capital ratios declined in 2001 but have since leveled off. They remain above the current regulatory requirements. U.S. Central's net income has grown since 1992 and was at its highest level at the end of 2003. As depicted in figure 16, after declining to $10.7 million at the end of 1994, U.S. Central's net income rebounded, generally rising through 1998. After peaking in 1998 at $38.4 million, net income declined to $22.8 million at the end of 2000. By the end of 2003, however, net income had tripled to $67.9 million. U.S. Central's profitability, that is, net income divided by average assets, followed the general pattern exhibited by net income since 1992, and it was at its highest at the end of 2003. As with the corporates, U.S. Central recently witnessed a narrowing of its yields on investments. However, while profitability suffered at the corporates after 2001, U.S. Central managed to increase its profitability, in part through increased noninterest income.

In 1998, NCUA revised Part 704. Among other things, the new regulations provided qualified corporates with expanded authorities that allowed corporates with a strong financial position, management, and infrastructure to exercise greater flexibility in managing their risks, subject to NCUA approval. For example, corporates with certain levels of expanded authorities were allowed to invest in foreign securities or A-rated securities, compared with the higher-rated AAA securities in which other corporates were allowed to invest. In 2002, NCUA again revised Part 704 to allow for further flexibility in expanded investment authorities. For example, qualified corporates were allowed to invest in BBB rated securities, subject to NCUA approval. Table 6 provides more detail on the types of investments allowed under the various levels of expanded authorities and the number of corporates that currently have these authorities.

In addition to those named in the body of this report, the following individuals made key contributions: William Chatlos, May Lee, John Lord, Alexandra Martin-Arseneau, Kimberley Mcgatlin, José R. Peña, Julie Phillips, Mitch Rachlis, Barbara Roesmann, Paul Thompson, and Richard Vagnoni.
Thousands of credit unions have placed about $55 billion of their excess funds in corporate credit unions (corporates). In a three-tiered system, corporates provide lending, investment, and processing services for their member credit unions. Problems with investments in the past prompted regulatory changes that required higher capitalization and stricter risk management but allowed for expanded investment authorities. GAO assessed (1) the changes in the financial condition of the corporate network and (2) the oversight of corporates by the National Credit Union Administration (NCUA), the federal regulator of credit unions.

Corporates face an increasingly challenging business environment that potentially could stress their overall financial condition. In response to the competitive environment, corporates are offering new and more sophisticated products and services, expanding their use of technology, and seeking opportunities to merge or collaborate with other corporates. The corporates' financial condition as measured by profitability and capital ratios remained close to a range that has prevailed since the mid-1990s. However, since 2000, a large influx of deposits, coupled with low returns on traditional corporate investments, has constrained earnings and caused a downward trend in corporates' overall profitability. To generate earnings, corporates increasingly have targeted more sophisticated and potentially riskier investments, but they appear to be managing risk by shifting toward more variable-rate and shorter-term investments, providing a potentially better match for the relatively short-term nature of their members' deposits. However, the corporates' changing business environment and utilization of more sophisticated and riskier investments increase the importance of NCUA regularly assessing its oversight processes to ensure that corporates are properly managing these risks.

NCUA has strengthened its oversight of corporates by creating a centralized office for oversight, revising regulations, implementing risk-focused supervision, and hiring specialists. However, NCUA faces challenges in identifying networkwide problems on a consistent basis, using specialists effectively, providing relevant guidance on mergers, and assuring the quality of corporates' internal controls. Although NCUA identified deficiencies during its examinations, it has not systematically tracked their resolution or evaluated trends in examination data, which could help anticipate emerging issues facing corporates. NCUA also did not fully consider all risks when allocating resources or assigning specialists to examinations, leading it to overlook some information system deficiencies. Although corporates continue to consider mergers to remain competitive, NCUA had not developed adequate guidance for submitting and reviewing merger proposals. Finally, NCUA has not ensured that corporates' internal controls have remained consistent with those of similarly sized financial institutions.
Substantial numbers of Army and Marine Corps ground combat servicemembers are exposed to combat experiences often associated with an increased risk of developing PTSD or other mental health conditions. Specifically, according to a 2004 study, more than half of servicemembers in Army or Marine Corps ground combat units in OEF or OIF reported being shot at or receiving small-arms fire, seeing dead or seriously wounded Americans, or seeing ill or injured women or children whom they were unable to help. More than half of Marine Corps servicemembers and almost half of Army servicemembers reported killing an enemy combatant in OIF. In addition to certain types of experiences, multiple deployments are also associated with mental health problems. For example, a 2006 Army mental health advisory team report found that Army servicemembers who had been deployed more than once were more likely to screen positive for PTSD, depression, or anxiety than those deployed only once. In a 2008 Army mental health advisory team report, 27 percent of male Army noncommissioned officers in their third or fourth deployment screened positive for PTSD, depression, or anxiety, compared with 12 percent of those on their first deployment.

Servicemembers are also exposed to events such as blasts that increase their risk of experiencing a TBI. TBI occurs when a sudden trauma causes damage to the brain and can result in loss of consciousness, confusion, dizziness, trouble with concentration or memory, and seizures. Of particular concern are the after-effects of a mild TBI that may not have resulted in readily apparent symptoms at the time of the injury. A recent study found that mild TBI was associated with high combat intensity and multiple exposures to explosions in combat. Identification of mild TBI is important, as treatment has been shown to mitigate the injury's effects, which can include difficulty returning to work or completing routine daily activities. DVBIC has issued a screening tool called the Military Acute Concussion Evaluation (MACE), which is based on a screening tool widely used in sports medicine and is intended to evaluate a servicemember within 48 hours of the suspected injury. In June 2007, the Army required health care providers to document a servicemember's blast exposure in theater using the MACE. DVBIC also issued, in December 2006, a CPG for the management of mild TBI in theater. The guidance contains a structured series of questions that include certain "red flags," such as worsening headaches or slurred speech, that should trigger further evaluation for a possible mild TBI. Treatments for mild TBI may include education, medication, and physical and psychiatric therapy.

There are multiple opportunities during the deployment cycle for screening and assessing servicemembers' health status. Specifically, DOD requires three health assessments during the deployment cycle: the pre-deployment health assessment, the PDHA, and the PDHRA. In addition, DOD requires an annual periodic health assessment (PHA). These assessments and their associated forms are described in Table 1. DOD's Instruction on Deployment Health, which implements policies and prescribes procedures for deployment health activities, requires deploying servicemembers to complete the pre-deployment health assessment form, the DD 2795, within 60 days prior to the expected deployment date.
The DD 2795 is a brief form on which servicemembers self-report general health information in order to identify any health concerns that may limit deployment or need to be addressed prior to deployment; it consists of eight questions that each servicemember must complete (see fig. 1). DOD's Instruction on Deployment Health states that after the servicemember completes the DD 2795, the form is to be reviewed by a health care provider, who can be a nurse, medical technician, medic, or corpsman. If the servicemember indicates a positive, or "yes," response to any one of certain questions (2, 3, 4, 7, or 8), the servicemember is to be referred for an interview by a trained health care provider such as a physician, physician assistant, nurse practitioner, or advanced practice nurse. The provider signs the form indicating whether the individual is medically ready for deployment, and a copy of the DD 2795 is placed in the servicemember's deployment health record. The deployment health record is a summary of the medical record that is to accompany the servicemember into theater. According to DOD's Instruction on Deployment Health, this record should also contain a record of the servicemember's blood type, allergies, corrective lens prescription, immunization record, and a summary sheet listing past and current medical conditions, screening tests, and prescriptions.

DOD's Instruction on Deployment Health requires servicemembers returning from deployment to complete the post-deployment health assessment form, the DD 2796, within 30 days of leaving a combat theater or within 30 days of returning to home or a processing station. The DD 2796 is a form on which servicemembers self-report health concerns commonly associated with deployments. In January 2008, DOD released a new version of the DD 2796 that contains screening questions related to mental health, including questions used to screen for depression, suicidal thoughts, and PTSD. The screening questions for depression, suicidal thoughts, and alcohol abuse are more detailed on the new form than on the previous version of the DD 2796. (See appendix I for a copy of the new version of the DD 2796.) The DD 2796 must be reviewed, completed, and signed by a health care provider. According to DOD's Instruction on Deployment Health, the health care provider conducting the assessment must be a physician, physician assistant, nurse practitioner, advanced practice nurse, independent duty medical technician, independent duty corpsman (IDC), or Special Forces medical sergeant, and the provider review is to take place in a face-to-face interview with the servicemember. The health care provider is to review the completed DD 2796 to identify any responses that may indicate a need for further medical evaluation. In addition, the new DD 2796 contains guidance intended to assist a provider in determining whether to make a referral for some mental health concerns. For example, the form prompts the provider to conduct a risk assessment for suicide depending on the servicemember's response to the suicide risk questions. Health care providers use a section of the DD 2796 to indicate when a servicemember needs a referral. The referral field specifies both the concern for which the servicemember is being referred, such as depression or PTSD symptoms, and the type of care or provider to whom the servicemember is being referred, such as primary care, mental health, specialty care, family support services, chaplains, or Military OneSource.
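The DD 2795 review rule described above reduces to a simple trigger on specific item responses. The sketch below expresses that logic in Python; the function name, data structure, and response coding are our own illustration and are not part of any DOD system, although the trigger set (questions 2, 3, 4, 7, and 8) follows the Instruction as described above:

```python
# Questions on the DD 2795 whose "yes" response triggers referral to a
# trained health care provider, per DOD's Instruction on Deployment
# Health as described above. (Illustrative sketch only.)
REFERRAL_QUESTIONS = {2, 3, 4, 7, 8}

def needs_referral(responses):
    """Return True if any referral-triggering question was answered 'yes'.

    `responses` maps question number (1-8) to 'yes' or 'no'.
    """
    return any(
        responses.get(q, "no").lower() == "yes" for q in REFERRAL_QUESTIONS
    )

# Example: answering "yes" only to questions 5 and 7 still triggers a
# referral, because question 7 is in the trigger set.
example = {1: "no", 2: "no", 3: "no", 4: "no",
           5: "yes", 6: "no", 7: "yes", 8: "no"}
assert needs_referral(example)
```

An analogous check could be written against the DD 2796, although there the form's embedded provider guidance, rather than a fixed question set, drives referral decisions.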
DOD’s Instruction on Deployment Health requires servicemembers returning from deployment to complete the post-deployment health assessment form, the DD 2796, within 30 days of leaving a combat theater or within 30 days of returning to home or a processing station. The DD 2796 is a form for servicemembers to self-report health concerns commonly associated with deployments. In January 2008, DOD released a new version of the DD 2796 that contains screening questions related to mental health, including questions used to screen for depression, suicidal thoughts, and PTSD. The screening questions for depression, suicidal thoughts, and alcohol abuse are more detailed on the new form than on the previous version of the DD 2796 (see appendix I for a copy of the new version of the DD 2796). The DD 2796 must be reviewed, completed, and signed by a health care provider. According to DOD’s Instruction on Deployment Health, the health care provider conducting the assessment must be a physician, physician assistant, nurse practitioner, advanced practice nurse, independent duty medical technician or IDC, or Special Forces medical sergeant. According to DOD’s Instruction on Deployment Health, the health care provider review is to take place in a face-to-face interview with the servicemember. The health care provider is to review the completed DD 2796 to identify any responses that may indicate a need for further medical evaluation. In addition, the new DD 2796 contains guidance intended to assist a provider in determining whether to make a referral for some mental health concerns. For example, the form prompts the provider to conduct a risk assessment for suicide depending on the servicemember’s response to the suicide risk questions. Health care providers use a section of the DD 2796 to indicate when a servicemember needs a referral. The referral field specifies both the concern for which the servicemember is being referred, such as depression or PTSD symptoms, and the type of care or provider to whom the servicemember is being referred, such as primary care, mental health, specialty care, family support services, chaplains, or Military OneSource. DOD requires that the DD 2796 be placed in the servicemember’s medical record.

DOD requires an annual health assessment, the PHA, for all servicemembers. The PHA is designed to ensure servicemember medical readiness by monitoring servicemember health status, and it helps DOD provide preventive care, information, counseling, or treatment if necessary. In February 2006, DOD required the military services to begin administering the PHA, which includes servicemember self-reporting of health status, conditions, treatments, and medications, as well as provider review of the medical record and identification of and referral for any health issues. The PHA also includes efforts to identify and manage preventive needs and occupational risks and exposures, as well as to identify and recommend a plan to manage those risks. DOD requires its providers to record the results of the PHA in servicemembers’ medical records. DOD has created an online tool to capture self-reported information from the PHA. A draft of this form contains several mental health questions, including PTSD and depression screening questions that are similar to the current PTSD and depression questions on the DD 2796.

While several DOD information systems contain servicemember medical information, the Composite Health Care System (CHCS) I and the Armed Forces Health Longitudinal Technology Application (AHLTA), formerly known as CHCS II, are the two electronic medical records systems generally used by DOD health care providers to make PDHA referrals. Although the military services currently employ both systems, there are several differences between the two. For example, CHCS I is a localized system, meaning information contained within CHCS I is available only to medical facilities on a particular military installation; information is not available to military treatment facilities (MTFs) on other installations. In contrast, information in AHLTA is available to medical facilities at different installations and to providers in theater. Another distinction is that CHCS I sends health care providers an email alert when a servicemember they refer makes, completes, or cancels an appointment. If servicemembers do not make appointments within 30 days, their referrals are terminated in CHCS I and the health care provider is notified by email. AHLTA does not have this capability. DOD has been expanding AHLTA’s capabilities and plans to replace certain CHCS I functions, such as laboratory tests, with AHLTA.
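The CHCS I referral behavior described above—an email alert on appointment activity, and automatic termination with notification after 30 days without an appointment—can be summarized in a short sketch. This illustrates the described behavior only; it is not CHCS I code, and the function and its inputs are assumptions.

```python
# Sketch of the CHCS I alert-and-expire behavior described in the text.
from datetime import date, timedelta

REFERRAL_WINDOW = timedelta(days=30)

def referral_status(referral_date, appointment_made, today, alert):
    """Alert the referring provider on appointment activity; terminate the
    referral if no appointment has been made within 30 days."""
    if appointment_made:
        alert("appointment activity on referral")
        return "active"
    if today - referral_date > REFERRAL_WINDOW:
        alert("referral terminated: no appointment made within 30 days")
        return "terminated"
    return "active"

print(referral_status(date(2008, 1, 2), False, date(2008, 2, 15), print))
```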
DOD has taken steps to meet the 2007 NDAA requirements for pre-deployment mental health standards and screening. As required by the 2007 NDAA, which was enacted in October 2006, DOD issued minimum mental health standards that servicemembers must meet in order to be deployed. In a policy issued in November 2006, DOD identified mental health disorders that would preclude a servicemember’s deployment, including conditions such as bipolar disorder. DOD’s policy also identified psychotropic medications that would limit or preclude deployment if used by servicemembers—including antipsychotic or anticonvulsant medications used to control bipolar symptoms and certain types of tranquilizers and stimulant medications. In addition to identifying the mental health conditions and medications that would preclude deployment, DOD’s policy specified the circumstances under which servicemembers with other mental health conditions can be deployed. Specifically, according to DOD’s policy, when a servicemember has been diagnosed with a mental health condition that does not preclude deployment, the servicemember should be free of “significant” symptoms associated with this condition for at least three months before he or she can be deployed. The policy also states that in making a deployability assessment, health care providers should consider the environmental and physical stresses of the deployment and whether continued treatment will be available in theater. Finally, the policy identified the pre-deployment health assessment as a mechanism for screening servicemembers for mental health conditions and for ensuring that the standards are utilized in making deployment determinations.

The 2007 NDAA also required DOD to use the pre-deployment health assessment to identify those who are under treatment or have taken psychotropic medications for a mental health condition. The pre-deployment health assessment form, the DD 2795, includes a question asking servicemembers whether they have sought mental health counseling or mental health care in the past year. In a July 2007 report to Congress, DOD cited the pre-deployment health assessment in describing its implementation of the 2007 NDAA requirements for pre-deployment screening. The report also identified a medical record review as a component of the pre-deployment health assessment process to help meet these mental health screening requirements. According to a senior DOD official, because servicemembers may be reluctant to disclose symptoms or treatment that may prevent them from deploying, the provider review of the medical record should be done to verify the self-reported information on the DD 2795.

While medical records are an important part of making deployment determinations, DOD’s deployment policies are not consistent with respect to their review. DOD’s November 2006 policy on minimum mental health standards for deployment states that the pre-deployment health assessment includes a medical record review as part of ensuring the standards are utilized, and DOD officials confirmed that the policy requires such a review. However, DOD’s August 2006 Instruction on Deployment Health, which implements policies and prescribes procedures for deployment health activities, is silent on whether a review of medical records is required as part of the pre-deployment health assessment. This Instruction states only that the pre-deployment health assessment form, DD 2795, must be completed by each deploying servicemember and the responses reviewed by a health care provider. A health care provider following DOD’s Instruction may not conduct the medical record review during the pre-deployment health assessment required by DOD’s policy on minimum mental health standards for deployment. Because of DOD’s inconsistent policies, providers determining whether OEF and OIF servicemembers meet DOD’s minimum mental health deployment standards may not have complete medical information. During our site visits, we found that practices varied with respect to pre-deployment mental health screening, and medical records were not routinely reviewed at the time of the pre-deployment health assessment by the provider reviewing the DD 2795. While a review of medical records can serve to validate information reported by servicemembers, the health care providers we spoke with during our site visits were unaware that it was required as part of the pre-deployment health assessment.
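The deployability rule in DOD’s November 2006 policy, summarized at the start of this discussion, reduces to a few checks. The sketch below is a simplification for illustration only; in particular, it collapses the provider’s judgment about deployment stresses and in-theater treatment availability into a single flag.

```python
# Simplified sketch of the November 2006 deployability rule described above.

def deployable(condition_precludes_deployment, months_without_significant_symptoms,
               provider_judgment_favorable):
    if condition_precludes_deployment:           # e.g., bipolar disorder
        return False
    if months_without_significant_symptoms < 3:  # symptom-free period required
        return False
    return provider_judgment_favorable           # stresses, in-theater treatment

print(deployable(False, 4, True))   # True
print(deployable(False, 2, True))   # False: symptomatic within three months
```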
At all three installations we visited, servicemembers completed the DD 2795 form. At two of the three installations, all servicemembers were interviewed by a health care provider to review their responses on the DD 2795 and discuss any additional health concerns. At the third installation, providers interviewed servicemembers only if they indicated any concerns on the DD 2795. While the deployment health record was available to providers at all three installations, the medical record was routinely reviewed by the provider at only one of the three installations during the pre-deployment health assessment. At the other two installations, providers told us the record was reviewed only if servicemembers identified concerns on the DD 2795 or during the interview.

Health care providers at Fort Campbell and Camp Lejeune manually track whether servicemembers who receive mental health referrals from the PDHA make or keep appointments for evaluations with mental health providers. DOD does not require that individual referrals from the PDHA be tracked; however, DOD has a quality assurance program that monitors the PDHA, including follow-up encounters. In addition, because Guard and Reserve servicemembers generally receive civilian care, which they do not have to disclose, and because servicemembers may be reluctant to disclose mental health encounters due to stigma concerns, Guard and Reserve referrals are difficult to track. While DOD health care providers generally make PDHA referrals using one of two DOD information technology systems, AHLTA or CHCS I, health care providers at military installations we visited have developed different manual systems to track whether referred servicemembers made or kept appointments with mental health providers. DOD does not require these referrals to be tracked. However, a Fort Campbell health care provider we spoke with said that the health care providers who make referrals from the PDHA may not have an ongoing relationship with the referred servicemembers and, therefore, manual systems have been created to track whether referred servicemembers completed their evaluations. According to installation health care providers, manually tracking referrals is labor-intensive and time-consuming, but necessary to ensure that referred servicemembers receive their evaluations.

We found that health care providers at Fort Campbell and Camp Lejeune have developed manual tracking systems to ensure that servicemembers receive evaluations. At Fort Campbell, the installation’s readiness processing manager, who is the health care provider who tracks PDHA referrals, created an Access database for this purpose. The manager checks CHCS I, the information technology system Fort Campbell health care providers use to make PDHA referrals, daily to obtain the status of each referral. The manager then manually enters the status of each referral into the Access database, which allows all PDHA referrals and their status to be viewed in one list. Servicemembers who fail to make or keep their appointments are contacted, and if a servicemember does not respond after two follow-up attempts, the unit commander is informed.
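Fort Campbell’s escalation workflow—contact servicemembers who miss appointments, and inform the unit commander after two unanswered follow-up attempts—can be sketched as follows. The class and field names are hypothetical stand-ins for the Access database records described above.

```python
# Sketch of the Fort Campbell tracking-and-escalation workflow described above.

class TrackedReferral:
    def __init__(self, servicemember):
        self.servicemember = servicemember
        self.followup_attempts = 0

def daily_check(referral, chcs_status):
    """chcs_status is the appointment status pulled from CHCS I each day."""
    if chcs_status in ("made", "completed"):
        return "no action needed"
    referral.followup_attempts += 1          # contact the servicemember
    if referral.followup_attempts > 2:
        return f"inform unit commander about {referral.servicemember}"
    return "follow up with servicemember"

r = TrackedReferral("SGT Example")
for status in ("none", "none", "none"):
    print(daily_check(r, status))
# follow up with servicemember
# follow up with servicemember
# inform unit commander about SGT Example
```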
At Camp Lejeune, health care providers track division servicemembers’ PDHA mental health referrals to the division psychiatrist using hard-copy logbooks. Because the division psychiatrist’s clinic does not have access to AHLTA or CHCS I, health care providers make referrals by phoning the division psychiatrist and follow up with the psychiatrist every two weeks to track whether servicemembers kept their appointments. Camp Lejeune officials told us that, unlike the division, the air wing’s and logistics group’s PDHA mental health referral tracking is facilitated by having greater access to AHLTA, which allows providers to check the status of appointments scheduled at the MTF. We found that mental health PDHA referrals for Marine Reserve members who complete the PDHA at Camp Lejeune are tracked manually. Officials from the Marine Reserves’ Deployment Support Group (DSG) at Camp Lejeune inform the home units of Reserve member referrals and track their status. According to a Fort Campbell health care provider, Army Reserve members are not processed through Fort Campbell following deployment and, therefore, do not complete the PDHA at this installation.

According to Guard and Reserve officials, home units rely largely on servicemembers to disclose whether they receive care from a mental health provider. Tracking PDHA mental health referrals is challenging for the Guard and Reserves because their members generally receive civilian care. Military health care providers would be unaware of civilian care unless it was disclosed by the Guard and Reserve member. In addition, Military OneSource, which is operated by a vendor contracted by DOD, guarantees that it will not release the identity of servicemembers who receive counseling unless servicemembers are at risk of harming themselves or others. As a result, PDHA mental health referral tracking is challenging for Guard and Reserve units because they must rely on servicemembers to disclose mental health encounters with civilian providers, which Guard and Reserve officials told us servicemembers may be reluctant to do because of stigma concerns.

While DOD policy allows several types of health care providers to conduct the PDHA, health care providers at Fort Campbell and Camp Lejeune told us that the health care providers actually conducting the assessments are generally physicians, physician assistants, or, in the case of the Marine Corps, IDCs. According to installation health care providers, most of the physicians conducting the assessments have specialties in primary care, which includes the specialties of family practice and internal medicine. The health care providers conducting these health assessments receive varying levels of training in mental health issues based on provider type during their basic medical education. For example, physician assistants complete a rotation in psychiatry and may elect an additional psychiatry rotation, while IDCs receive training in psychiatric disorders as part of a unit on medical diagnosis and treatment that covers several types of medical conditions. Physicians receive mental health training in medical school. DOD provides several types of guidance for health care providers to help them conduct mental health assessments and decide whether to make referrals for further evaluation. DOD maintains a Web site that contains CPGs and other guidance and training that can be accessed by health care providers conducting the assessments. For example, DOD provides a set of reference materials on the Web site that contains information on and steps to assess servicemembers for PTSD and major depressive disorder.
According to DOD, hard copy versions of these reference materials were distributed to MTFs beginning in July 2004, and MTFs may order additional copies. We found that health care providers conducting the PDHA had varying familiarity with the CPGs and levels of comfort in conducting assessments. For example, at Camp Lejeune, some of the physicians and IDCs we interviewed about DOD’s guidance were not familiar with the CPGs for depression and PTSD. Some physicians and IDCs cited resource constraints, in the form of limited access to computers and internet connectivity, as barriers to accessing these CPGs posted on the Web site. At Fort Campbell, a brigade surgeon we spoke to who supervises providers conducting the PDHA said that these providers have varying knowledge of the CPGs. He stated that the guidance is distributed to email accounts that some health care providers may not check regularly. In addition, health care providers varied in their level of comfort in making mental health assessments. At Camp Lejeune, eight of the 15 physicians and IDCs we interviewed were comfortable making mental health assessments, while the remaining seven were less comfortable making these assessments and expressed interest in receiving more training on making mental health assessments. At Fort Campbell, the division mental health providers we spoke with stated that while physician assistants, for example, could identify a servicemember with mental health concerns, these providers were generally not comfortable in assessing servicemembers for mental health issues.

DOD and the military services have implemented and are in the process of implementing several new mental health training initiatives. In November 2007, DOD created the Center of Excellence for Psychological Health and Traumatic Brain Injury, which will focus on research, education, and training related to mental health. According to DOD, the Center will develop and distribute a core mental health curriculum for health care providers, as well as implement policies to direct training in the curriculum across the services. DOD plans to begin training primary care providers in July 2008. The Army has created a program, RESPECT-MIL, that trains primary care providers in identifying and treating servicemembers with depression and PTSD. By the end of 2008, the Army plans to train providers at 15 installations. The Army also directed all servicemembers, including health care providers, to participate, by October 18, 2007, in a training program that includes information on PTSD. The training focused on the causes and physical and psychological effects of PTSD and provided information on how to seek subsequent treatment for this condition. As of January 31, 2008, 93 percent of Army servicemembers had received the training. The Army also requires commanders to include PTSD awareness and response training in pre- and post-deployment briefings. The Marine Corps has a training program for non-mental health providers, including those who conduct the PDHA, that includes training on PTSD. This training began in January 2008 and is scheduled to train 669 health care providers at 12 sites by August 2008. The Marine Corps also requires pre- and post-deployment briefings on identifying and managing combat stress for all Marine Corps servicemembers and unit leaders. In response to the 2007 NDAA, DOD added TBI screening questions to the PDHA in January 2008 and plans to begin screening all servicemembers prior to deployment in July 2008.
Prior to these TBI screening efforts required by DOD, several installations had already implemented efforts to screen servicemembers before or after their deployments. To help health care providers screen servicemembers for mild TBI and issue referrals, DOD has issued guidance and provided various forms of training.

In response to the 2007 NDAA requirement for pre- and post-deployment screening for TBI, DOD has added TBI screening questions to the PDHA and plans, beginning in July 2008, to require screening of all servicemembers for mild TBI prior to deployment. The pre-deployment screening questions are similar to the screening questions on the PDHA and are included in a cognitive assessment tool that will provide a baseline of cognitive function in areas such as memory and reaction time. In January 2008, DOD released a new version of the post-deployment health assessment form, the DD 2796, that contains screening questions for TBI (see appendix I for a copy of the new version of the DD 2796). The TBI screening questions added to the PDHA are designed to be completed by the servicemember in four series. The sequence of questions specifically assesses (a) events that may have increased the risk of a TBI, (b) immediate symptoms following the event, (c) new or worsening symptoms following the event, and (d) current symptoms. (See appendix I.) If there is a positive response to any question in the first series, the servicemember completes the second and third series; if there is a positive response to any question in the third series, the servicemember completes the fourth series about current symptoms. The DD 2796 directs the health care provider to refer the servicemember based on the servicemember’s current symptoms. See figure 2 for a description of these screening questions.
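The four-series branching on the new DD 2796, described above, can be expressed as a short decision function. This sketch is illustrative only: the actual form wording is omitted, and booleans stand in for “any positive response in the series.”

```python
# Sketch of the DD 2796 TBI screening flow: a positive response in series (a)
# opens series (b) and (c); a positive response in series (c) opens series (d);
# referral is keyed to current symptoms.

def tbi_screen(series_a, series_b, series_c, series_d):
    """Each argument is a list of booleans, one per question in that series.
    series_b (immediate symptoms) is recorded for the provider, but the
    branching shown here keys on series (a), (c), and (d), per the form."""
    if not any(series_a):
        return "complete: no qualifying event"
    if not any(series_c):
        return "complete: no new or worsening symptoms"
    if any(series_d):
        return "provider referral based on current symptoms"
    return "complete: no current symptoms"

print(tbi_screen([True], [True], [True], [True, False]))
# provider referral based on current symptoms
```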
DOD is planning to require screening of all servicemembers for mild TBI prior to deployment using questions similar to those on the PDHA. This screening is planned to begin in July 2008, and the screening questions are included in a cognitive assessment, the Automated Neuropsychological Assessment Metrics (ANAM). The ANAM will provide a baseline assessment of cognitive function in areas such as memory and reaction time, which may be affected by a mild TBI. If a servicemember experiences an event in theater, the ANAM can be administered again and the differences in function assessed. Because the ANAM does not distinguish between impairments in cognitive function caused by events such as blasts and those caused by other factors such as fatigue, the ANAM needs to be used with screening questions to identify the event that may have caused a TBI. However, the ANAM can be used to identify changes in baseline cognitive function that may warrant further evaluation. According to an Army official, since August 2007 about 50,000 Army servicemembers have been assessed using the ANAM.

Prior to DOD’s plans to screen all servicemembers on the PDHA and prior to deployment, several installations had implemented, as early as 2000, initiatives for mild TBI screening to be used before or after units from those locations deployed. Generally, servicemembers participating in these initiatives are screened using a three-question screen developed by the DVBIC called the Brief Traumatic Brain Injury Screen (BTBIS). The BTBIS is designed to identify servicemembers who may have had a mild TBI and includes questions about events and symptoms that are similar to those used on DOD’s PDHA. The first of these initiatives began at Fort Bragg, North Carolina, in 2000. Since then, Fort Carson, Colorado; Fort Irwin, California; Fort McCoy, Wisconsin; and Camp Pendleton, California, have initiated screening for mild TBI either pre-deployment, post-deployment, or both. A DVBIC official told us that these initiatives would probably be replaced by the DOD-wide screening.

DOD issued guidance for health care providers on the identification of mild TBI, trained some health care providers on identifying mild TBI, and plans additional health care provider training initiatives. In October 2007, DOD released guidance on identifying mild TBI for providers screening, assessing, and treating servicemembers outside the combat theater. The guidance contains information to help health care providers conducting the PDHA, including follow-up questions that the provider can ask a servicemember based on the servicemember’s responses to the TBI screening questions on the PDHA. The guidance contains a structured series of questions that include certain “red flags,” such as double vision or confusion, that suggest a need for referral for further evaluation for a possible mild TBI. The guidance recommends assessments and treatments for servicemembers with symptoms such as irritability and includes screening tools to help health care providers assess the severity of these symptoms. According to a DOD official, DOD also plans to provide the military services with guidance on using the new TBI screening questions on the PDHA.

In addition to issuing guidance, DOD and the military services have also trained health care providers on identifying possible mild TBI. In September 2007, DOD held a tri-service conference in which more than 800 health care providers were trained. According to DVBIC officials, DVBIC staff provide training through workshops for health care providers at its 14 sites and travel to other installations to train health care providers. In addition, DOD’s planned Defense Center of Excellence for Psychological Health and Traumatic Brain Injury, which began initial operations on November 30, 2007, and is expected to be fully functional by October 2009, will develop a national collaborative network to advance and disseminate TBI knowledge, enhance clinical and management approaches, and facilitate services for those dealing with TBI, according to DOD. According to Army officials, the Army is also initiating several health care provider training efforts for the summer of 2008 designed to train primary care providers on mild TBI. According to these officials, primary care providers are generally uncomfortable with treating mild TBI, preferring instead to refer these cases to specialty care. The Marine Corps’ training program for non-mental health care providers, including those conducting the PDHA, also includes material on diagnosing mild TBI. With respect to the ANAM, DVBIC officials told us that wherever this assessment tool is used, DVBIC officials and officials responsible for the implementation of the ANAM train health care providers in its use.

DOD has taken positive steps to implement provisions of the 2007 NDAA related to screening servicemembers for TBI and mental health. For example, DOD has added mild TBI screening to its PDHA and will require screening prior to deployment. With respect to mental health, we found that health care providers’ familiarity with DOD’s CPGs and comfort in making mental health assessments varied.
However, DOD and the military services have implemented or are implementing training initiatives, some of which are specifically aimed at the primary care providers who generally conduct the PDHA. Furthermore, the installations we visited had developed manual systems for tracking those servicemembers who were referred from the PDHA to ensure that they made or completed their appointments. Referral tracking is difficult for the Guard and Reserves because their servicemembers generally receive civilian care.

DOD has taken steps to meet 2007 NDAA requirements related to mental health standards and screening, including issuing a policy on minimum mental health standards for deployment. A key component of DOD’s efforts to meet these requirements is a review of medical records, and we agree that this should be done to verify information in a screening process that depends on self-reported information. Unfortunately, DOD’s policies for reviewing medical records during the pre-deployment health assessment are inconsistent. During our site visits we found that health care providers were unaware that a medical record review was required and that medical records were not always reviewed by providers conducting the pre-deployment health assessment. A health care provider following DOD’s Instruction on Deployment Health, which is silent on whether medical record review is required during the pre-deployment health assessment, may not conduct the medical record review required by DOD’s policy on minimum mental health standards for deployment. Until DOD resolves the inconsistency between its policies, its health care providers may not have complete mental health information when screening servicemembers prior to deployment.

In order to address the inconsistency in DOD’s policies related to the review of medical record information and to assure that health care providers have reviewed the medical record when screening servicemembers prior to deployment, we recommend that the Secretary of Defense direct the Under Secretary of Defense for Personnel and Readiness to revise DOD’s Instruction on Deployment Health to require a review of medical records as part of the pre-deployment health assessment. In commenting on a draft of this report, DOD stated that our concerns regarding provider review of medical records are well-taken and that an assessment is only complete when it includes a medical record review. While DOD concurred with our recommendation and said that it will update its Instruction on Deployment Health to require a medical record review at the time of the pre-deployment health assessment, DOD is limiting this medical record review requirement to servicemembers who have had a significant change in health status since their most recent periodic health assessment. According to a senior DOD health official, it is anticipated that the updated Instruction will be published in one year. However, DOD does not explain how providers will be able to identify the subset of servicemembers who have had a significant change in health status. As a result, its response does not fully eliminate the inconsistency between its policy and current Instruction. To fully eliminate the inconsistency, as we recommended, DOD should require a medical record review for all servicemembers as part of the pre-deployment health assessment in its updated Instruction.
We also encourage DOD to update its Instruction as quickly as possible so that providers have the complete information that we and DOD agree they need to make pre-deployment decisions. DOD also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the Secretary of Defense; the Secretaries of the Army, the Air Force, and the Navy; the Commandant of the Marine Corps; and appropriate congressional committees and addressees. We will also provide copies to others upon request. In addition, the report is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or bascettac@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. In addition to the contact named above, Marcia Mann, Assistant Director; Eric Anderson; Krister Friday; Lori Fritz; Adrienne Griffin; Amanda Pusey; and Jessica Cobert Smith made key contributions to this report. VA Health Care: Mild Traumatic Brain Injury Screening and Evaluation Implemented for OEF/OIF Veterans, but Challenges Remain. GAO-08-276. Washington, D.C.: February 8, 2008. VA and DOD Health Care: Administration of DOD’s Post-Deployment Health Reassessment to National Guard and Reserve Servicemembers and VA’s Interaction with DOD. GAO-08-181R. Washington, D.C.: January 25, 2008. Defense Health Care: Comprehensive Oversight Framework Needed to Help Ensure Effective Implementation of a Deployment Health Quality Assurance Program. GAO-07-831. Washington, D.C.: June 22, 2007. Post-Traumatic Stress Disorder: DOD Needs to Identify the Factors Its Providers Use to Make Mental Health Evaluation Referrals for Servicemembers. GAO-06-397. Washington, D.C.: May 11, 2006. Military Personnel: Top Management Attention Is Needed to Address Long-standing Problems with Determining Medical and Physical Fitness of the Reserve Force. GAO-06-105. Washington, D.C.: October 27, 2005. Defense Health Care: Occupational and Environmental Health Surveillance Conducted during Deployments Needs Improvement. GAO-05-903T. Washington, D.C.: July 19, 2005. Defense Health Care: Improvements Needed in Occupational and Environmental Health Surveillance during Deployments to Address Immediate and Long-term Health Issues. GAO-05-632. Washington, D.C.: July 14, 2005. VA Health Care: VA Should Expedite the Implementation of Recommendations Needed to Improve Post-Traumatic Stress Disorder Services. GAO-05-287. Washington, D.C.: February 14, 2005. Defense Health Care: Force Health Protection and Surveillance Policy Compliance Was Mixed, but Appears Better for Recent Deployments. GAO-05-120. Washington, D.C.: November 12, 2004. VA and Defense Health Care: More Information Needed to Determine If VA Can Meet an Increase in Demand for Post-Traumatic Stress Disorder Services. GAO-04-1069. Washington, D.C.: September 20, 2004.
The John Warner National Defense Authorization Act for Fiscal Year 2007 included provisions regarding mental health concerns and traumatic brain injury (TBI). GAO addressed these issues as required by the Act. In this report GAO discusses (1) DOD efforts to implement pre-deployment mental health screening; (2) how post-deployment mental health referrals are tracked; and (3) screening requirements for mild TBI. GAO selected the Army, Marine Corps, and Army National Guard for the review. GAO reviewed documents, interviewed DOD officials, and conducted site visits to three military installations where the pre-deployment health assessment was being conducted.

DOD has taken positive steps to implement mental health standards for deployment and pre-deployment mental health screening. However, DOD's policies for providers to review medical records are inconsistent. DOD issued minimum mental health standards that servicemembers must meet in order to be deployed to a combat theater and identified the pre-deployment health assessment as a mechanism for ensuring their use in making deployment decisions. DOD's November 2006 policy implementing these deployment standards requires a review of servicemember medical records during the pre-deployment health assessment. However, DOD's August 2006 Instruction on Deployment Health, which implements policy and prescribes procedures for conducting pre-deployment health assessments, is silent on whether such a review is required. Because of this inconsistency, providers determining if Operation Enduring Freedom and Operation Iraqi Freedom servicemembers meet DOD's mental health deployment standards may not have complete medical information.

At the installations GAO visited where the post-deployment health assessment (PDHA) is conducted, health care providers manually track whether servicemembers who receive mental health referrals from the PDHA make or complete appointments with mental health providers. Because health care providers conducting the PDHA and making referrals from the PDHA may not have an ongoing relationship with referred servicemembers, health care providers responsible for tracking referrals at these installations have developed manual systems to track servicemembers to ensure that they made or kept their appointments for evaluations. Tracking is more challenging for Guard and Reserve units because their servicemembers generally receive civilian care. Guard and Reserve units do not know if servicemembers used civilian care to complete their PDHA referrals unless disclosed by the servicemembers, which they may be reluctant to do because of stigma concerns.

DOD is addressing the TBI requirement by implementing screening for mild TBI in its PDHA and prior to deployment. DOD has also provided guidance and training for health care providers. In January 2008, DOD added TBI screening to the PDHA, and it plans to require screening of all servicemembers for mild TBI prior to deployment beginning in July 2008. The TBI screening questions on the PDHA assess the servicemember's exposure to events that may have increased the risk of a TBI and the servicemember's symptoms. The TBI screening questions to be used prior to deployment are similar to those on the PDHA. Prior to DOD's screening efforts, several installations had been screening servicemembers for mild TBI before or after deployment. An official from the Defense and Veterans Brain Injury Center told GAO that these initiatives would probably be replaced by the DOD-wide screening.
Hospitals’ budgets for medical devices and other goods are substantial. Many hospitals buy medical devices and other supplies through GPOs, which are generally owned by member hospitals and vary in size and scope of services. GPOs are expected to use volume purchasing as leverage in negotiating prices with vendors. In exchange for administrative services and the ability to sell through a GPO to its member hospitals, vendors pay administrative fees to a GPO based on the hospitals’ purchases made using that GPO’s contract. These fees, sanctioned under Medicare law, cover the GPO’s costs; GPOs often distribute surplus fees to their owners. Federal antitrust guidelines help a GPO determine whether its business practices and market share are likely to be questioned as anticompetitive by enforcement agencies.

According to an American Hospital Association (AHA) survey, roughly 4,900 nonfederal community hospitals spent an estimated $173 billion on nonlabor supplies, services, and capital in 2000. A significant share of hospitals’ nonlabor costs consists of such goods as pharmaceuticals and medical devices. Hospitals buy these goods through their own purchasing departments, and many hospitals—in addition to contracting on their own with vendors—use GPO-negotiated contracts for at least some of their purchasing. Some hospitals have larger or more sophisticated purchasing operations than others, but even hospitals belonging to large chains or health systems often do at least some purchasing through a GPO. The proportion of hospitals belonging to at least one GPO is substantial: estimates range from 68 percent to 98 percent. Medical devices that hospitals buy span a wide array of products, such as pacemakers, implantable defibrillators, and infusion pumps. Some device manufacturers are small companies that offer one product or a few closely related products, while others are large firms that offer many, often unrelated, products. The Medical Device Manufacturers Association estimates that some devices become obsolete within 2 to 3 years—when the next generation of a particular device becomes available. Manufacturers market medical devices in medical journals and at trade shows but place considerable value on having access to clinicians in hospitals as well as to hospital purchasing departments, which make the final buying decisions.

According to the Health Industry Group Purchasing Association, hundreds of GPOs operate today, but only about 30 negotiate sizeable contracts on behalf of their members. The emergence of these large GPOs stems in part from GPO mergers in the mid-1990s. Joint ventures and mergers created the two largest GPOs, Novation and Premier, whose member facilities make annual purchases using their contracts of $17.6 billion and $14 billion, respectively. The other GPOs in our pilot study each have less than $6 billion in annual purchases by member facilities. (See appendix I for purchasing volumes of GPOs in our pilot study.) In addition to differences in size, GPOs differ in scope. Some negotiate national contracts and offer many services beyond purchasing, such as programs emphasizing the gains in safety and economic value resulting from standardization, or specialized software to help ensure that hospitals are not overcharged. Others serve regional or local hospital markets and provide fewer additional services. GPOs differ in their corporate structures and their relationships with member hospitals.
All large GPOs and many smaller GPOs are for-profit entities, some of which are owned by not-for-profit hospitals. Other GPOs have shareholders independent of the member hospitals, which themselves do not necessarily hold an ownership stake. An example of a for-profit GPO owned by not-for-profit hospitals is Premier, which is owned by 203 not-for-profit health care organizations that operate approximately 900 hospitals. Other for-profit GPOs are owned by investors that are not member hospitals; for example, InSource is owned by MedAssets, a private purchasing and contract services company. Broadlane’s owners consist of individual investors as well as for-profit and not-for-profit organizations, including Tenet Healthcare, a nationwide provider of health care services. Some GPOs are jointly owned. For example, both Novation and Healthcare Purchasing Partners International (HPPI) are owned by the same two networks of hospitals and physicians. Network members purchase using Novation contracts, while non-network members purchase using HPPI contracts, which are negotiated by Novation. Some GPOs, such as HealthTrust, require that members not belong to other GPOs. In addition, some GPOs, such as Novation and Amerinet, contract with manufacturers to supply products sold under the GPO’s own “private-label” brand name. (See appendix I for a summary of characteristics of GPOs in our pilot.)

According to officials of GPOs and a GPO trade organization, the benefits that GPOs provide to member hospitals include, in addition to lower prices, reduced costs due to hospitals being able to reduce the size of their purchasing departments, as well as assistance with product-comparison analysis and standardization of products. Benefits that GPOs say they provide to the manufacturers with which they contract include, in addition to access to hospital decisionmakers, cost savings from reduced contracting, marketing, and sales activities. According to representatives of some manufacturers, many GPOs act as gatekeepers to hospital purchasing decisionmakers and charge the manufacturers administrative fees as the price of access to their member hospitals.

In order to sell to hospitals through GPO contracts, vendors generally submit proposals to a GPO—in response to Requests for Proposals (RFPs)—that are then evaluated. Based on these evaluations, the GPO enters into negotiations with select vendors to determine prices and, in some cases, the administrative fees that vendors pay to the GPO. Hospitals then buy directly from the manufacturer for a price specified in a GPO contract. Often, prices under a GPO-negotiated contract vary based on each hospital’s volume of purchases and the extent to which the member hospital delivers on its “commitment” to buy an agreed-upon share of its purchases of a certain product from a particular manufacturer. The more of a product that a hospital purchases, the lower the price per unit it may pay the manufacturer. A hospital’s price may also vary depending upon the share of a product it purchases from a manufacturer. For example, a hospital that buys only 25 percent of its cardiac stents from one manufacturer may pay nearly three times more per stent than one that purchases all its stents from that manufacturer. Member hospitals may have an additional financial incentive to use the GPO contract: the extent to which a hospital buys using the GPO’s contracts may affect the share of the administrative fees that the GPO returns to the hospital.
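The commitment-based pricing described above can be illustrated with a hypothetical tier table keyed to the cardiac stent example, in which a buyer committing only a 25-percent share pays nearly three times the fully committed price. All prices and tier breakpoints below are invented for illustration; only the roughly threefold spread comes from the text.

```python
# Illustrative sketch of commitment-tier pricing; the tier table is hypothetical.

PRICE_TIERS = [            # (minimum committed share, price per stent)
    (1.00, 1000),
    (0.75, 1800),
    (0.50, 2400),
    (0.25, 2900),          # ~2.9x the fully committed price
]

def contract_price(committed_share):
    for min_share, price in PRICE_TIERS:   # tiers listed highest share first
        if committed_share >= min_share:
            return price
    return None                            # below the lowest tier: no contract price

print(contract_price(1.00))   # 1000
print(contract_price(0.25))   # 2900
```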
Although GPOs provide services to hospitals and are often organized by hospitals, many finance their operations primarily through the administrative fees paid by manufacturers and other vendors. These fees are typically calculated as a percentage of each hospital’s purchases from a vendor. The Social Security Act, as amended in 1986, allows these fees, which would otherwise be considered “kickbacks” or other illegal payments to the GPO. Regulations establishing appropriate administrative fees, enforced by the Office of Inspector General in the Department of Health and Human Services, state that the fee structure must be disclosed in an agreement between the GPO and each participating member. The agreement must state that fees are to be 3 percent or less of the purchase price or, if not fixed at 3 percent or less, the amount or maximum amount that each vendor will pay. The GPO must also disclose in writing to each member, at least annually, the amount received from each vendor with respect to purchases made by or on behalf of the member. The fees tend to be higher on purchases by hospitals that buy most or all of an item from one vendor. In addition to covering their operating expenses with these fees, GPOs, with the approval of their boards of directors, often distribute surplus fees to member hospitals but may also use administrative fees to finance new ventures, such as electronic commerce, that are outside their core business. (See fig. 1.)
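The administrative-fee arithmetic described above is straightforward: the fee is a percentage of a hospital’s purchases from a vendor, and a rate above 3 percent must be specifically disclosed in the GPO-member agreement. The sketch below illustrates that rule with invented values; it is not a statement of the regulation’s full requirements.

```python
# Illustrative sketch of the administrative-fee rule; values are hypothetical.

def admin_fee(purchases, fee_rate, rate_disclosed_in_agreement=False):
    """Fee = purchases x rate; rates above 3 percent require disclosure."""
    if fee_rate > 0.03 and not rate_disclosed_in_agreement:
        raise ValueError("rates above 3 percent must be disclosed in the agreement")
    return purchases * fee_rate

print(admin_fee(1_000_000, 0.03))          # 30000.0
print(admin_fee(1_000_000, 0.05, True))    # 50000.0, permissible only if disclosed
```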
The complex financial flows among vendors, GPOs, and hospitals have raised concerns that GPOs’ interests may diverge from those of hospitals. According to some small manufacturers, GPOs have an incentive not to seek the lowest price because higher prices yield higher administrative fees. These manufacturers further suggest that GPOs, by relying on vendors’ fees, become agents of manufacturers and assist them in limiting competition. By contrast, according to some GPOs, they act as an extension of hospitals, and GPO members have input into the GPOs’ product selections. GPOs acknowledge that a manufacturer dominant in a product line may contract with a GPO, or agree to a favorable contract, to preserve its market share and exclude competitors. However, GPOs assert that this selective contracting is part of a competitive process allowing the GPO to negotiate lower prices. GPOs also emphasize that participation in a GPO is voluntary, so the GPO must reflect what the hospitals want if it is to retain their business.

Recognizing that joint purchasing arrangements among hospitals may enable members to achieve efficiencies that will benefit consumers but may, in some cases, pose risks of harming consumers by reducing competition, DOJ and the Federal Trade Commission (FTC) issued in 1993 a guideline to help GPOs and others gauge whether a particular GPO arrangement is likely to raise antitrust problems. This guideline sets forth an “antitrust safety zone” for GPOs that meet a two-part test, under which the agencies, absent extraordinary circumstances, will not challenge the arrangement as anticompetitive. Essentially, the two-part test is as follows: 1. Purchases through a GPO must account for less than 35 percent of the total sales of the product or service in question (such as pacemakers) in the relevant market. This part of the test addresses whether the GPO accounts for such a large share of the purchases of the product or service that it can effectively exercise increased market power as a buyer. If the GPO’s buying power drives the price of the product or service below competitive levels, consumers could be harmed if suppliers respond by reducing output, quality, or innovation. 2. The cost of purchases through a GPO by each member hospital that competes with other members must amount to less than 20 percent of each hospital’s total revenues. This second part of the test looks at whether the GPO purchases constitute such a large share of the revenues of competing member hospitals that they could result in standardizing the hospitals’ costs enough to make it easier to fix or coordinate prices. However, the guideline states that a purchasing arrangement is not necessarily in violation of the antitrust laws simply because it falls outside the safety zone. Likewise, the guideline suggests that even a purchasing arrangement that falls within the safety zone might still raise antitrust concerns under “extraordinary circumstances.” Each arrangement has to be examined according to its particular facts. In this regard, the guideline also describes factors that reduce antitrust concerns with purchasing arrangements that fall outside the safety zone.
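The two-part safety zone test lends itself to a direct illustration. The sketch below applies the 35-percent and 20-percent thresholds to hypothetical inputs; it deliberately ignores the guideline’s caveats about extraordinary circumstances and case-by-case review.

```python
# Illustrative sketch of the DOJ/FTC two-part safety zone test; inputs invented.

def in_safety_zone(gpo_product_purchases, market_total_sales,
                   competing_member_gpo_costs_and_revenues):
    # Part 1: GPO purchases under 35 percent of total market sales of the product.
    part1 = gpo_product_purchases / market_total_sales < 0.35
    # Part 2: for each competing member, GPO purchases under 20 percent of revenues.
    part2 = all(gpo_cost / revenue < 0.20
                for gpo_cost, revenue in competing_member_gpo_costs_and_revenues)
    return part1 and part2

members = [(15_000_000, 100_000_000), (30_000_000, 200_000_000)]
print(in_safety_zone(40_000_000, 150_000_000, members))
# True: 26.7 percent of the market, and 15 percent of each member's revenues
```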
GPOs did not always obtain better prices for member hospitals. The advantage or disadvantage of GPO prices varied by the model purchased and the size of the hospital—but lacked a clear relationship to the size of the GPO. In our pilot study, we compared median GPO and median non-GPO prices for purchases by hospitals and found the following: Among hospitals of all sizes, hospitals using GPO-negotiated contracts to buy pacemakers and safety needles often paid more than hospitals negotiating on their own. This finding also held for hospitals using large GPOs, compared to hospitals negotiating on their own. Between hospitals of different sizes, small and medium-sized hospitals buying pacemakers were more likely than large hospitals to save money when using GPO-negotiated contracts. We also compared prices between large GPOs and smaller GPOs: hospitals of all sizes using a large GPO’s contracts almost always saved money on safety needles but often paid more for pacemakers, compared to those using smaller GPOs’ contracts, although large GPOs would be expected to achieve price savings consistently. In all these comparisons, the price savings or additional cost that hospitals realized—for example, by using a GPO or by negotiating on their own—often varied widely from model to model.

Purchasing with GPO contracts did not ensure that hospitals saved money. Among hospitals of all sizes in our study market, those using GPO-negotiated contracts for pacemakers and safety needles often paid more than those negotiating on their own. The median GPO-negotiated price was higher than the median price hospitals paid on their own for all six safety needle models and over three-fifths of the 41 pacemaker models that could be compared. Similarly, the use of a large GPO—one with an annual purchase volume greater than $6 billion—did not guarantee price savings. Hospitals using contracts negotiated by a large GPO paid more than hospitals purchasing on their own for the six safety needle models and roughly half of the 22 pacemaker models that could be compared. The price savings or additional costs that hospitals obtained using GPO-negotiated contracts varied by model. For different safety needle models, median GPO-negotiated prices exceeded prices negotiated by a hospital buying on its own by 1 percent to 5 percent. For different pacemaker models, the variation was much greater: median GPO-negotiated prices ranged from 26 percent less to 39 percent more than the median price paid by hospitals purchasing on their own. (See fig. 2.)

We examined how hospitals of different sizes using GPOs fared relative to their peers purchasing pacemakers on their own and found that whether there were savings depended on the size of the hospital. The 4 small hospitals (those with fewer than 200 beds) always did better with a GPO contract. The 11 medium-sized hospitals (those with 200 to 499 beds) did better with a GPO contract for 40 percent of the models (see fig. 3), and the 3 large hospitals rarely did better with a GPO contract—compared with their respective peers purchasing on their own (see fig. 4). Even though small hospitals buying on their own generally paid higher prices than the small hospitals using GPOs, the GPO-negotiated price was not much lower—by 1 to 6 percent—than what they paid on their own. As figures 3 and 4 show, the range of price savings or additional costs associated with GPO contracts was considerable. For example, for medium-sized hospitals, the median GPO-negotiated price was 39 percent lower for model 1 and 25 percent higher for model 25 than the median price paid by these hospitals purchasing on their own.

The size of a GPO was not related consistently to whether a hospital, when using a GPO contract, obtained a better price. Whether use of large GPOs offered price savings varied by type of device: for safety needles, they were more likely to obtain better prices, and for pacemakers, they were less likely to do so. Specifically, the median price paid by hospitals using a large GPO’s contract to purchase safety needles was nearly always lower—for 18 of the 19 types of needles we could compare—than the median price paid by hospitals using a smaller GPO’s contract. For pacemakers, a large GPO’s contract infrequently yielded better prices than smaller GPOs’ contracts—for only 5 of the 18 pacemakers we could compare. In this case, the higher prices associated with most of these pacemaker purchases run counter to the expectation that large GPOs yield substantial price advantages. (See fig. 5.) Figure 5 shows that, as with the previous comparisons, the range of price savings or additional costs associated with large GPOs was wide. For hospitals using large GPOs’ contracts to buy pacemakers, the median price paid ranged from 20 percent less for one model to 26 percent more for another, compared with the median price paid by hospitals using smaller GPOs’ contracts.
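The comparisons above turn on percent differences between median prices. The following sketch shows the basic computation with hypothetical prices; beyond the median comparison described in the text, it makes no claim about the pilot study’s actual methodology.

```python
# Illustrative median price comparison for one device model; prices invented.
from statistics import median

def pct_difference(gpo_prices, own_prices):
    """Positive result: the median GPO-negotiated price was higher."""
    gpo_med, own_med = median(gpo_prices), median(own_prices)
    return 100 * (gpo_med - own_med) / own_med

print(round(pct_difference([4100, 4300, 4500], [4000, 4200, 4400]), 1))  # 2.4
```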
Regardless of whether a GPO contract was used, hospitals bought pacemakers and safety needles predominantly from large manufacturers. In our study, 5 of the 16 manufacturers from which hospitals purchased were small; however, purchases from these 5 represented a small minority of the models bought (1 of 121 pacemaker models and 22 of 196 safety needle models). Almost all purchases from small manufacturers in our pilot were made by hospitals buying on their own; only one hospital purchased from a small manufacturer using a GPO contract. We could not determine the extent to which hospitals’ reliance on large manufacturers of these two devices reflected hospital preference or the effects of GPOs’ contracting practices, because almost all hospitals in our sample belonged to GPOs. Representatives from small manufacturers whom we interviewed stated that some incentives in GPO contracts penalize hospitals purchasing off-contract. However, hospital personnel whom we interviewed emphasized different factors as influencing their purchasing decisions, including clinical considerations for pacemakers and cost for safety needles. Seventy-one percent of hospitals purchased a pacemaker and 15 percent a safety needle outside of their GPO contracts.

While this is a pilot study based on one market, the data raise questions about one of the intended benefits from having large GPOs. In our study market, GPOs of different sizes realized comparable savings for member hospitals. Buying through a large GPO did not guarantee a hospital the lowest prices. In fact, there were several instances in which individual hospitals using a large GPO’s contracts paid prices that were at least 25 percent higher than prices negotiated by hospitals on their own, and smaller GPOs also sometimes offered better prices. Clearly, more evidence on GPOs and their effects is needed, since our data pertain to one urban market, two types of medical devices, eight GPOs, and 18 hospitals. To assist the Subcommittee, we plan to obtain data from a broader array of geographic areas and for other devices, hospitals, and GPOs. Gathering additional information on GPOs’ benefits and possible drawbacks could inform an examination of antitrust policy toward GPOs.

For more information regarding this statement, please contact Janet Heinrich at (202) 512-7114 or Jon Ratner at (202) 512-7107. JoAnne R. Bailey, Hannah F. Fein, Kelly L. Klemstine, and Michael L. Rose made key contributions to this statement. The information in this appendix illustrates how GPOs in our study market vary in size, ownership structure, and profit status. The appendix contains information obtained both from GPO Web sites during April 2002 and through telephone interviews. We did not independently verify the information in this appendix. (See table 1.)
This testimony discusses group purchasing organizations (GPO) for medical devices and supplies used in hospitals. By pooling the purchases of their member hospitals, these specialized firms negotiate lower prices from vendors. GAO found that a hospital's use of a GPO contract did not guarantee that the hospital saved money: GPOs' prices were not always lower and were often higher than prices paid by hospitals negotiating directly with vendors. GAO studied price savings with respect to (1) whether hospitals using GPO contracts received better prices than hospitals that did their own contracting, (2) the size of the hospital, and (3) the size of the GPO. These data raise questions about whether GPOs, especially large GPOs, achieve consistent price savings.
About 19,600 communities have joined the flood insurance program. Under the program, flood insurance rate maps (FIRMs) were prepared to identify special flood hazard areas. In order for a community to join the program, any structures built within a special flood hazard area after the community’s FIRM was completed were required to be built according to the program’s building standards, which are aimed at minimizing flood losses. Special flood hazard areas, also known as 100-year floodplains, are areas subject to a 1-percent or greater chance of flooding in a given year. A key component of the program’s building standards that must be followed by communities participating in the program is a requirement that the lowest floor of a structure be elevated to or above the base flood level—the elevation at which there is a 1-percent chance of flooding in a given year. To encourage communities to join the program, thereby promoting floodplain management and the widespread purchasing of flood insurance, the Congress authorized FEMA to make subsidized flood insurance rates available to owners of structures built before a community’s FIRM was prepared. These pre-FIRM structures are generally more flood-prone than later built structures because they were not built according to the program’s building standards. Owners of post-FIRM structures pay actuarial rates for national flood insurance. The average annual premium for a subsidized policy is currently $610, and the average annual premium for an actuarial policy is currently $310. The higher average premium for a subsidized policy reflects the significantly greater risk of flood-prone pre-FIRM properties. The $610 average annual premium for a subsidized policy represents about 38 percent of the true risk premium for these properties.

From 1968 until the adoption of the Flood Disaster Protection Act of 1973, the purchase of flood insurance was voluntary. The 1973 act required the mandatory purchase of flood insurance to cover structures in special flood hazard areas of communities participating in the program if (1) any federal loans or grants were used to acquire or build the structures and (2) the loans were secured by improved properties and were made by lending institutions regulated by the federal government. The owners of properties with no mortgages or properties with mortgages held by unregulated lenders were not, and still are not, required to buy flood insurance, even if the properties are in special flood hazard areas. The National Flood Insurance Reform Act of 1994 reinforces the objective of using insurance as the preferred mechanism for disaster assistance by (1) expanding the role of federal agency lenders and regulators in enforcing the mandatory flood insurance purchase requirements and (2) prohibiting further flood disaster assistance for any property where flood insurance is not maintained, even though flood insurance was mandated as a condition for receiving disaster assistance. Regarding the prohibition on further flood disaster assistance, the act requires that borrowers who have received certain disaster assistance and then failed to obtain flood coverage be barred from receiving future disaster aid. Other forms of flood disaster assistance include low-interest loans from the Small Business Administration to flood victims who are creditworthy.
In addition, a flood victim who cannot obtain a Small Business Administration loan may apply for an individual and family FEMA grant of up to $14,400 or the amount of the loss, whichever is less.

Annual operating losses or net revenues from the National Flood Insurance Program's operations have varied significantly from year to year. While revenues exceeded program costs in some years, cumulative program costs exceeded income by about $843 million during the period October 1, 1992, through September 30, 2000. As seen in figure 1, during the 8-year period from fiscal years 1993 through 2000, the program incurred operating losses in 5 of these years and experienced net income in the 3 remaining years.

During fiscal years 1993 through 1998, the first 6 years of the 8-year period, the flood insurance program generally experienced operating losses because losses from flood claims exceeded the premium income collected from the program's policyholders. The program's annual losses during this period ranged from about $600,000 in fiscal year 1998 to $602 million in fiscal year 1993, and its cumulative operating losses totaled about $1.56 billion. To help finance these losses, the Administration borrowed from the U.S. Treasury during the 6-year period. According to FEMA, as of August 31, 1999, the debt owed by the program to the U.S. Treasury totaled $541 million.

Since fiscal year 1995, the program's annual losses have gradually declined, and in fiscal years 1999 and 2000 program revenues exceeded program costs by a total of about $720 million. As a result, the Administration was able to repay its debt to the U.S. Treasury, and, as of June 30, 2001, the program owes no debt to the U.S. Treasury. The program's financial improvement since fiscal year 1995 was primarily due to three factors. First, claims and related expenses declined. Second, the number of policyholders covered by the program increased about 31 percent, from 3.3 million policies in force in fiscal year 1995 to 4.3 million policies in force by fiscal year 2000; accordingly, earned premium revenue on these policies increased during the period. Third, according to Administration officials, the proportion of generally more flood-prone pre-FIRM subsidized policies insured by the program has declined, resulting in a less risky portfolio of policies in force. The percentage of program policies that are subsidized has declined over time as newer properties have joined the program and are charged actuarial rates: while 41 percent of the 2.7 million policies in force in fiscal year 1993 were subsidized, 30 percent of the 4.3 million policies in force in fiscal year 2000 were subsidized, according to an Administration official.

While the program incurred operating losses during the 8-year period, the value of the program in reducing federal expenditures on disaster assistance should not be measured by net federal expenditures alone. For example, the Administration estimated that the program's standards for new construction are now saving about $1 billion annually in flood damage avoided. Also, from October 1, 1968, through September 30, 2000, the program paid about $10 billion in insurance claims, primarily from policyholder premiums that otherwise would, to some extent, have increased taxpayer-funded disaster relief.
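Two figures above lend themselves to a quick arithmetic check: the 1-percent annual flood chance that defines a special flood hazard area, and the program's cumulative operating result for fiscal years 1993 through 2000. A minimal sketch in Python follows; the 30-year horizon is an illustrative assumption (roughly a mortgage term), not a figure from this testimony.

```python
# Illustrative arithmetic for the figures discussed above.

# A special flood hazard area has a 1-percent or greater chance of
# flooding in any given year. Over a 30-year horizon (an assumed,
# illustrative mortgage term), the chance of at least one flood is:
annual_chance = 0.01
years = 30
prob_at_least_one_flood = 1 - (1 - annual_chance) ** years
print(f"Chance of at least one flood in {years} years: "
      f"{prob_at_least_one_flood:.1%}")
# ~26% -- the "100-year floodplain" label understates lifetime exposure.

# Reconciling the program's cumulative operating result, FY1993-2000:
cumulative_losses_fy93_98 = 1.56e9  # operating losses, FY1993-1998
net_income_fy99_00 = 720e6          # net income, FY1999-2000
net_cost = cumulative_losses_fy93_98 - net_income_fy99_00
print(f"Net cumulative cost: ${net_cost / 1e6:.0f} million")
# ~$840 million, consistent with the roughly $843 million cited above.
```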
The program is not actuarially sound because about 30 percent of the 4.3 million policies in force are subsidized, according to an Administration official. For a single-family pre-FIRM property, subsidized rates are available for the first $35,000 of coverage, although any insurance coverage above that amount must be purchased at actuarial rates. Administration officials estimated that total premium income from subsidized policyholders is currently about $500 million less than it would be if these rates had been actuarially based and participation had remained the same.

Pre-FIRM structures that are within an identified 100-year floodplain and are covered by subsidized policies are, on average, not elevated as high relative to the base flood level as post-FIRM structures. Administration officials told us that, on average, pre-FIRM structures not built to the program's standards are three and a half to four times more likely to suffer a flood loss. When these structures suffer a loss, the damage sustained is, on average, about 40 percent greater than the damage to flooded post-FIRM structures. According to the Administration, when these two factors are combined, pre-FIRM structures suffer, on average, about five times more damage than post-FIRM structures (multiplying 3.5 to 4 times the likelihood of a loss by about 1.4 times the damage per loss yields roughly 5 times the expected damage).

As an alternative to actuarial soundness, the Administration developed a financial goal for the program: to collect sufficient revenues to at least meet the expected losses and expenses of the average historical loss year, as well as to cover all non-loss-related program expenses, such as the program's administration. However, the average historical loss year is based only on the program's experience since 1978. Since then, no catastrophic year ($5.5 billion to $6 billion in claims losses) has occurred, and many years in the 1980s were characterized by fairly low actual loss levels compared with the historical average losses experienced in other years. Therefore, the historical average loss year involves fewer losses from claims than the expected annual claims losses in future years. As a result, collecting premiums to meet the historical average loss year does not generate the collections necessary to build reserves for potential catastrophic years in the future. For the program to be actuarially sound, its rate-setting process would have to consider the monetary risk exposure of the program, or the dollar value of expected flood losses over the long run. Since the magnitude of flood damage varies considerably from year to year, income from premiums in many years would exceed actual losses, enabling the program to build reserves toward a possible catastrophic year in the future.

As we reported in March 1994, increasing the premiums charged to subsidized policyholders (thereby decreasing the subsidy) to improve the program's financial health could have an adverse impact on other federal disaster-related relief costs. Increasing the rates of subsidized policyholders would likely cause some policyholders to cancel their flood insurance, and, if flooded in the future, these people might apply for Small Business Administration loans or FEMA disaster assistance grants. Because they were built before the program's building standards became applicable, pre-FIRM structures are generally not as elevated as post-FIRM structures, and, if their owners were charged true actuarial rates, those rates would be much higher than current subsidized rates.
For example, if the subsidy on pre-FIRM structures were eliminated, insurance rates on currently subsidized policies would need to rise, on average, a little more than twofold, according to an Administration official. This increase would result in an annual average premium of about $1,300 for these pre-FIRM structures. Significant rate increases for subsidized policies, including charging actuarial rates, would likely cause some pre-FIRM property owners to cancel their flood insurance. If owners of pre-FIRM structures, which suffer the greatest flood losses, canceled their insurance policies, the federal government would likely face increased costs from future floods in the form of low-interest loans from the Small Business Administration or grants from FEMA. The effect of phasing out subsidized rates on total federal disaster assistance costs would depend on how many of the program's current policyholders canceled their policies. Thus, it is difficult to estimate whether the increased costs of other federal disaster relief programs would be less than, or more than, the cost of the program's current subsidy.

On the other hand, expanding participation in the program, by increasing the rate of compliance with the mandatory purchase requirement or by extending the mandatory purchase requirement to property owners not now covered, will likely increase the number of both subsidized and unsubsidized policies. Although greater participation in the program is likely to reduce the cost of FEMA grants and Small Business Administration loans, the resulting increase in subsidized policyholders will put greater financial stress on the flood insurance program because the premiums received from subsidized policyholders are not sufficient to meet the estimated future losses on these policies.

Repetitive loss properties have a disproportionate impact on the National Flood Insurance Program, according to FEMA's fiscal year 2000 performance report. About 38 percent of all program claims historically (currently about $200 million annually) represent repetitive losses, even though repetitive loss structures make up a small percentage of all program policies. About 45,000 buildings currently insured under the program have been flooded on more than one occasion and have received flood insurance claims payments of $1,000 or more for each loss. Over the years, these multiple-loss properties have cost the program about $3.8 billion. A 1998 study by the National Wildlife Federation noted that repetitive loss properties represent only 2 percent of all properties insured by the program, but they tend to have damage claims that exceed the value of the house, and most are concentrated in special flood hazard areas. For example, nearly one out of every ten repetitive loss homes has had cumulative flood loss claims that exceeded the value of the house. Furthermore, over half of all nationwide repetitive loss property insurance payments have been made in Louisiana and Texas, and about 15 states account for 90 percent of the total payments made for repetitive loss properties.

We, as well as FEMA's Office of Inspector General, have identified improving the financial condition of the National Flood Insurance Program as one of FEMA's major management challenges. In our July report on FEMA's performance under the Government Performance and Results Act, we outlined FEMA's accomplishments and plans to reduce the losses it sustains from repetitive loss properties.
Among other things, FEMA has actions or plans under way aimed at (1) identifying target repetitive loss properties and transferring their servicing to a special servicing facility designed to better oversee claims and to coordinate and facilitate insurance and mitigation actions and (2) developing and implementing proposals to reduce the subsidy provided to pre-FIRM repetitive loss properties. In fiscal year 2000, FEMA implemented a repetitive loss initiative to target the 10,000 worst repetitive loss properties, that is, currently insured properties that had four or more losses, or two to three losses where the cumulative flood insurance claims payments exceeded the building's value. According to FEMA, the initiative is designed to eliminate or short-circuit the cycle of flooding and rebuilding for properties suffering multiple losses due to flooding. The initiative includes identifying repetitive loss properties and transferring their insurance policies to a central, special servicing facility designed to better oversee claims. FEMA believes that this special servicing will help coordinate insurance activities and mitigation grant programs. FEMA reported that it had identified repetitive loss properties and would make this information available to state and local governments to help them target repetitive loss properties for mitigation actions. FEMA also reported that it planned to mitigate 1,938 target properties over the next 4 years.

In addition, in its fiscal year 2002 annual performance plan, FEMA outlined several strategies to reduce the subsidy provided to repetitive loss properties, as well as several business process improvement actions to reduce the program's costs. FEMA stated it would use Flood Mitigation Assistance funds and Hazard Mitigation Grant Program funds in conjunction with flood insurance program funds to acquire properties, relocate residents, or otherwise mitigate future losses. FEMA also plans to provide incentives to communities to reduce repetitive flood losses. In its fiscal year 2002 budget proposal, FEMA requested the transfer of $20 million in fees from the National Flood Insurance Program to increase the number of buyouts of properties that suffer repetitive losses. The budget proposal also includes two major reforms to the flood insurance program: FEMA proposes to terminate flood insurance coverage for the worst offending repetitive loss properties and to eliminate subsidized premiums for vacation homes, rental properties, and other nonprimary residences that have experienced repetitive losses. FEMA estimates these two reforms will generate savings of about $12 million in fiscal year 2002 and additional funds in subsequent years.

- - - - -

In closing, Madam Chairwoman, the Administration is helping the nation avoid the costs of flood damage through the premiums it collects from, and the claim payments it makes to, program policyholders, as well as through the building standards it has promoted for new construction that minimize flood damage. However, at times, heavy flooding has produced annual flood insurance losses that exceeded the premiums collected from policyholders. As a result, the program has had to borrow funds from the U.S. Treasury to cover its operating losses, which it subsequently repaid. Two major factors underlie these financial difficulties—the program, by design, is not actuarially sound, and it experiences repetitive losses.
These factors are not easy to overcome because they have been an integral part of the program since its inception, and they are related to the promotion of floodplain management and the widespread purchasing of flood insurance.

Madam Chairwoman, this completes our prepared statement. We would be happy to respond to any questions that you or Members of the Subcommittee might have.

For further information on this testimony, please contact Mr. Stanley Czerwinski at (202) 512-2834. Mark Abraham, Martha Chow, Kerry Hawranek, Signora May, Lisa Moore, and Robert Procaccini made key contributions to this testimony.

Federal Emergency Management Agency: Status of Achieving Key Outcomes and Addressing Major Management Challenges (GAO-01-832, July 9, 2001).
Flood Insurance: Emerging Opportunity to Better Measure Certain Results of the National Flood Insurance Program (GAO-01-736T, May 16, 2001).
Disaster Assistance: Issues Related to the Development of FEMA's Insurance Requirements (GAO/GGD/OGC-00-62, Feb. 25, 2000).
Flood Insurance: Information on Financial Aspects of the National Flood Insurance Program (GAO/T-RCED-00-23, Oct. 27, 1999).
Flood Insurance: Information on Financial Aspects of the National Flood Insurance Program (GAO/T-RCED-99-280, Aug. 25, 1999).
Disaster Assistance: Opportunities to Improve Cost-Effectiveness Determinations for Mitigation Grants (GAO/RCED-99-236, Aug. 4, 1999).
Disaster Assistance: FEMA Can Improve Its Cost-Effectiveness Determinations for Mitigation Grants (GAO/T-RCED-99-274, Aug. 4, 1999).
Disaster Assistance: Improvements Needed in Determining Eligibility for Public Assistance (GAO/RCED-96-113, May 23, 1996).
Flood Insurance: Financial Resources May Not Be Sufficient to Meet Future Expected Losses (GAO/RCED-94-80, Mar. 21, 1994).
Floods have been, and continue to be, the most destructive natural hazard in terms of economic loss to the nation, according to the Federal Emergency Management Agency. From fiscal years 1969 through 2000, the National Flood Insurance Program--a major federal effort to provide flood disaster assistance--paid about $10 billion in insurance claims, primarily from premiums collected from program policyholders. This testimony discusses (1) the financial results of the program's operations since fiscal year 1993, (2) the actuarial soundness of the program, and (3) the impact of repetitive losses and FEMA's strategies for reducing those losses.
The EHCY program is the key federal education program targeted to homeless children and youth; in school year 2011-12, more than 1.1 million homeless children and youth were enrolled in our nation's public schools, according to Education data. For purposes of the program, a homeless child or youth is one who lacks a fixed, regular, and adequate nighttime residence. This includes children and youth who: are sharing the housing of others due to loss of housing, economic hardship, or a similar reason (commonly referred to as "doubled-up"); are living in motels, hotels, trailer parks, or camping grounds due to the lack of alternative adequate accommodations; are living in emergency or transitional shelters; are abandoned in hospitals; are awaiting foster care placement; have a primary nighttime residence that is a public or private place not designed for, or ordinarily used as, a regular sleeping accommodation for human beings; are living in cars, parks, public spaces, abandoned buildings, substandard housing, bus or train stations, or similar settings; or are migratory children who qualify as homeless because of living circumstances like those described above. (One way these categories might be encoded is sketched below.)

Education's Office of Student Achievement and School Accountability (SASA)—within the Office of Elementary and Secondary Education—provides EHCY formula grants to states, which must comply with certain requirements. For example, the McKinney-Vento Act requires each state that receives funds to establish an Office of Coordinator for Education of Homeless Children and Youths, with responsibilities that include carrying out the state plan. The state plan describes, among other things, procedures that will be used to identify homeless children and youth, strategies to address challenges such as enrollment delays, and a demonstration of the state's efforts to review and revise policies to remove barriers to the enrollment and retention of homeless children and youth. Among other responsibilities, state educational agencies report data to Education on the educational needs of their homeless students; provide technical assistance to school districts and monitor their compliance with the program; and facilitate collaboration between the state and other service providers that serve homeless children and youth and their families.

States are generally required to award no less than 75 percent of their grant to school districts on a competitive basis. Grants to school districts, awarded for a period of up to 3 years, are to be used for activities that facilitate the enrollment, attendance, and success of homeless children and youth in school, and are to be awarded based on the district's need for assistance and the quality of the application submitted. In determining which districts to fund, states are required to consider certain factors, such as the needs of homeless children and youth enrolled in the school district, the types of services to be provided under the program, and the extent to which the services will be coordinated with other services available to homeless children and youth. Districts are authorized to use these funds to support a range of activities for homeless students, such as tutoring, transportation, and referrals to health care services, as well as to provide professional development for educators and support coordination between schools and other agencies.
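The statutory definition above amounts to a set of classification rules keyed to a student's nighttime residence. The sketch below shows one way a district enrollment system might encode those categories; the category labels and the function are illustrative assumptions, not official program codes or any Education system.

```python
# Illustrative encoding of the McKinney-Vento nighttime-residence
# categories described above. Labels are assumptions for illustration.

HOMELESS_RESIDENCE_CATEGORIES = {
    "doubled_up",              # sharing housing due to loss of housing or hardship
    "motel_hotel_campground",  # lacking alternative adequate accommodations
    "shelter",                 # emergency or transitional shelter
    "abandoned_in_hospital",
    "awaiting_foster_care",
    "unsheltered",             # cars, parks, public spaces, substandard housing
}

def is_mckinney_vento_eligible(nighttime_residence: str) -> bool:
    """Return True if the reported nighttime residence falls in a
    category that lacks a fixed, regular, and adequate residence."""
    return nighttime_residence in HOMELESS_RESIDENCE_CATEGORIES

print(is_mckinney_vento_eligible("doubled_up"))      # True
print(is_mckinney_vento_eligible("permanent_home"))  # False
```

In practice, as the testimony notes, the hard part is eliciting accurate residence information at enrollment (for example, distinguishing doubling up for economic reasons from doubling up by choice), not encoding the rule itself.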
According to Education data, in school year 2011-12, fewer than a quarter of school districts nationwide (3,531 out of 16,064) received EHCY program funds; these districts enrolled 68 percent of the homeless students identified that year. Education also allows states to use a regional approach to award their competitive grants. Through such an approach, according to the National Center for Homeless Education (NCHE), a state may provide funds to established regional educational entities, geographic clusters of school districts defined by the state, clusters self-selected by neighboring school districts, or some combination of these approaches. According to Education's survey of states in school year 2010-11, the most recent survey data available, 16 states reported that they provided funds through an intermediate educational agency or consortia.

Under the McKinney-Vento Act, homeless students may remain in their "school of origin"—the school that the child or youth attended when permanently housed or the school in which the child or youth was last enrolled (42 U.S.C. § 11432(g)(3)(G)). In determining the best interest of a child or youth, a district must, to the extent feasible, keep a homeless child or youth in the school of origin unless doing so is contrary to the wishes of the child or youth's parent or guardian (42 U.S.C. § 11432(g)(3)(A)-(B)). Homeless students may also immediately enroll in the selected school, even without the records, such as proof of residency, typically required, and are to receive services comparable to those offered to other students for which they are eligible, such as transportation, educational services, and free school meals.

Other programs administered by Education and other federal agencies, many of which receive more federal funding than the EHCY program, may also support the needs of homeless children and youth (see table 1 for selected examples). For example, under Title I, Part A of the Elementary and Secondary Education Act of 1965, as amended (ESEA), school districts are required to set aside funds as necessary to provide comparable services for homeless students who do not attend Title I schools. In addition, grantee school districts are required to coordinate with other organizations serving homeless children and youth, including those operating programs funded under the Runaway and Homeless Youth Act. The populations served by these programs vary, and for some programs, eligibility for services does not depend on being homeless. Among the programs that do target homeless populations, some use definitions of homelessness that differ from the one used by the EHCY program.

The McKinney-Vento Act also created the U.S. Interagency Council on Homelessness (USICH), which currently consists of 19 federal cabinet secretaries and agency heads. The Homeless Emergency Assistance and Rapid Transition to Housing (HEARTH) Act of 2009 established as USICH's mission to coordinate the federal response to homelessness and to create a national partnership at every level of government and with the private sector to reduce and end homelessness.

To identify homeless students, officials in a majority of school districts (13 of 20) said their districts systematically requested some information on the housing status of new students using a form during the enrollment process. Two of these districts also requested housing information from every student every year to help identify eligible students whose housing circumstances may have changed following enrollment.
According to the National Center for Homeless Education (NCHE), using a questionnaire at enrollment can help school districts increase their identification of homeless students (see fig. 1). To identify students who were not identified at enrollment or whose housing status changed during the year, homeless liaisons in the 20 school districts we interviewed also relied on referrals from school staff or other service providers (see fig. 2). Officials described warning signs that may prompt a referral, such as: students who provide home addresses of shelters or other students; students who request transportation changes; students with attendance problems or who are tardy; students who fall asleep in class; and students with dropping grades, hunger, or hygiene issues. Officials we interviewed also discussed the importance of training staff to identify homeless students, as identifying currently enrolled students is less systematic than identifying new students at enrollment.

Despite ongoing efforts to identify homeless students through housing surveys and referrals, officials in 8 of the 20 districts we interviewed noted a problem with the under-identification of homeless students. All four state EHCY program coordinators we interviewed acknowledged that school districts face financial disincentives to identifying homeless children and youth because of the cost of services districts must provide. Officials from both grantee and non-grantee districts reported facing significant challenges in identifying all eligible students. Officials in 11 of the 20 districts we interviewed said they identified increasing numbers of homeless children and youth in recent years, and many noted the frequent mobility of some students, which makes it challenging for school staff to keep track of their homeless students.

Officials in 4 of the 20 districts said it was challenging to identify children and youth living doubled-up with others, particularly where living with extended family may be a cultural norm. In such cases, families may not consider themselves to be homeless or may be unaware of their rights to receive services under the program. For example, officials in one district told us that many of the district's immigrant families who work for a local meat processor live with relatives in doubled-up conditions but do not consider themselves homeless. To help clarify program eligibility, one district we reviewed used its enrollment form to ask parents or guardians to check a box when they are "living 'doubled-up' due to economic emergency, not to save money or for cultural preference." In another school district we reviewed, the enrollment form asks parents or guardians to check a box "if the student is homeless or living in temporary/transitional housing," and asks whether the student is an "unaccompanied youth," but does not describe eligible housing situations. As a result, a student who is living doubled-up due to economic circumstances may not be identified, even though this situation represents a majority of homeless students nationwide (see fig. 3).

Nationally, according to Education's survey of grantee school districts in school year 2010-11, homeless liaisons spent a median of nearly 2 hours per week on their responsibilities for the EHCY program. Many liaisons said that they juggled multiple responsibilities, including duties outside of the EHCY program. Officials in 7 of the 20 districts cited their limited availability to provide training and outreach, or the lack of sufficiently trained school personnel, as a challenge to identifying eligible students.
In one state we visited, the state EHCY program coordinator, who surveyed homeless liaisons on the amount of time they spent on the EHCY program, said she has found a link between the amount of time liaisons spend on the program and the number of homeless students they have identified. One liaison said there is confusion around "awaiting foster care placement," a term used to describe some children and youth eligible for the EHCY program; specifically, she told us that she would receive requests for services for children in foster care, rather than for children awaiting foster care placement. Another liaison said that among the district's many military families, the term "transition," sometimes used to describe eligible families, has a meaning unrelated to homelessness.

Officials in 7 of the 20 districts said that stigma around being identified as homeless makes it challenging to identify eligible children and youth. One formerly homeless student we interviewed described her initial identification as very traumatic because she did not want anyone at school to know about her housing status and did not want to be called homeless. She said she would rather have received bad grades in high school than have her teachers know of her situation. Another said that she did not want anyone to know she was homeless but found it difficult to hide when she had to resort to "couch-surfing" in the homes of many different people.

Some officials we spoke with suggested that districts under-identify unaccompanied homeless youth, including lesbian, gay, bisexual, transgender, or questioning (LGBTQ) youth, who tend to be overrepresented in the homeless youth population. Officials in five districts told us they are involved in efforts targeted to LGBTQ youth. For example, an official in one of these districts said the district has been involved in focus groups and has worked with service providers on how to better identify and serve these youth, who may have separated from their families due to their LGBTQ identification.

Officials in 5 of 20 districts told us that some families fear being known to government entities, such as child protective services, police, and immigration services. For example, officials in one district said parents may be afraid that the child protective services agency will remove their children from their custody if they are discovered to be homeless.

Natural disasters, including floods and a hurricane that forced many families from their homes, also presented challenges for some districts we reviewed. To identify the large numbers of students displaced simultaneously, officials in four of the five districts we interviewed about their experiences with natural disasters relied heavily on staff and outside resources (e.g., shelters and community meetings). However, officials said that factors such as families' unwillingness to self-identify and frequent mobility following the disaster made it challenging to identify and serve students. For example, officials in a district affected by heavy flooding said that immediately following the flooding, families moved in with relatives or other families and did not necessarily consider themselves to be homeless; as time passed, however, the district had to identify another wave of families as they were displaced again from where they initially had moved. Under-identification can negatively affect homeless students' ability to access needed services and to succeed in school.
For example, according to a formerly homeless youth we interviewed, prior to being identified as homeless during his final year of school, he had been receiving Fs on his report card. After being identified, he received a bus pass, school supplies, and a laptop—supplies he needed to graduate. Although he faced significant pressures outside of school, according to a school official, and challenges completing his schoolwork, he graduated and was able to obtain employment. Another unaccompanied homeless youth said that after he was identified as homeless, he received assistance with schoolwork and clothing for an internship, and was very happy to be able to stay at the same high school. Some officials we interviewed noted that children and youth not connected to school or other services can be particularly challenging to identify, leaving them most at risk of failing to receive necessary services.

Students experiencing homelessness have diverse needs, ranging from school-related to more basic needs. Nationally, grantee school districts in school year 2010-11 most frequently reported transportation to and from school, family or student preoccupation with survival needs, and frequent mobility from school to school as among homeless students' greatest barriers to school enrollment, attendance, and success, respectively, according to Education's survey of these school districts. Some officials we interviewed cited similar barriers. For example, a high school principal said that the most pressing need for homeless students in his school is a safe place to sleep; students who are worried about where they will sleep at night will not be focused on their studies. Some of the homeless youth we spoke with also indicated that stable housing or food were among their significant needs. Specifically, one unaccompanied homeless youth we interviewed—who has lived with a series of family members since leaving her mother's home—said that while she currently sleeps on the couch in her uncle's house, she worries every day about her next move because she does not know how long she will be allowed to stay.

To help address the needs of homeless students, grantee school districts in school year 2010-11 reported providing a variety of services, according to Education's survey (see fig. 4). Officials we interviewed said their districts provided eligible students with transportation to and from school, as required by the McKinney-Vento Act, through means such as public transit, district buses, taxi cabs, gas assistance, and mileage reimbursements. Officials in 9 of the 20 districts also said they collaborated with neighboring districts to transport homeless students across district boundaries, such as by splitting transportation costs or dividing responsibility for routes to and from school. The Consolidated Appropriations Act, 2014 (Pub. L. No. 113-76, div. H, tit. III, 128 Stat. 5, 388) authorized school districts to use ESEA Title I funds to provide transportation for homeless students to their schools of origin.

Providing transportation can nonetheless be burdensome. For example, officials in one district described transporting a student a long distance to a school that the family had requested; while district officials did not think this was in the best interest of the child, they said they did not think they could refuse to provide transportation on the basis of the distance. Some district officials we spoke with also said coordinating transportation across districts can be difficult. For example, billing and being billed by other districts can be inefficient and problematic.
A district transportation official we interviewed noted the significant amount of administrative work it takes to arrange a student's transportation. For example, she said she could spend an entire day trying to figure out how to free up a bus driver to transport one homeless student to and from school when she also needs to worry about the transportation of another 500 students.

Officials in the districts we interviewed coordinated with other education programs and districts to help address homeless students' needs. For example, some districts used ESEA Title I, Part A funds to provide students with tutoring, uniforms, and funds for class fees, as well as hygiene kits, clothing, and laundry vouchers, among other things. Officials in 7 of the 20 districts said these funds also supported salaries for positions that support homeless students, such as EHCY program staff, social workers, and homeless student advocates. In addition, officials in some districts said they have coordinated with staff from special education programs to help facilitate services for homeless students; with migrant education programs to help identify homeless migrant children; and, in some instances, with preschool programs to serve homeless preschool-aged children. Officials in 5 of the 20 districts said their preschool programs reserved spaces for homeless preschool-aged children or prioritized these children for services, and in four of these districts, officials said the district transported homeless preschoolers to and from school.

Some officials also described coordinating with other districts. For example, officials in three districts said they coordinated with other districts to help homeless students obtain partial credit for work completed elsewhere but said that such efforts could be challenging. In one extraordinary case, an official said he worked for months to obtain a student's transcript from an African refugee camp so that he could eventually verify the student had earned 33 credits, exceeding the 22 necessary to graduate. According to a formerly homeless youth we interviewed, receiving credit for work completed was very important in helping her meet graduation requirements, as she moved across the country twice during high school and attended a total of four high schools.

Officials in 11 of the 20 districts we interviewed cited challenges due to limited staff availability or resources with which to serve homeless students and adequately address their needs. For example, the principal of a K-8 school we visited that has no full-time counselors or support staff said homeless students there have access to about 30 percent of the support services the school would like to offer them. In another high school we visited, officials said there is a severe lack of resources to support homeless students; according to officials, teachers currently fund a "food closet" on nearly every floor of the high school to meet the needs of hungry students.

To expand their ability to serve homeless students' needs, district and school officials also leveraged community resources to provide students with access to additional services and supports. For example, some officials we interviewed said that they referred homeless students or their families to organizations and agencies that provide health and mental health services, as well as to other community organizations, such as food pantries and shelters, for students' pressing needs.
Officials in 6 of the 20 districts reported that they assisted homeless youth in accessing public benefits, such as health care or food assistance. Officials in some districts also said they referred students to other federal programs, such as Head Start, Runaway and Homeless Youth (RHY) programs, and housing programs, for services. In one district we visited, officials said they improved referrals by developing trainings for an interagency committee, comprising various community stakeholders, on the resources and services available to homeless students. However, officials said that a lack of community resources affected their ability to meet student needs. For example, officials in 9 of the 20 school districts we interviewed said the lack of available or affordable housing options made it challenging to address the needs of homeless children and their families.

School districts also played an important role in addressing the needs of students displaced by natural disasters and maintaining their school stability. For example, officials we interviewed in one of the five districts we asked about their experiences with natural disasters said three schools were used as emergency shelters for families in the immediate aftermath. Officials in four districts said they received a great deal of community support through donations—such as mattresses, heaters, food, clothing, and supplies—to help address the significant needs of displaced students and families. However, officials also reported challenges in serving large numbers of newly identified homeless students, sometimes without additional funds for services. For example, one district without EHCY grant funds spent $750,000 to transport displaced students to and from school and to pay tuition to other districts where some students began attending school, according to officials. Maintaining contact with families through their moves was also a challenge for some officials we interviewed. In one district, officials said they did not know how many students had not returned to their school of origin or planned to relocate permanently to where they went after the storm. In addition, officials in another district discussed the need for counseling following the trauma that students faced in fleeing and coping with a natural disaster.

Homelessness Following a Natural Disaster

Three families we interviewed who were displaced from their homes following a hurricane—two of whom moved at least twice in the months that followed—told us of the important role their children's school played in helping to address their many needs. In addition to transportation and school supplies, the school provided homeless families with free before- and after-school care for students, food, gift cards, and clothing—some of which were donated by other organizations, businesses, and school districts within and outside the state. According to school officials, families were also offered counseling services, though meeting students' needs for counseling has been a challenge. Officials described how children faced significant trauma following the storm; for example, one student carried a sibling on her back to escape rising floodwaters.

The federal EHCY program manager collaborates with officials from other Education programs, federal agencies, and states (see table 2).
These collaborative efforts are designed to share information with other programs likely to serve homeless students and to increase awareness about the EHCY program and the rights of, and services available to, homeless students, among other things. In addition, the EHCY program manager provides training and technical assistance to state program coordinators. GAO has previously found that collaboration is essential for increasing the efficiency and effectiveness of federal programs and activities in areas where multiple agencies or programs have similar goals, engage in similar activities, or target similar beneficiaries.

State EHCY program coordinators collaborate with other state agencies, service providers, and school districts to improve services to homeless children and youth. The McKinney-Vento Act requires state EHCY coordinators to facilitate coordination between the state educational agency and other state agencies and to collaborate with school district homeless liaisons, service providers, and community organizations, among others (42 U.S.C. § 11432(f)(6)). According to Education data from its survey of state EHCY program coordinators covering school year 2010-11, 30 state coordinators ranked coordinating with other organizations and agencies to provide and improve services to homeless children and youth among the three activities on which they spend the most time. Thirty-six state coordinators reported that building programmatic linkages among various programs, agencies, or organizations was among the top three collaboration efforts that improved program administration and/or services to homeless children and youth (see fig. 5). The collaborative activities state EHCY program coordinators engage in may vary depending on whether they are collaborating with school districts, other programs, interagency councils, or non-governmental entities (see fig. 6).

State EHCY program collaboration with other state agencies, programs, service providers, and associations has included raising awareness of the program, developing joint products, such as strategic plans, and efforts to better connect students to services. For example, state EHCY program staff in one state told us they collaborate with the state's health department and Children's Health Insurance Program to make sure students have access to immunizations and to help homeless families apply for health insurance for their children. Additionally, they collaborate with the state agriculture department to ensure homeless children have immediate access to free school meals. One state coordinator told us that she has partnered with the 21st Century Community Learning Centers (21st CCLC) program, which allowed the state to increase the number of high schools receiving 21st CCLC funding from 5 to 37 and increased the resources available to serve homeless students. In addition, three state coordinators we spoke with told us that they have worked with universities in their states to, for example, provide information about students experiencing homelessness and the barriers they face to higher education. Officials in one state told us that this collaboration has led over 30 public institutions of higher learning to identify a single point of contact within each school to advocate for homeless students.

However, the federal EHCY program manager told us that limited staff capacity constrains collaboration; for example, he only attends conferences for Title I and other Education programs every few years.
Similarly, while state EHCY program coordinators we spoke with generally felt that there is significant collaboration with other entities, several also told us that staff capacity was a significant barrier to further collaboration. State coordinators are generally responsible for managing the EHCY grant process; monitoring and overseeing implementation of the EHCY program at the school district level; and providing technical assistance and training to school district personnel, including homeless liaisons, sometimes in several hundred school districts. Many state coordinators have responsibilities in addition to those for the EHCY program. According to Education's survey data from school year 2010-11, 23 of the 50 state EHCY program coordinators reported working 30 hours or more per week on EHCY program responsibilities. The same survey found that more than half of the states (27) had one or fewer full-time employees working in the state EHCY program office (see fig. 7). For example, one state coordinator we spoke with was also the coordinator for another education program, which she said takes up about 40 percent of her time.

According to state EHCY program coordinators, resource constraints have prevented additional collaboration efforts. For example, three state coordinators told us they must choose which partners to collaborate with at the expense of others; in one of these states, this has meant limited collaboration with RHY programs. In GAO's previous work, we have emphasized the importance of agencies identifying and leveraging sufficient funding to accomplish their objectives and to initiate or sustain their collaborative efforts, and we have suggested approaches to help them do so.

Federal and state officials also said that differing definitions of who is considered "homeless" under various federal programs were another barrier to collaboration. Because the population eligible for services under each program differs, the populations contained in data collected by one agency will differ from those in data collected by other agencies, making data sharing more difficult. The federal EHCY program manager told us that the lack of a consistent definition across programs administered by Education and HUD has made it challenging to increase data sharing at the local level between school districts and Continuums of Care (CoC). Because a relatively small number of students considered homeless under Education's definition may be eligible for services funded by CoCs, these entities sometimes feel that the additional work necessary to share the data is not worthwhile, according to the EHCY program manager. Nationally, most homeless students are doubled up. Officials from one state told us that because this is the case in their state, school districts want to provide wraparound services that meet the wide variety of homeless youths' needs, but these students may be ineligible for services through HUD programs. However, one school district in one of the states we reviewed has entered into an agreement with the local CoC that allows school district personnel to access and enter data into the CoC's Homeless Management Information System. This partnership allows the school district and CoC partners to track the services homeless students have received, evaluate the need for additional services, and make referrals for those services, among other functions.
A state EHCY program coordinator we spoke with told us that a common definition would allow Education and HUD programs to pool data, increasing collaboration and efficiency. This official said that instead of redundant data collection, resources could be used to provide additional services.

Education has protocols and procedures in place to monitor state EHCY programs. According to Education guidance, monitoring is the regular and systematic examination of a state's administration and implementation of a federal education grant, contract, or cooperative agreement. To hold states accountable for providing a free appropriate public education to homeless students, Education evaluates the degree to which states meet certain standards, called monitoring indicators (see appendix II). These include data on: monitoring of school districts; implementation of procedures to identify, enroll, and retain homeless students by coordinating and collaborating with other program offices and state agencies; provision of technical assistance to school districts; efforts to ensure that school district grant plans for services to eligible homeless students meet all requirements; compliance with statutory and other regulatory requirements governing the reservation of funds for state-level coordination activities; and prompt resolution of disputes. Education's monitoring involves a review of documents, followed by an on-site or videoconference review, and preparation of a final report that includes any compliance findings for which the state must take corrective action (see fig. 8).

Between fiscal years 2007 and 2009, Education's policy was to monitor the 50 states and 3 other areas (i.e., 53 "states") at least once during that 3-year period, and it followed this policy. Starting in fiscal year 2010, Education adopted a risk-based approach to selecting states for monitoring in order to more efficiently target and prioritize limited resources, resulting in longer gaps between monitoring visits in some states. Education conducts an annual risk assessment to evaluate which states have the highest risk of noncompliance, according to the EHCY program manager. The agency weighs the following four risk assessment criteria equally: the state's academic proficiency levels for students experiencing homelessness, the tenure of the state EHCY program coordinator, whether there are multiple or recurring EHCY monitoring findings, and a financial review of the state's EHCY grant expenditures. Education also considers the size of the state's EHCY grant allocation and the length of time since the state was last monitored. Since Education adopted this approach in fiscal year 2010, it has monitored 31 of the 53 states for the EHCY program—28 from October 2009 to September 2012 and 3 from October 2012 to July 2014. Of the 22 remaining states, Education last monitored 7 states in fiscal year 2007, 6 in fiscal year 2008, and 9 in fiscal year 2009 (see fig. 9).
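The equal weighting of the four risk criteria described above can be made concrete with a short sketch. This is an illustrative reading of the selection approach, not Education's actual tool; the 0-to-1 scoring of each criterion and the example values are assumptions.

```python
# Illustrative sketch of an equally weighted risk score over the four
# criteria Education reportedly weighs. The 0-1 scale per criterion
# and the example values are assumptions for illustration.

CRITERIA = (
    "academic_proficiency_risk",  # proficiency levels of homeless students
    "coordinator_tenure_risk",    # tenure of the state EHCY coordinator
    "monitoring_findings_risk",   # multiple or recurring findings
    "financial_review_risk",      # review of EHCY grant expenditures
)

def risk_score(state: dict) -> float:
    """Average the four criteria with equal weights (each scored 0-1,
    where higher means greater assumed risk of noncompliance)."""
    return sum(state[criterion] for criterion in CRITERIA) / len(CRITERIA)

example_state = {  # hypothetical values
    "academic_proficiency_risk": 0.8,
    "coordinator_tenure_risk": 0.3,
    "monitoring_findings_risk": 0.5,
    "financial_review_risk": 0.2,
}
print(f"risk score: {risk_score(example_state):.2f}")  # 0.45
```

Education also reportedly considers the size of a state's grant allocation and the time since its last review; in a sketch like this, those could enter as additional weighted terms or as tie-breakers.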
In the fall of 2012, Education began monitoring states that had received waivers from certain ESEA requirements under an initiative Education calls "ESEA Flexibility," and it has since monitored three states for compliance with the EHCY program. Under Education's ESEA flexibility initiative, 44 of the 53 states currently have waivers from specific requirements of the ESEA, as amended. In these states, Education has conducted regular monitoring, but the monitoring covers only compliance with the ESEA waiver requirements, not compliance with the EHCY program. As a result, the states with ESEA waivers currently have little EHCY program oversight at the federal level.

Education officials cited the shift to a risk-based approach to monitoring the EHCY program, the more recent need to focus on ESEA flexibility waiver monitoring, and a lack of staff capacity as the primary reasons why they have not been able to monitor the states as frequently in recent years as in the past. Education officials said they intend to continue to monitor the EHCY program in the future, but the agency has not determined when or how it will do so. Standards for internal control emphasize the need for federal agencies to establish plans to help ensure that goals and objectives can be met, including compliance with applicable laws and regulations. Absent a plan for future monitoring of grantees, Education cannot be sure that problems will be identified and resolved promptly.

As a result of reducing the number of monitoring visits, Education has been unaware of the compliance status of some states for an extended period of time. According to GAO's Standards for Internal Control in the Federal Government, monitoring is a key management tool that helps agencies assess the quality of performance over time and ensure that problems are promptly resolved. Additionally, one state EHCY program coordinator we interviewed said that federal monitoring ensures a higher level of compliance with the McKinney-Vento Act and that it is in a state's best interest to be monitored regularly. According to this program coordinator, monitoring helps states improve how they address homeless students' needs and provides important leverage within the state to ensure that program funds are used as intended. Of the 22 states that Education has not monitored since at least fiscal year 2009, 10 had been required to take corrective actions following their last review to address compliance concerns. While Education found these actions to be sufficient, according to officials, Education is currently unaware of whether these states have remained in compliance, and it is possible that they (or other states) may have new or recurring compliance issues. For example, one of the states we visited had not monitored its grantees on-site since school year 2008-09, whereas NCHE recommends that states monitor grantees on-site at least once every 3 years.

Under the McKinney-Vento Act, states are required to develop plans that describe how they will implement the program and to submit the plans to Education (42 U.S.C. § 11432(g)). Education is responsible for reviewing state plans using a peer review process and evaluating whether state laws, policies, and practices described in the plan adequately address the problems of homeless children and youths relating to access to education and placement as described in the plan (42 U.S.C. § 11434(a)). However, not all states have updated their plans with new activities and goals, and some states have implemented programmatic changes since the plans were initially required in 2002. For example, one state we visited changed its service delivery model in recent years by adopting a regional approach to award grants to lead school districts that are responsible for providing services to other districts in their region. This state's EHCY coordinator confirmed that the state has not submitted any state plan updates to Education since 2002.
However, because Education is no longer consistently monitoring all states for compliance, the agency is also unable to determine whether states' current practices are consistent with their existing state plans. GAO's Standards for Internal Control in the Federal Government suggest that management continually assess and evaluate whether internal control activities—such as reviewing and monitoring compliance with state plans—are effective and up-to-date.

The McKinney-Vento Act requires each state to describe in its plan how it will ensure that all school districts comply with EHCY program requirements, but it does not specify how states must monitor compliance or where they should focus their efforts. NCHE's guidance to states on monitoring districts acknowledges the challenge of monitoring all districts on-site annually, given resource constraints, and recommends that states use a combination of strategies, such as on-site monitoring for grantee districts and desk monitoring (i.e., phone calls or written correspondence) for non-grantee districts. According to Education's survey of states in school year 2010-11, states use a variety of approaches in monitoring districts. Most states (43 out of 50) monitored grantees on-site, and about half (26 out of 50) did so for non-grantees. States also commonly used desk monitoring, with about two-thirds (34 out of 50) using this approach for grantees and about half (26 out of 50) doing so for non-grantees (see table 3). The majority of states in Education's survey also reported including the EHCY program in the state's monitoring of other federal programs, for both grantee districts (33 out of 50) and non-grantee districts (29 out of 50). States that monitor EHCY compliance in this way are able to reach additional districts without adding to the state EHCY program coordinator's responsibilities, one of a few strategies NCHE recommends for covering a large number of districts. For example, in one state we visited, a review team monitors about 100 school districts per year using a lengthy checklist for federal programs that includes the EHCY program.

States focused their monitoring on districts that had received an EHCY grant from the state, resulting in limited oversight of many non-grantee districts. NCHE's guidance indicates that states should monitor grantees at least once during the grant cycle—which can be up to 3 years—and non-grantees at least once every 3 to 5 years. According to Education's survey, most states (39 of the 46 that responded to this question) said that they monitor grantees at least once every 2 years. Half (23 of 46) of the states that responded reported monitoring non-grantees at least once every 2 years, and the other half reported monitoring them less frequently (see fig. 10). Additionally, 2 of 50 states reported that they did not monitor non-grantees at all (see table 3).

Similarly, state EHCY program coordinators in three of the four states we reviewed focused on monitoring grantees and employed a variety of approaches, according to our interviews with the coordinators. One state coordinator reported monitoring all grantee school districts on-site annually. Two state coordinators reported monitoring grantees annually using a combination of on-site visits and desk reviews, in which grantees are monitored via phone and email and must send data to the state for review.
The fourth state coordinator we interviewed reported conducting quarterly fiscal and program reviews in lieu of formal on-site or desk monitoring for all of the grantees since school year 2008-09. The four states we reviewed also varied with regard to how often they monitor non-grantees. State coordinators in two of the four states we reviewed regularly monitor all non-grantee districts within the 3 to 5 year timeframe NCHE suggests. One state coordinator reported monitoring non-grantees on a 3-year rotational basis. Another state coordinator uses desk monitoring for all non-grantees annually and also uses a risk-based approach to select non-grantees for on-site monitoring. This state's EHCY program coordinator reported annually reviewing each district's data on homeless students and visiting those non-grantees with the highest risk of noncompliance based on several risk factors, such as fewer than expected homeless students identified given a district's poverty rates; trends in under-identification; a high number of disputes with neighboring districts and/or complaints from parents; and the size of the district, with larger districts assumed to have a higher risk of noncompliance. The other two state coordinators we interviewed do not regularly monitor non-grantees within the suggested timeframe; however, one of these states annually ensures that all districts have identified a homeless liaison. States may not focus monitoring efforts on non-grantee districts because they lack tools to enforce compliance. One state coordinator explained that the state can record a finding of noncompliance, but because there is no direct fiscal action or penalty attached to it, the district may disregard it. Another state coordinator said she has to accept a district's report of zero homeless students, even if she suspects that the district may be under-identifying them. In contrast, state coordinators we spoke with said that states can hold districts with EHCY grant funding more accountable because they have more direct leverage over them, in the form of the grant. According to Education's survey of states, 3 of 39 states that responded to this question reported that they withheld funds from grantees that were out of compliance. In one of the states we visited, we also found that not all districts were aware of the availability of services through the regional structure adopted by the state. This state uses a regional approach for its grantee districts, whereby the grants are distributed to lead entities that are in turn responsible for ensuring that all homeless students within the region receive appropriate services. To do this, the grantee district may provide funds and/or services, such as training, to other districts in its region. While we met with two lead grantee school districts that provided services to other districts in the region, we also met with two other non-grantee districts whose officials were unaware that the district was eligible for funds or services from their respective lead grantee districts, indicating that the state's regional model was not being implemented effectively. Such problems suggest inadequate state-level monitoring of school districts, reinforcing the importance of effective federal monitoring of states. Education relies on annual state performance data collected from school districts to determine the extent to which states are meeting the program's intended goal of ensuring that homeless students have access to a free appropriate public education (see table 4).
Education has increased the number of data elements it requires school districts, particularly non-grantee districts, to report in the Consolidated State Performance Report. For example, in school year 2010-11, Education began requiring states to report information on the academic achievement of homeless students in reading and mathematics from non-grantee school districts, data that had been required from grantee districts since at least the 2004-05 school year. In school year 2011-12, Education added corresponding data elements in science for all districts to the Consolidated State Performance Report. In school year 2012-13, Education began requiring states to report information on the number of homeless students enrolled in each district, by certain subcategories. Previously, Education had only required states to report this information by subcategory for grantee districts; for non-grantee districts, states were only required to report the total number of homeless students enrolled and their primary nighttime residence. Education presents trends in data on homeless students over time in its Consolidated State Performance Report; however, there are limitations to the use of state-reported data to assess the program's results. For example, the state data on the number of homeless students are likely incomplete or unreliable due in part to the under-identification of eligible students. In Education's survey of states, 13 of the 25 states that responded to this question had found that at least one grantee was not in compliance with the statutory requirement to identify homeless children and youth. In one of the four states we reviewed, Education found the data on the number of homeless students to be unreliable, which the state EHCY program coordinator attributed to a 2009-10 transition to a new, state-wide database system that led to undercounting homeless students. Both Education and the state educational agency are taking steps to address the state's data quality issues, according to officials from both agencies. Education officials told us that the under-representation of the number of homeless students in the data generally is an area of concern, which they are addressing in multiple ways. In addition to questioning states about this issue, an Education official said the agency has worked with a contractor who analyzed school district-level data in every state to help identify school districts that may have been under-identifying homeless students by comparing their rate of homeless student identification to the percentage of students receiving free or reduced-price lunch. The contractor has made presentations on the results of this analysis to state EHCY program coordinators, and Education plans to provide state EHCY program coordinators with additional technical assistance on outreach and identification. Another limitation to the use of state-reported data is that, by design, the data are difficult to compare across states. As GAO has previously reported, states vary in how they measure student academic achievement, as permitted by ESEA, to allow states to address their unique circumstances. Education has acknowledged this inherent limitation, as well as challenges in comparing data within states over time, as many states have made changes to their state assessments that can impact comparability.
According to the Common Core State Standards Initiative, a voluntary state-led initiative, 43 states and the District of Columbia have chosen to adopt the Common Core State Standards—designed to define the knowledge and skills students should gain throughout their K-12 education in order to graduate high school prepared to succeed in entry-level careers, introductory academic college courses, and workforce training programs. Most states have chosen to participate in one of two state-led consortia working to develop assessments based on these standards, expected to be available in school year 2014-15, which may lead to greater comparability across states. Similarly, according to agency officials, Education collects dropout and graduation rate data, including disaggregated data for homeless students, but these data are calculated differently in different states—making comparisons across states problematic. GAO has previously reported on the use of different state graduation rates, and recommended that Education provide information to all states on ways to account for different types of students in graduation rate calculations and assess the reliability of state data used to calculate interim rates, which it did. Aside from its annual data collection efforts, Education's latest study (forthcoming in 2014), which covers school year 2010-11, will provide some valuable information on program implementation, according to agency officials. The study collects information from surveys of state EHCY program coordinators and grantee school district homeless liaisons on topics such as their EHCY-related responsibilities; data collection and use; collaboration with other programs and service providers; barriers homeless students face to enrolling, attending, and succeeding in schools; and state monitoring of districts' compliance with the McKinney-Vento Act, among other topics. The EHCY program, funded at about $65 million for fiscal year 2014, is intended to remove barriers to educational achievement for homeless students and provide them with access to critical services—such as transportation to school and referrals for health care. Such services are important to help mitigate the range of negative effects experienced by homeless children and youth across an array of measures, including academic achievement and school graduation rates. The program identified more than 1.1 million homeless students in school year 2011-12—students who, had they not been identified, might have faced greater difficulties succeeding in school and preparing to graduate. To increase the program's effectiveness, school districts, states, and the federal government leverage existing resources to ensure that homeless students are identified and their various academic and non-academic needs are met. The EHCY program promotes efforts to leverage resources, in part, by requiring coordination and collaboration among various programs and service providers that also serve homeless students. Collaboration can be particularly important for districts and communities addressing significant increases in homelessness following natural disasters. Appropriately identifying eligible students, in collaboration with other providers, is key to ensuring that districts provide homeless students with the services they need.
However, challenges districts face in identifying students, coupled with potential financial disincentives to identify them due to the cost of providing services such as transportation, can lead to under-identification, which has several consequences. First, it can result in barriers to homeless students' educational stability and achievement. Homeless children and youth who are not identified may have difficulty getting to and from their school as their nighttime residence changes and may therefore be derailed in their attempts to obtain an education without the assistance available through the EHCY program. Second, under-identification complicates Education's ability to fully assess program results due to concerns about the accuracy and completeness of the data, which it has taken some steps to address. Similarly, under-identification complicates the ability of the U.S. Interagency Council on Homelessness to accurately assess progress toward its goal of ending homelessness for families, youth, and children by 2020. While it may not be possible to accurately determine the extent to which districts may be under-identifying students—and the extent to which these children and youth may not succeed academically—monitoring states and school districts is imperative to ensure compliance with the requirements of the McKinney-Vento Act, particularly in light of the fact that some states are implementing the program differently than described in their state plans. Without monitoring states through regular reviews of state programs and implementation plans, Education will not have the information it needs to determine whether states are meeting requirements that help provide eligible students with the resources needed to pursue an education. To help ensure state compliance with the McKinney-Vento Act, Education should develop a monitoring plan to ensure adequate oversight of the EHCY program. This plan could, for example, determine a schedule of states to be monitored and incorporate procedures to assess whether states need to update their state plans. We provided a draft of this report to the Departments of Education, HHS, and HUD and to USICH for review and comment. Education and USICH provided formal comments that are reproduced in appendices III and IV. Education, HHS, HUD, and USICH also provided technical comments, which we incorporated as appropriate. Education agreed that sufficient oversight of EHCY program requirements at the federal, state, and local levels is necessary and that both interagency and cross-program collaboration is essential to ensure that the needs of homeless children and youth are addressed. Education noted that although not all state EHCY programs have been monitored in recent years, the department has continued to conduct risk assessments for all states and to provide technical assistance to states and school districts through the EHCY program office and NCHE. Education concurred with our recommendation that, in order to ensure compliance with the McKinney-Vento Act, the department should develop a monitoring plan to ensure adequate oversight of the EHCY program. Education said that it is currently developing a plan for monitoring for fiscal year 2015 and will increase monitoring for the EHCY program, ensuring that all states identified as "higher risk" in its next round of risk assessments are monitored through document reviews and on-site and remote interviews with state and local educational agency personnel.
We encourage Education to continue to consider the length of time since a state was last monitored in its determination of risk and to consider developing a monitoring schedule to help ensure that it has the information it needs to determine whether all states are meeting EHCY program requirements. Education also said that it is making changes to its monitoring protocol, adding questions related to student academic achievement and potential under-identification of homeless students. We support Education's decision to increase its EHCY program monitoring and believe focusing additional attention on the issue of under-identification is particularly important. Without a fuller sense of the extent of under-identification, it is difficult for Education to gauge program results. Lastly, Education indicated that it plans to include, in its next technical assistance contract, the development of a secure website through which states can update their state plans. Since states may have changed the way they implement the EHCY program since their state plans were originally developed, we encourage Education to take steps to ensure that states that need to update their plans do so. USICH agreed that monitoring EHCY grantees is important to ensure homeless children and youth are identified. USICH also noted the important role that Education can play in fostering best practices, strategic partnerships, and innovation to address the needs of homeless students. USICH stated that it considers Education and the EHCY program to be critical partners in developing and advancing the work of USICH's goal of ending homelessness among families, children, and youth by 2020. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from its issue date. At that time, we will send copies of this report to the Secretaries of Education, HHS, and HUD, the Executive Director of USICH, relevant congressional committees, and other interested parties. In addition, the report will be made available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-7215 or Brownke@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. We used several approaches to obtain information on how school districts, states, and the Department of Education (Education) implement and oversee the Education for Homeless Children and Youth (EHCY) program. To gather information on how the EHCY program is implemented at the local level, we visited three states—Colorado, New Jersey, and Washington State—and conducted interviews with a fourth state, Texas, by phone. We selected states that represent geographic diversity and were identified by experts, including the National Association for the Education of Homeless Children and Youth (NAEHCY) and other national organizations working on issues related to homelessness, for their experiences in providing education to homeless students. We identified these organizations during the course of our background research. We also considered the number of homeless students states identified in recent years, including any trends, the level of program funding states received, and the delivery structure of early childhood education programs in the state.
In addition, we selected states that experienced surges in homelessness due to recent natural disasters—including wildfires, flooding, and a hurricane. Together, these states included about 13 percent of identified homeless students nationwide in school year 2011-12. In each state, we interviewed state officials to obtain information on how they collaborate with other agencies, service providers, and school districts, as well as their monitoring practices. We also met with school district officials. In all, we spoke with representatives from 20 school districts in the four states. We selected school districts, with assistance from state EHCY program coordinators, to represent a mix of urban, suburban, and rural districts, as well as districts that received EHCY program funds (grantees) and those that did not (non-grantees). In one state, we also met with representatives of two regional educational entities that are responsible for providing EHCY-related services to several school districts within their respective regions and receive EHCY program funds to do so. In the states we visited, we also met with school officials at the elementary, middle, and high school levels, and with youth who have experienced homelessness, or their families, about their educational experiences (12 youth and three families). Information obtained from these states, school districts, youth, and families is non-generalizable. We also attended the 2012 NAEHCY Conference to obtain further insights into how school districts were implementing the EHCY program and met with five additional state coordinators at that conference to discuss how they collaborated with various stakeholders. To obtain generalizable and national-level information from states and grantee school districts, we analyzed Education data from two surveys covering school year 2010-11. Education's surveys collected information on services school districts provided to homeless children and youth; state and local collaboration efforts with other Education programs; and how states monitored school districts, including any differences in monitoring between grantee and non-grantee districts. A total of 448 school districts were included in the district-level survey sample, including the 50 largest school districts and a random sample of 401 other school districts (3 districts were removed from the sample after the survey was released because they had merged with other districts). The surveys were conducted electronically; however, in a small number of cases (7 or 8 districts), districts that did not initially respond to the electronic survey were administered the survey over the phone or on paper. The school district-level survey had a weighted response rate of 86 percent (96 percent for the 50 largest districts and 85 percent for the remaining 398 districts). The state survey was sent to all 50 states, the District of Columbia, and the Bureau of Indian Affairs. The Bureau of Indian Affairs did not respond and was later removed from the scope of the survey. One additional state provided incomplete survey answers, leaving 49 states and the District of Columbia as the state survey population. We assessed the reliability of Education's survey data by performing electronic testing of the data elements, reviewing relevant documentation, and interviewing agency officials knowledgeable about the data. We found that the data were sufficiently reliable for the purposes of this report. The responses of each eligible sample member who provided a useable questionnaire were weighted in the analyses to account statistically for all members of the population. We created weights for each survey respondent to account for unequal probabilities of selection and various unit response rates among the survey strata. All estimates obtained from the school district-level survey have margins of error of no greater than six percentage points.
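The weighting approach just described can be illustrated with a brief sketch. The stratum population total for the smaller districts below is a hypothetical placeholder (the actual survey frame is defined by Education's data, not this appendix), while the sample sizes and response counts mirror the figures reported above; the sketch simply shows how an inverse-probability base weight is combined with a unit nonresponse adjustment to produce an analysis weight.

```python
# Sketch of design-based survey weighting: a base weight from the inverse of the
# selection probability, adjusted for unit nonresponse within each stratum.
# The "population" figure for the all_other stratum is a hypothetical placeholder.
strata = {
    "largest_50": {"population": 50,    "sampled": 50,  "responded": 48},
    "all_other":  {"population": 12000, "sampled": 398, "responded": 338},
}

weights = {}
for name, s in strata.items():
    base_weight = s["population"] / s["sampled"]      # inverse of selection probability
    nonresponse_adj = s["sampled"] / s["responded"]   # reallocates nonrespondents' share
    weights[name] = base_weight * nonresponse_adj     # final per-respondent weight

for name, w in weights.items():
    # Each respondent "stands in" for roughly w districts in population estimates.
    print(f"{name}: weight per respondent = {w:.2f}")
```

Summing the final weights over respondents recovers each stratum's population total, which is what allows weighted survey responses to represent all districts, including those not sampled or not responding.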
To obtain information on how Education administers the EHCY program, we interviewed Education officials about how the agency monitors states for compliance and collaborates with other federal programs. We also interviewed officials from other federal agencies, including the Departments of Health and Human Services (HHS) and Housing and Urban Development (HUD), as well as the U.S. Interagency Council on Homelessness (USICH), to obtain their perspectives on collaborating with Education on programs that serve homeless children and youth. Additionally, we reviewed relevant documents—including federal laws and regulations, monitoring protocols, and policy memos—and examined Education's findings on homeless education from state monitoring reports. We conducted this performance audit from July 2012 through July 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Federal monitoring indicator 1.1: The state conducts monitoring and evaluation of school districts with and without subgrants, sufficient to ensure compliance with McKinney-Vento program requirements.
Selected examples of acceptable evidence that states provide: a written procedure for monitoring school districts with and without subgrants, to include:
- A recent copy of monitoring policies and procedures, and schedules for the current and previous school years.
- Sample notification letters to school districts, preparation checklists, or other forms.
- A copy of the interview protocol for school district reviews.
- The most recent copies of reports, recommendations, and follow-up to corrective actions.

Federal monitoring indicator 2.1: The state implements procedures to address the identification, enrollment, and retention of homeless students through coordinating and collaborating with other program offices and state agencies.
Selected examples of acceptable evidence that states provide:
- Written communication to school districts updating state policies and procedures that address the problems homeless children and youth face in school enrollment and retention since the last Department of Education program review.
- Updates to the state plan, including the completion of planned activities and proposals for new state-level activities.

Federal monitoring indicator 2.2: The state provides, or provides for, technical assistance to school districts to ensure appropriate implementation of the statute.
Selected examples of acceptable evidence that states provide:
- Copies of written guidance to school districts and/or information dissemination materials distributed electronically or by other means.
- The most recent liaison orientation, on-line trainings, conferences, and regional training agendas, and the technical assistance log.

Federal monitoring indicator 3.1: The state ensures that school district subgrant plans for services to eligible homeless students meet all requirements.
Selected examples of acceptable evidence that states provide:
- Evidence that the state has an application and approval process to provide competitive subgrants to school districts.

Federal monitoring indicator 3.2: The state complies with the statutory and other regulatory requirements governing the reservation of funds for state-level coordination activities.
Selected examples of acceptable evidence that states provide:
- State budget detail on reserved funds for state-level coordination activities for the current fiscal year and use of funds for the last fiscal year.

Federal monitoring indicator 3.3: The state has a system for ensuring the prompt resolution of disputes.
Selected examples of acceptable evidence that states provide:
- Updated state dispute resolution policy and procedures, including procedures for tracking disputes, documents indicating that dispute procedures have been implemented, and records indicating that disputes are addressed, investigated, and resolved in a timely manner.
- Evidence that the state tracks whether school districts have a dispute resolution policy in place.

In addition to the contact named above, Kathryn Larin (Assistant Director), Avani Locke (Analyst-in-Charge), David Barish, and Jennifer Cook made key contributions to this report. Also contributing to this report were Alicia Cackley, Sarah Cornetto, Keira Dembowski, Justin Fisher, Hedieh Fusfield, Jessica Gray, Thomas James, Jean McSween, Mimi Nguyen, Paul Schmidt, Almeta Spencer, Kathleen van Gelder, and Amber Yancey-Carroll.

Managing for Results: Implementation Approaches Used to Enhance Collaboration in Interagency Groups. GAO-14-220. Washington, D.C.: February 14, 2014.
Homelessness: Fragmentation and Overlap in Programs Highlight the Need to Identify, Assess, and Reduce Inefficiencies. GAO-12-491. Washington, D.C.: May 10, 2012.
Homelessness: To Improve Data and Programs, Agencies Have Taken Steps to Develop a Common Vocabulary. GAO-12-302T. Washington, D.C.: December 15, 2011.
K-12 Education: Many Challenges Arise in Educating Students Who Change Schools Frequently. GAO-11-40. Washington, D.C.: November 18, 2010.
Homelessness: A Common Vocabulary Could Help Agencies Collaborate and Collect More Consistent Data. GAO-10-702. Washington, D.C.: June 30, 2010.
Runaway and Homeless Youth Grants: Improvements Needed in the Grant Award Process. GAO-10-335. Washington, D.C.: May 10, 2010.
Disconnected Youth: Federal Action Could Address Some of the Challenges Faced by Local Programs That Reconnect Youth to Education and Employment. GAO-08-313. Washington, D.C.: February 28, 2008.
The McKinney-Vento Homeless Assistance Act established a grant program to help the nation's homeless students—more than one million in school year 2011-12—have access to public education. Under the Education for Homeless Children and Youth grant program, states and their school districts are required to identify homeless children and provide them with needed services and support. In fiscal year 2014, Education received about $65 million to administer this program. Education provided formula grants to states, which competitively awarded funds to school districts to help meet program requirements. GAO was asked to review program implementation and oversight. GAO examined (1) how districts identify and serve homeless students and the challenges they face, (2) how Education and states collaborate with other service providers to address student needs and any barriers, and (3) the extent to which Education monitors program compliance. GAO reviewed relevant federal laws, guidance, and reports, and analyzed Education's state and school district survey data from school year 2010-11. GAO also interviewed federal officials, and state and local officials in 20 school districts—representing a mix of urban, suburban, and rural districts and grant status—in four states, selected for geographic diversity and other characteristics, such as experience with natural disasters. To identify and serve homeless students under the Education for Homeless Children and Youth (EHCY) program, officials in the 20 school districts where GAO conducted interviews reported conducting a range of activities to support homeless youth, but cited several challenges. With regard to GAO's interviews, 13 of the 20 districts identified homeless students through housing surveys at enrollment, while all 20 relied on referrals from schools or service providers. However, officials in 8 of the 20 districts noted that the under-identification of homeless students was a problem. Districts GAO reviewed provided eligible students with transportation to and from school, educational services, and referrals to other service providers for support such as health care or food assistance. Among the challenges that officials in the 20 districts cited were limited staff and resources to provide services, the cost of transportation, student stigma associated with homelessness, and responding to students made homeless by natural disasters. Nationally, school districts surveyed most recently in school year 2010-11 by the Department of Education (Education) reported providing many services while facing similar challenges. Education's EHCY program manager and state program coordinators have collaborated with other government agencies and with private organizations by sharing information, participating in interagency councils on homelessness, and providing technical assistance to relevant staff. In addition, state EHCY program coordinators have provided training to school districts and helped connect local programs to ensure homeless students receive various services. However, federal and state officials frequently cited limited resources and differing federal definitions of homelessness as constraints to greater collaboration. Education has protocols for monitoring state EHCY programs, but no plan to ensure adequate oversight of all states, though monitoring is a key management tool for assessing the quality of performance over time and resolving problems promptly.
Prior to fiscal year 2010, it had been Education's policy to monitor 50 states and 3 area programs at least once during a 3-year period, and it did so for fiscal years 2007 to 2009. Subsequently, the department adopted a risk-based approach in fiscal year 2010 and monitored 28 states over the next 3 years. In fiscal year 2013, Education again changed its approach to EHCY program monitoring and has monitored 3 state programs since then. Department officials cited other priorities and a lack of staff capacity as reasons for the decrease in oversight. As a result, Education lacks assurance that states are complying with program requirements. GAO found gaps in state monitoring of districts that could weaken program performance, reinforcing the importance of effective federal monitoring of states. GAO recommends that Education develop a plan to ensure adequate oversight of the EHCY program. Education concurred with GAO's recommendation.
The SAB provides a mechanism for EPA to receive peer review and other advice on the use of science at EPA. The SAB is authorized to, among other things, review the adequacy of the scientific and technical basis of EPA's proposed regulations. The SAB and its subcommittees or panels focus on a formal set of charge questions on environmental science received from the agency. Depending on the nature of the agency's request, the entire advisory process, from the initial discussion of charge questions with EPA offices and regions to the delivery of the final SAB report, generally takes from 4 to 12 months. Under the Clean Air Act, air quality criteria must accurately reflect the latest scientific knowledge useful in indicating the kind and extent of all identifiable effects on public health or welfare which may be expected from the presence of certain air pollutants in the ambient air. The act also requires CASAC to advise EPA of any adverse public health, welfare, social, economic, or energy effects that may result from various strategies for attainment and maintenance of the NAAQS. CASAC's advisory process is similar to the SAB's process, including the option of establishing subcommittees and panels that send their reports and recommendations to CASAC. As federal advisory committees, the SAB and CASAC are subject to FACA, which broadly requires balance, independence, and transparency. FACA was enacted, in part, out of concern that certain special interests had too much influence over federal agency decision makers. The head of each agency that uses federal advisory committees is responsible for exercising certain controls over those advisory committees. For example, the agency head is responsible for establishing administrative guidelines and management controls that apply to all of the agency's advisory committees, and for appointing a Designated Federal Officer (DFO) for each advisory committee. Advisory committee meetings may not occur in the absence of the DFO, who is also responsible for calling meetings, approving meeting agendas, and adjourning meetings. As required by FACA, the SAB and CASAC operate under charters that include information on their objectives, scope of activities, and the officials to whom they report. Federal advisory committee charters must be renewed every 2 years, but they can be revised before they are due for renewal in consultation with the General Services Administration (GSA). Unlike CASAC, which was established by amendments to the Clean Air Act, the SAB was established under ERDDAA and, since 1980, has been required to provide scientific advice to designated congressional committees when requested. According to SAB staff office officials, until recently, the SAB has responded to general congressional questions and concerns. However, in 2013, representatives of a congressional committee formally requested advice from the SAB regarding two reviews the SAB was conducting. According to EPA officials, this was the first time representatives of a congressional committee formally requested advice from the SAB. Both requests were addressed and submitted directly to the SAB Chair and the Chair of the relevant SAB panel and sent concurrently to the SAB staff office and EPA Administrator. While ERDDAA does not specify a role for EPA in mediating responses from the SAB to the designated congressional committees, EPA identifies such a role for itself under FACA. Specifically, EPA points to the DFO's responsibility to manage the agenda of an advisory committee.
Also, under FACA, EPA is responsible for issuing and implementing controls applicable to its advisory committees. Responses to the committee's requests for scientific advice were handled by the SAB staff office and EPA's Office of Congressional and Intergovernmental Relations (OCIR). The SAB staff office and, later, OCIR responded to the committee's first request for advice, and OCIR responded to the committee's second request for advice. See table 1 for more information on these requests. Our preliminary observations indicate that EPA's procedures for processing congressional requests for scientific advice from the SAB do not ensure compliance with ERDDAA because the procedures are incomplete and do not fully account for the statutory access designated congressional committees have to the SAB. Specifically, EPA policy documents do not clearly outline how the EPA Administrator, the SAB staff office, and members of the SAB panel are to handle a congressional committee's request for advice from the SAB. In addition, EPA policy documents do not acknowledge that the SAB must provide scientific advice when requested by select congressional committees. EPA's written procedures for processing congressional committee requests to the SAB are found in the SAB charter and in the following two documents that establish general policies for how EPA's federal advisory committees are to interact with outside parties: EPA Policy Regarding Communication Between Members of Federal Advisory Committee Act Committees and Parties Outside of the EPA (the April 2014 policy), and Clarifying EPA Policy Regarding Communications Between Members of Scientific and Technical Federal Advisory Committees and Outside Parties (the November 2014 policy clarification). Collectively, the SAB's charter, EPA's April 2014 policy, and EPA's November 2014 policy clarification provide direction for how EPA and the SAB are to process requests from congressional committees. However, these documents do not clearly outline procedures for the EPA Administrator, the SAB staff office, and members of the SAB panel to use in processing such requests. At the time of the House committee's two requests to the SAB in 2013, the SAB charter was the only EPA document that contained written policy relating to congressional committee requests under ERDDAA. The SAB charter briefly noted how congressional committees could access SAB advice, stating: "While the SAB reports to the EPA Administrator, congressional committees specified in ERDDAA may ask the EPA Administrator to have SAB provide advice on a particular issue." (GAO italics) Beyond what the charter states, however, no EPA policy specified a process the Administrator should use to have the SAB review a congressional request and provide advice. In response to a request from the SAB staff office that EPA clarify the procedures for handling congressional committee requests, EPA, through an April 4, 2014, memorandum, informed the SAB that committee members themselves and the federal advisory committees as a whole should refrain from directly responding to these external requests. Attached to the memorandum was the April 2014 policy, which stated: "if a FACA committee member receives a request relating to the committee's work from members of Congress or their staff, or congressional committees, the member should notify the DFO, who will refer the request to the EPA OCIR.
OCIR will determine the agency's response to the inquiry, after consulting with the relevant program office and the DFO." This policy, however, did not provide more specific details on processing requests from congressional committees under ERDDAA. In November 2014, EPA issued a clarification to the April 2014 policy, specifying that SAB members who receive congressional requests pursuant to ERDDAA should acknowledge receipt of the request and indicate that EPA will provide a response. The November 2014 policy clarification does not identify the SAB as having to provide the response. The November 2014 policy clarification also stated that the request should be forwarded to the appropriate DFO and that decisions on who and how best to respond to the requests would be made by EPA on a case-by-case basis. While the November 2014 policy clarification provides greater specificity about processing requests, it is not consistent with the SAB charter because the policy indicates that congressional committee requests should be handled through the DFO, whereas the charter indicates that they should be handled through the EPA Administrator and provides no further information. A senior EPA official stated that the agency considered that the charter and the November 2014 policy clarification differed in the level of detail, but not in the broad principle that the agency is the point of contact for congressional requests to the SAB (and SAB responses to those requests). However, under federal standards of internal control, agencies are to clearly document internal controls, and the documentation is to appear in management directives, administrative policies, or operating manuals. While EPA has documented its policies, they are not clear because the charter and the November 2014 policy clarification are not consistent about which office should process congressional requests. Agency officials said that the SAB charter is up for renewal in 2015. By modifying the charter when it is renewed to reflect the language in the November 2014 policy clarification—that congressional requests should be forwarded to the appropriate DFO—EPA can better ensure that its staff process congressional committee requests consistently when the agency receives such a request. Moreover, neither the April 2014 policy nor the November 2014 policy clarification clearly documents EPA's procedures for reviewing congressional committee requests to determine which questions would be taken up by the SAB, consistent with the federal standards of internal control. Because EPA's procedures for reviewing congressional committee requests are not documented, it will be difficult for EPA to provide reasonable assurance that its staff is appropriately applying criteria when determining which questions the SAB will address. EPA officials told us that internal deliberations in response to a congressional request follow those that the agency would apply to internal requests for charges to the SAB. Specifically, officials told us that EPA considers whether the questions are science or policy driven, whether they are important to science and the agency, and whether the SAB has already undertaken a similar review. However, these criteria are not documented. In addition, under ERDDAA, the SAB is required to provide requested scientific advice to select committees, regardless of EPA's judgment.
EPA has not fully responded to the committee's two 2013 requests to the SAB. By clearly documenting its procedures for reviewing congressional requests to determine which questions should be taken up by the SAB, along with criteria for evaluating requests, the agency can provide reasonable assurance that its staff process these and other congressional committee requests consistently and in accordance with both FACA and ERDDAA. Furthermore, the charter states that, when scientific advice is requested by one of the committees specified in ERDDAA, the Administrator will, when appropriate, forward the SAB's advice to the requesting congressional committee. Neither the charter, the April 2014 policy, nor the November 2014 policy clarification specifies when it would be "appropriate" for the EPA Administrator to forward the SAB's advice to the requesting committee. Such specificity would be consistent with federal standards of internal control that call for clearly documenting internal controls. Without such specification, the perception could be created that EPA is withholding information from Congress that the SAB is required to provide under ERDDAA. EPA officials stated that the EPA Administrator does not attempt to determine whether advice of the SAB contained in written reports should be forwarded to the requesting committee and that all written reports are publicly available on the SAB website at the same time the report is sent to the EPA Administrator. By modifying the charter or other policy documents to reflect when it is and when it is not appropriate for the EPA Administrator to forward the advice to the requesting committee, EPA can better ensure transparency in its process. In general, under FACA, as a federal advisory committee, the SAB's agenda is controlled by its host agency, EPA. As such, the SAB generally responds only to charge questions put to it by EPA, although under ERDDAA the SAB is specifically charged with providing advice to its host agency as well as to designated congressional committees. In addition, it is EPA's responsibility under GSA regulations for implementing FACA to ensure that advisory committee members and staff understand agency-specific statutes and regulations that may affect them, but nothing in the SAB charter, the April 2014 policy, or the November 2014 policy clarification communicates that, ultimately, the SAB must provide scientific advice when requested by congressional committees. For example, we found no mechanism in EPA policy for the SAB to respond on its own initiative to a congressional committee request for scientific advice unrelated to an existing EPA charge question. A written policy for how the SAB should respond to a congressional committee request that does not overlap with charge questions from EPA would be consistent with federal internal control standards. Moreover, such a policy would better position the SAB to provide the advice it is obligated to provide under ERDDAA and for EPA to provide direction consistent with GSA regulations for implementing FACA. We will continue to monitor these issues and, as we finalize our work in this area, we will consider making recommendations, as appropriate. We plan to issue our final results in June 2015. CASAC has provided certain types of advice related to the review of NAAQS.
The Clean Air Act requires CASAC to review air quality criteria and existing NAAQS every 5 years and advise EPA of any adverse public health, welfare, social, economic, or energy effects that may result from various strategies for attainment and maintenance of NAAQS. According to a senior EPA official, CASAC has carried out its role in reviewing the air quality criteria and the NAAQS but has never provided advice on adverse social, economic, or energy effects of strategies to implement the NAAQS because EPA has never asked it to. This is in part because NAAQS are to be based on public health and welfare criteria, so information on the social, economic, or energy effects of NAAQS is not specifically relevant to setting NAAQS. In a June 2014 letter to the EPA Administrator, CASAC indicated that, at the agency's request, it would review the impacts (e.g., the economic or energy impacts) of strategies for attaining or maintaining the NAAQS but stressed that such a review would be separate from reviews of the scientific bases of NAAQS. In response to such a request, the letter stated that an ad hoc CASAC panel would be formed to obtain the full expertise necessary to conduct such a review. According to a senior EPA official, the agency has no plans to ask CASAC to provide advice on adverse effects. Information from EPA-requested reviews could be useful for the states, which implement the strategies necessary to achieve the NAAQS. EPA is required to provide states, after consultation with appropriate advisory committees, with information on air pollution control techniques, including the cost to implement such techniques (42 U.S.C. § 7408(b)(1) (2015)). According to a senior-level EPA official, EPA collects this information from other federal advisory committees, the National Academy of Sciences, and state air agencies, among others, and fulfills this obligation by issuing Control Techniques Guidelines and other implementation guidance. Chairman Rounds, Ranking Member Markey, and Members of the Subcommittee, this completes my prepared statement. I would be happy to respond to any questions that you or other members of the Subcommittee may have at this time. If you or your staff members have any questions about this testimony, please contact me at (202) 512-3841 or gomezj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. GAO staff who made key contributions to this testimony are Janet Frisch (Assistant Director), Antoinette Capaccio, and Greg Carroll. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
EPA formulates rules to protect the environment and public health. To enhance the quality and credibility of such rules, EPA obtains advice and recommendations from the SAB and CASAC—two federal advisory committees that review the scientific and technical basis for EPA decision making. ERDDAA requires the SAB to provide both the EPA Administrator and designated congressional committees with scientific advice as requested. Amendments to the Clean Air Act established CASAC to, among other things, provide advice to the Administrator on NAAQS. This testimony reflects GAO's preliminary observations from its ongoing review that examines (1) the extent to which EPA procedures for processing congressional requests to the SAB ensure compliance with ERDDAA and (2) the extent to which CASAC has provided advice related to NAAQS. GAO reviewed relevant federal regulations and agency documents, and interviewed EPA, SAB, and other relevant officials. GAO is not making any recommendations in this testimony, but as it finalizes its work in this area, GAO will consider making recommendations, as appropriate. The Environmental Protection Agency's (EPA) procedures for processing congressional requests for scientific advice from the Science Advisory Board (SAB) do not ensure compliance with the Environmental Research, Development, and Demonstration Authorization Act of 1978 (ERDDAA) because these procedures are incomplete. For example, they do not clearly outline how the EPA Administrator, the SAB staff office, and others are to handle a congressional committee's request. While the procedures reflect EPA's responsibility to exercise general management controls over the SAB and all its federal advisory committees under the Federal Advisory Committee Act (FACA), including keeping such committees free from outside influence, they do not fully account for the specific access that designated congressional committees have to the SAB under ERDDAA. For example, EPA's policy documents do not establish how EPA will determine which questions would be taken up by the SAB. EPA officials told GAO that, in responding to congressional requests, EPA follows the same process that it would apply to internal requests for questions to the SAB, including considering whether the questions are science or policy driven or are important to science and the agency. However, EPA has not documented these criteria. Under the federal standards of internal control, agencies are to clearly document internal controls. Moreover, under ERDDAA, the SAB is required to provide requested scientific advice to select committees. By clearly documenting how to process congressional requests received under ERDDAA, including which criteria to use, EPA can provide reasonable assurance that its staff process responses consistently and in accordance with law. Furthermore, the SAB's charter states that, when scientific advice is requested by one of the committees specified in ERDDAA, the Administrator will, when appropriate, forward the SAB's advice to the requesting congressional committee. EPA policy does not specify when it would be "appropriate" for the EPA Administrator to take this action. Such specificity would be consistent with clearly documenting internal controls. GAO will continue to monitor these issues and plans to issue a report with its final results in June 2015.
The Clean Air Scientific Advisory Committee (CASAC) has provided certain types of advice related to the review of national ambient air quality standards (NAAQS), but has not provided advice on adverse social, economic, or energy effects related to NAAQS. Under the Clean Air Act, CASAC is to review air quality criteria and existing NAAQS every 5 years and advise EPA of any adverse public health, welfare, social, economic, or energy effects that may result from various strategies for attainment and maintenance of NAAQS. An EPA official stated that CASAC has carried out its role in reviewing the air quality criteria and the NAAQS, but CASAC has never provided advice on adverse social, economic, or energy effects related to NAAQS because EPA has never asked CASAC to do so. In a June 2014 letter to the EPA Administrator, CASAC indicated it would review such effects at the agency's request. According to a senior EPA official, the agency has no plans to ask CASAC to provide advice on such adverse effects.
Roughly half of all workers participate in an employer-sponsored retirement or pension plan. Private sector pension plans are classified either as defined benefit (DB) or as defined contribution (DC) plans. DB plans promise to provide, generally, a fixed level of monthly retirement income that is based on salary, years of service, and age at retirement, regardless of how the plan investments perform. In contrast, benefits from DC plans are based on the contributions to and the performance of the investments in individual accounts, which may fluctuate in value. Examples of DC plans include 401(k) plans, employee stock ownership plans, and profit-sharing plans. The most dominant and fastest growing DC plans are 401(k) plans, which allow workers to choose to contribute a portion of their pretax compensation to the plan under section 401(k) of the Internal Revenue Code. IRAs were established under the Internal Revenue Code provisions of the Employee Retirement Income Security Act of 1974 (ERISA). ERISA was generally enacted to protect the interests of employee benefit plan participants and their beneficiaries by requiring the disclosure to them of financial and other information concerning the plan; by establishing standards of conduct for plan fiduciaries; and by providing for appropriate remedies and access to the federal courts. To give IRAs flexibility in accumulating assets for retirement, Congress designed a dual role for these accounts. The first role is to provide individuals not covered by employer-sponsored retirement plans an opportunity to save for retirement on their own in tax-deferred accounts. The second role is to give retiring workers or individuals changing jobs a way to preserve assets in employer-sponsored retirement plans by allowing them to roll over or transfer plan balances into IRAs. Over the past 30 years, Congress has created several types of IRAs designed with different features for individuals and small businesses. The types of IRAs geared toward individuals are:

Traditional IRAs: Traditional IRAs allow individuals to defer taxes on investment earnings accumulated in these accounts until distribution at retirement. Eligible individuals may make tax-deductible contributions of earned income to these accounts. Other individuals may make nondeductible contributions to receive the tax deferral on earnings. Yearly contribution amounts are subject to limits based on income, pension coverage, and filing status. Taxpayers over age 70½ cannot contribute and must begin required minimum distributions from these accounts. Withdrawals are generally taxable, and early distributions made before age 59½, other than for specific exceptions, are subject to a 10 percent additional income tax.

Roth IRAs: In the Taxpayer Relief Act of 1997, Congress created the Roth IRA, which allows eligible individuals to make after-tax contributions to these accounts. Withdrawals of investment earnings are generally tax-free after age 59½, as long as the taxpayer has held the account for 5 years; early distributions of earnings before age 59½, other than for specific exceptions, are subject to a 10 percent additional income tax and other taxes. Yearly contribution amounts are subject to limits based on income and filing status. There are no age limits on contributing, and no distributions are required during the Roth IRA owner's lifetime.
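The practical difference between the tax-deferred (traditional) and after-tax (Roth) treatments described above can be made concrete with a simple worked example; the contribution amount, growth factor, and tax rates below are hypothetical figures chosen only to illustrate the mechanics and are not statutory parameters.

```latex
% Hypothetical comparison of traditional vs. Roth IRA tax treatment.
% (Assumes the amsmath package for \text; \underbrace is standard LaTeX.)
% C = pretax earnings set aside, G = investment growth factor,
% t_c = tax rate at contribution, t_w = tax rate at withdrawal.
\[
\underbrace{C \cdot G \cdot (1 - t_w)}_{\text{traditional: taxed at withdrawal}}
\qquad \text{versus} \qquad
\underbrace{C \cdot (1 - t_c) \cdot G}_{\text{Roth: taxed at contribution}}
\]
% With C = $4,000, G = 2, and t_c = t_w = 25%, both accounts yield $6,000
% after tax. The two treatments differ only when tax rates differ: the
% traditional IRA comes out ahead when t_w < t_c, and the Roth when t_w > t_c.
```

Because multiplication is commutative, the two expressions are equal whenever the contribution-time and withdrawal-time tax rates are equal; the choice between account types therefore turns largely on expected tax rates, along with the other differences in rules described above.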
Traditional and Roth IRAs can also be established as payroll-deduction IRAs, an arrangement that requires employer involvement. Payroll-deduction IRA programs (also called payroll-deduction IRAs): Through payroll-deduction IRAs, employees may establish either traditional or Roth IRAs, and employees may contribute to these accounts through voluntary deductions from their pay, which are forwarded by the employer to the employee's IRA. As long as employers follow guidelines set by Labor for managing the payroll-deduction IRA, employers are not subject to the fiduciary requirements in ERISA Title I that apply to employer-sponsored retirement plans, like 401(k) plans. Other types of IRAs that are intended to encourage savings through employers include:

SEP IRAs: In the Revenue Act of 1978, Congress established SEP IRAs, which were designed with fewer regulatory requirements than traditional employer pension plans to encourage small employers to offer retirement plans to their workers. SEP IRAs allow employers to make tax-deductible contributions to their own and each eligible employee's account. SEP IRAs have higher contribution limits than other IRAs, but they do not permit employee contributions. Yearly contributions are not mandatory, but as with pension plans, they must be based on a written allocation formula and cannot discriminate in favor of highly-compensated employees.

SIMPLE IRAs: In the Small Business Job Protection Act of 1996, Congress created SIMPLE IRAs to help employers with 100 or fewer employees more easily provide a retirement savings plan to their employees. In this plan, eligible employees can direct a portion of their salary, within limits, to a SIMPLE IRA, and employers may either match the employee's contribution up to 3 percent or make nonelective contributions of 2 percent of each employee's salary for all employees making at least $5,000 for the year. This IRA replaced the Salary Reduction Simplified Employee Pension IRA (SAR-SEP IRA)—a tax-deferred retirement plan provided by sole proprietors or small businesses with fewer than 25 employees. New SAR-SEP IRAs could not be established after December 31, 1996, but plans in operation at that time were allowed to continue.

Each of these IRAs has its own eligibility requirements, as shown in table 1. Labor's Employee Benefits Security Administration (EBSA) shares responsibility with IRS for overseeing the IRA component of ERISA. EBSA enforces Title I of ERISA, which specifies, among other standards, certain fiduciary and reporting and disclosure requirements and seeks to ensure that fiduciaries operate their plans in the best interest of plan participants. IRS enforces Title II of ERISA, which provides, among other standards, tax benefits for plan sponsors and participants, including participant eligibility, vesting, and funding requirements. IRA assets have surpassed DC plan assets and DB plan assets, but the majority of assets that flow into IRAs come from assets being rolled over from other accounts, not from contributions. We also found that IRA ownership is associated with higher education and higher income levels. The percentage of households that own IRAs is similar to the percentage that participates in 401(k) plans, and total contributions to IRAs are lower than contributions to 401(k) accounts. In addition, there are key differences between the structure of employer-sponsored IRAs and that of 401(k)s. Since 1998, IRA assets have comprised the largest portion of the retirement market.
As shown in figure 1, in 2004, IRA assets totaled about $3.5 trillion, compared to DC assets of $2.6 trillion and DB assets of $1.9 trillion. Most assets flowing into IRAs come from the transfer of retirement assets between IRAs or from other retirement plans, including 401(k) plans, not from contributions. These "rollovers" allow individuals to preserve their retirement savings when they change jobs or retire. As shown in figure 2, from 1998 to 2004, over 80 percent of funds flowing into IRAs came from rollovers, demonstrating that IRAs play a smaller role in building retirement savings than they do in preserving it. Balances in IRAs that contain rollover assets also exceeded balances in those without rollover assets. For example, in 2007, the median amount in a traditional IRA with rollover assets was $61,000, while the median amount in a traditional IRA without rollover assets was $30,000.

Traditional and Roth IRA ownership is associated with higher education and income levels. In 2004, 59 percent of IRA households were headed by an individual with a college degree, and only about 3 percent were headed by an individual with no high school diploma. Over one-third of these IRA households earned $100,000 or more, and less than 2 percent earned less than $10,000. Households with IRAs also tend to own their homes. Research shows that higher levels of education and household income correlate with a greater propensity to save. Therefore, it is not surprising that IRA ownership increases as education and income levels increase. Lastly, IRA ownership is highest among households headed by individuals aged 45 to 54. More households own traditional IRAs, which were the first IRAs established, than Roth IRAs or employer-sponsored IRAs. In 2007, nearly 33 percent of all households owned traditional IRAs, and about 15 percent owned Roth IRAs. In contrast, about 8 percent of households participated in employer-sponsored IRAs.

The percentage of households that own IRAs is similar to the percentage that own 401(k)s, but IRA contributions are less than 401(k) contributions. In 2004, 29 percent of households owned individually arranged IRAs, and 26 percent participated in 401(k) plans (see fig. 3). Ten percent of households owned a traditional or Roth IRA and participated in a 401(k) plan. Although contributions to both 401(k) plans and IRAs increased from 2002 to 2004, 401(k) contributions were almost four times greater than those made to IRAs. Few studies have compared contributions by IRA owners and 401(k) participants. However, one study assessed the consistency of taxpayer annual contributions to traditional IRAs and to 401(k) plans from tax years 1999 to 2002. As shown in figure 4, the study found that only 1.4 million taxpayers contributed to their traditional IRAs in all 4 years, while nearly 16 million taxpayers contributed to their 401(k) accounts in the same time period. The study found that the low persistency in making IRA contributions may be partly attributable to limits on the tax deductions some owners could take for their contributions. Certain criteria, including age, income, tax filing status, and coverage in a work-based retirement plan, affect the tax deduction taxpayers can take for contributing to an IRA. In addition, a study by the Investment Company Institute that included data on contributions by IRA owners shows that more households with Roth IRAs or employer-sponsored IRAs contribute to their accounts than households with traditional IRAs.
For example, in 2004, more than half of households with Roth, SAR-SEP, or SIMPLE IRAs contributed to their accounts, but less than one-third of households with traditional IRAs contributed to their accounts. This, again, may be partly attributed to the emerging role of traditional IRAs as a means to preserve rollover assets more than to build retirement savings. The Investment Company Institute study also stated that the median household contribution to traditional IRAs was $2,300, compared to a median contribution to Roth IRAs of $3,000. The median contribution to SAR-SEP and SIMPLE IRAs was $5,000. The study noted that this difference may be related to the higher contribution limits for employer-sponsored IRAs than for traditional IRAs and Roth IRAs. Table 2 shows contribution limits for the current tax year.

Comprehensive comparisons between IRAs and 401(k) plans are difficult because of differences in plan structures. 401(k) plans are sponsored by employers, whereas most households with IRAs own traditional IRAs established outside of the workplace. In addition, most of the assets in IRAs are in traditional IRAs that are set up by individuals and provide individual investors with a vehicle to contribute to their own retirement savings. Employer-sponsored IRAs, such as SIMPLE and SEP, were established for small employers who lack the resources to provide a 401(k) plan. In addition, payroll-deduction IRA programs enable small employers to provide employees the opportunity to save for retirement. Key differences exist between employer-sponsored IRAs and 401(k) plans, as shown in table 3.

Several barriers may discourage small employers from offering payroll-deduction and employer-sponsored IRAs to their employees. Although employer-sponsored IRAs were designed with fewer reporting requirements to encourage small employers to offer them, few employers appear to do so. In addition, few employers appear to offer payroll-deduction IRA programs. Retirement and savings experts said payroll-deduction IRAs could help many workers save for retirement and that these IRAs may be the easiest way for small employers to offer a retirement savings opportunity to their employees. Several barriers, including costs, may discourage employers from offering them; however, information is lacking on the actual costs to employers. In addition, several experts raised questions on how expanded payroll-deduction IRAs may affect employees. Employer-sponsored IRAs offer greater savings opportunities than payroll-deduction IRAs, but employer sponsorship of IRAs may also be hindered by costs, including required employer contributions. Retirement and savings experts offered several legislative proposals to encourage employers to offer and employees to participate in IRAs, but limited government actions have been taken to increase the number of employers sponsoring employer-sponsored IRAs.

Employees of small firms are more likely to lack access to a retirement plan at work than employees of larger firms, and several barriers may keep small employers from offering payroll-deduction programs and employer-sponsored IRAs to their employees. Although IRAs have been largely successful at helping individuals preserve their retirement savings through rollovers, experts told us that IRA participation falls short of Congress' first goal for creating IRAs—to provide a tax-preferred account for workers without employer-sponsored retirement plans to save for their retirement.
For example, millions of employees of small firms lack access to a workplace retirement plan. The Congressional Research Service found that private-sector firms with fewer than 100 employees employed about 30.9 million full-time workers between the ages of 25 and 64 in 2006. About 19.9 million of those workers lacked access to an employer- sponsored retirement plan, as shown in figure 5. To address the issue of low retirement plan sponsorship among small employers, Congress created SEP and SIMPLE employer-sponsored IRAs, and has encouraged employers not offering a retirement plan to offer payroll-deduction IRAs. These IRAs were designed to have fewer and less burdensome reporting requirements than 401(k) plans to encourage participation, and payroll-deduction IRA programs do not have any employer reporting requirements. Payroll-deduction and employer- sponsored IRAs offer several advantages, as shown in table 4. Labor issued a regulation under which an employer could maintain a payroll deduction program for employees to contribute to their IRAs without being considered a pension plan under ERISA. Through payroll- deduction IRAs, an employer withholds and forwards an amount determined by the employee directly to an IRA (traditional or Roth) established by the employee. Although any employer can provide payroll- deduction IRAs to their employees, regardless of whether or not they offer another retirement plan, retirement and savings experts told us that very few employers offer their employees the opportunity to contribute to IRAs through payroll deduction. Further, Labor and IRS officials told us that data is limited on how many employers offer payroll-deduction IRAs. Because there are no reporting requirements for payroll-deduction IRAs, and very limited reporting requirements for employer-sponsored IRAs—as discussed later in this report—we were unable to determine exactly how many employers offer these IRAs to their employees. For example, because an employer’s responsibility with payroll-deduction IRAs is to forward employee contributions to IRAs, employers are not required to report to the federal government that they are providing this service to employees. Consequently, neither Labor nor IRS is able to determine how many employers offer payroll-deduction IRAs. Employee access to SIMPLE and SEP IRAs also appears limited. SIMPLE IRAs are only available to firms with 100 employees or fewer who do not already offer another retirement plan; and SEP IRAs are available to employers of any size, including those who may offer either a DC or DB plan. The Bureau of Labor Statistics reported that, in 2005, 8 percent of private sector workers in firms with fewer than 100 employees participated in a SIMPLE IRA, and 2 percent of workers participated in a SEP IRA. An IRS evaluation of employer-filed W-2 forms estimated that in 2004, 190,000 employers sponsored SIMPLE IRAs. However, officials told us that this figure was likely understated, as it does not include accounts that may be owned by sole proprietors or individuals who own unincorporated businesses by themselves, who are not required to file W-2 forms. GAO was unable to determine the number of employers sponsoring SEP plans, but IRS data from 2002 show more taxpayers owned SEP than SIMPLE IRAs, with 3.5 million SEP accounts compared to 2 million SIMPLE accounts. Retirement and savings experts reported that increased worker access to payroll-deduction IRAs could help many workers to save for retirement at work. 
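To make the withhold-and-forward mechanics just described concrete, the following Python sketch models a payroll-deduction IRA program under stated assumptions: the employee name, election amount, and pay schedule are hypothetical, and real programs must also respect annual IRA contribution limits, which the sketch omits.

```python
# A minimal sketch of payroll-deduction IRA mechanics: the employer's only
# role is to withhold an employee-elected amount each pay period and forward
# it to the employee's own IRA. All names and amounts are hypothetical, and
# annual IRA contribution limits are not enforced here.

from dataclasses import dataclass

@dataclass
class Election:
    employee: str
    per_period_amount: float  # employee-chosen deduction from each paycheck

def run_payroll_year(elections: list[Election], pay_periods: int) -> dict[str, float]:
    """Accumulate the contributions forwarded to each employee's IRA for the
    year; the employer makes no contribution of its own."""
    forwarded: dict[str, float] = {}
    for _ in range(pay_periods):
        for e in elections:
            forwarded[e.employee] = forwarded.get(e.employee, 0.0) + e.per_period_amount
    return forwarded

# Biweekly payroll (26 periods): a $75-per-paycheck election yields
# $1,950 forwarded to the worker's IRA for the year.
print(run_payroll_year([Election("worker_a", 75.0)], 26))  # {'worker_a': 1950.0}
```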
Through payroll-deduction IRA programs, employees may contribute to either traditional or Roth IRAs, depending on the eligibility requirements of these plans. Any individual under the age of 70½ with taxable compensation may contribute to a traditional IRA, and many individuals could receive a tax deduction for their contribution. Most low- and middle-income individuals are eligible to contribute to Roth IRAs. In theory, all of the estimated 20 million employees of small firms mentioned previously who lacked an employer-sponsored retirement plan in 2006 could be eligible to contribute to a traditional IRA through payroll deduction, and many of these individuals would be eligible to claim a tax deduction for their contribution.

According to Labor's guidance on payroll-deduction IRAs and several experts we interviewed, individuals are more likely to save in IRAs through payroll deductions than they are without them. Payroll deductions are a key feature in 401(k) and other DC plans. Economics literature that we reviewed identifies payroll deduction as a key factor in the success of 401(k) plans, and participation in these plans is much higher than in IRAs, which do not typically use payroll deduction. According to the Congressional Budget Office, in 2003, 29 percent of all workers contributed to a DC plan, while only 7 percent of all workers contributed to an IRA. Several papers in the recent economics literature that we reviewed point to the importance of employment-based defaults, employer endorsements, and advice from peers as factors that may influence an employee's decision to participate in a retirement plan. The influential role that employers may have in an employee's decision to participate in a workplace plan may encourage some employees to also participate in payroll-deduction IRAs.

Payroll deduction facilitates retirement savings by addressing the key behavioral barriers of procrastination and inertia, or a lack of action, according to economics literature that we reviewed and experts we interviewed. Although many individuals intend to save for retirement, some may procrastinate because retirement is seen as a remote event and more immediate expenses take precedence. Some individuals also experience inertia because they lack knowledge on how to save or have difficulty making decisions among a number of complex options. Literature that we reviewed states that payroll deduction gives employees a "commitment device" to help them automatically contribute to retirement before wages are spent, relieving them of making ongoing decisions to save.

Retirement and savings experts and representatives of small business and consumer groups told us payroll-deduction IRAs are the easiest way for small employers to offer their employees a retirement savings vehicle. According to Labor publications and experts, payroll-deduction IRAs provide employers with a low-cost retirement benefit for their employees, because these IRAs do not permit employer contributions. Payroll-deduction IRAs also have fewer requirements for employee communication than SIMPLE and SEP IRAs, and employers are not subject to ERISA fiduciary responsibilities so long as they meet the conditions in Labor's regulation and guidance for managing these plans. Finally, payroll-deduction IRAs allow employers to select a single IRA provider to service the accounts to keep administrative costs down and simplify the process for employees.
Despite these advantages, payroll-deduction IRAs may present several limitations that discourage employers from offering a payroll-deduction IRA program, including: (1) costs to small employers for setting up payroll deductions, (2) lack of flexibility to promote payroll-deduction IRAs to employees, (3) lack of incentives to employers, and (4) lack of awareness about how these IRAs work.

Costs to employers. Additional administrative costs associated with setting up and managing payroll-deduction IRAs may be a barrier for small employers, particularly for those without electronic payroll processing. According to Labor, costs to employers are significantly influenced by the number of IRA providers an employer must remit contributions to on behalf of employees. Accordingly, Labor's guidance allows employers to select a single IRA provider for all employees. Also, under Labor's guidance, an IRA sponsor may reimburse the employer for the actual costs of operating a payroll-deduction IRA as long as such costs do not include profit to the employer. Small business groups told us that costs could also be influenced by the number of employees participating in the program and whether an employer has a payroll processing system in place to make automatic deductions and direct deposits to employee accounts. Several experts told us that many small employers lack electronic, or automatic, payroll systems, and these employers would be subject to higher management costs for offering payroll-deduction IRAs. Moreover, representatives from small business groups and other experts told us that providing health care insurance is a more pressing issue to many small employers than providing a retirement savings opportunity.

Although experts reported that payroll-deduction IRAs impose costs on employers, we found that opinions on the significance of those costs varied. Experts advocating for expanded payroll-deduction IRAs reported that most employers would incur little to no cost, since most employers already make payroll deductions for Social Security and Medicare, as well as federal, state, and local taxes. According to these experts, payroll-deduction IRAs function similarly to existing payroll tax withholdings, and adding another deduction would not be a substantial burden. However, other experts reported that costs to employers may be significant. One report indicated that costs to employers for managing payroll-deduction IRAs were substantial, particularly for employers without electronic payrolls; however, the study did not estimate what the actual costs to employers may be on a per-account basis. In our review, we were unable to identify reliable government data on actual costs to small employers.

Flexibility to promote payroll-deduction IRAs. According to IRA providers, some employers are hesitant to offer a payroll-deduction IRA program because they find that Labor's guidance limits their ability to effectively publicize the availability of payroll-deduction IRAs to employees, for fear of becoming subject to ERISA requirements. Labor officials told us they issued this guidance to make it easier for employers to understand the guidelines to follow in order to maintain the safe harbor that applies to payroll-deduction IRAs. This guidance explains the conditions under which employers can offer payroll-deduction IRAs and not be subject to the ERISA reporting and fiduciary responsibilities, which apply to employer retirement plans, like 401(k) plans.
Labor officials said they have not received any feedback from employers or IRA providers on the clarity of the guidance since it was issued in 1999. However, at the time the guidance was issued, some employers had indicated to Labor that they were hesitant to offer payroll-deduction IRAs due to ERISA fiduciary responsibilities. IRA providers told us that employers need greater flexibility in Labor's guidance to promote payroll-deduction IRAs and to convey a greater sense of urgency to employees about saving for retirement. However, Labor told us that it has received no input from IRA providers as to what that flexibility would consist of, and Labor officials note that Interpretive Bulletin 99-1 specifically provides for flexibility.

Lack of savings incentives for small employers. Small business member organizations and IRA providers said that the contribution limits for payroll-deduction IRAs do not offer adequate savings incentives to justify the effort to offer these IRAs. Because the contribution limits for these IRAs are significantly lower than those that apply to SIMPLE and SEP IRAs, employers seeking to provide a retirement plan to their employees would be more likely to choose other options, which allow business owners to contribute significantly more to their own retirement than payroll-deduction IRAs allow.

Lack of awareness. One reason payroll-deduction IRA programs have not been widely adopted by employers may be a lack of awareness about how payroll-deduction and other IRAs work. Representatives from small business groups said many small employers are unaware that payroll-deduction IRAs are available or that employer contributions are not required. However, Labor has produced educational materials describing the payroll-deduction and employer-sponsored IRA options available to employers and employees, and one Labor official told us that Labor has received positive feedback from small businesses on these efforts. IRA providers told us they experience challenges in marketing IRAs because varying eligibility requirements make it difficult to communicate IRA benefits to a mass market. Instead, providers said it is more efficient to market IRAs to current customers and focus advertising budgets on capturing rollover IRAs.

Some experts questioned whether increased worker access to payroll-deduction IRA programs will in fact lead to increased participation and retirement savings for many workers. For example, IRA providers and experts expressed concerns that low- and moderate-income workers may choose not to participate in payroll-deduction IRAs because they lack discretionary income. Many low- and moderate-income workers are already eligible to contribute to IRAs, but have chosen not to do so because they lack sufficient income to save for retirement. Experts raised doubts that payroll-deduction IRA programs would lead to adequate retirement savings, as low-income individuals would be unable to contribute to these IRAs consistently. Further, experts said that individuals with low-balance IRAs would be inclined to make early withdrawals and be subject to additional income taxes. Experts also reported that because the incentives for tax-deferred IRA contributions are based on marginal tax rates, lower-income individuals receive a lower immediate tax subsidy than higher-income individuals.
Two experts told us that policymakers should begin their evaluation of payroll-deduction IRAs by calculating how much savings is required for an adequate standard of living in retirement, and then determining what role payroll-deduction IRAs could play in reaching that level.

We found that employer-sponsored SEP and SIMPLE IRAs can help small employers and their workers save for retirement, but several factors may discourage small employers from offering these IRAs to their employees. Experts said the higher contribution limits and flexible employer contribution options of SEP and SIMPLE IRAs offer greater savings benefits to employers and employees than payroll-deduction IRAs. For example, the 2007 SIMPLE contribution limit of $10,500 per year for individuals under age 50 is more than twice the amount allowed in 2007 in payroll-deduction IRAs. In 2007, SEP IRAs allowed employers to contribute the lesser of 25 percent of an employee's compensation or $45,000. Moreover, because SIMPLE IRAs require employers to match the contributions of participating employees or to make "nonelective" contributions to all employee accounts, employees are able to save significantly more per year in SIMPLE accounts than they are in payroll-deduction IRAs. Under SEP rules, employers must set up SEP IRAs for all employees working for them in at least 3 of the past 5 years who have reached age 21 and received at least $500 in compensation in 2007, and employees may not contribute to their own accounts. Annual employer contributions are not mandatory; however, if an employer decides to contribute, it must make contributions to the SEP IRAs of all employees performing services in that year. Because annual contributions are not mandatory for SEP IRAs, employers have the flexibility to adjust contributions depending on business revenues. Employers offering SIMPLE IRAs must either make a nonelective contribution of 2 percent of each eligible employee's compensation or match at least 1 percent of compensation for those employees who choose to contribute to their accounts.

Certain factors may limit employer sponsorship of SIMPLE and SEP IRAs. Small business groups told us that the costs of managing SEP and SIMPLE IRAs may be prohibitive for small employers. Experts also pointed out that contribution requirements for SIMPLE and SEP plans may, in some cases, limit employer sponsorship of these plans. For example, because SIMPLE IRAs require employers to make contributions to employee accounts, some small firms may be unable to commit to these IRAs. Small business groups and IRA providers told us that small business revenues are inconsistent and may fluctuate greatly from year to year, making required contributions difficult for some firms. In addition, employers offering SIMPLE IRAs must determine before the beginning of the calendar year whether they will match employee contributions or make nonelective contributions to all employees' accounts. According to IRA providers, this requirement may discourage some small employers from offering these IRAs, and if employers had the flexibility to make additional contributions to employee accounts at the end of the year, they might be encouraged to contribute more.
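The SEP and SIMPLE employer-contribution rules described above are mechanical enough that a short sketch may help. The Python below illustrates the 2007 figures cited in this report (the 25 percent/$45,000 SEP cap and the SIMPLE 2 percent nonelective or 1 to 3 percent matching contribution); the function names and example salaries are hypothetical, and real plans involve additional rules the sketch omits.

```python
# A hedged sketch of the 2007 employer-contribution rules described above.
# The 25 percent/$45,000 SEP cap and the SIMPLE 2 percent nonelective or
# 1-3 percent matching contributions come from the report; everything else
# (names, salaries) is hypothetical, and further statutory rules are omitted.

SEP_CAP_2007 = 45_000.0

def sep_max_employer_contribution(compensation: float) -> float:
    """SEP IRA: the employer may contribute the lesser of 25 percent of an
    employee's compensation or the $45,000 cap (2007)."""
    return min(0.25 * compensation, SEP_CAP_2007)

def simple_employer_contribution(compensation: float,
                                 employee_contribution: float,
                                 nonelective: bool,
                                 match_rate: float = 0.03) -> float:
    """SIMPLE IRA: the employer either makes a 2 percent nonelective
    contribution for every eligible employee or matches contributing
    employees' deferrals up to match_rate (1 to 3 percent) of compensation."""
    if nonelective:
        return 0.02 * compensation
    return min(employee_contribution, match_rate * compensation)

print(sep_max_employer_contribution(200_000))              # 45000.0 (cap binds)
print(simple_employer_contribution(40_000, 2_000, False))  # 1200.0 (3 percent match)
print(simple_employer_contribution(40_000, 0, True))       # 800.0 (2 percent nonelective)
```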
With regard to SEP IRAs, two experts said small firms may be discouraged from offering these plans because of the requirement that employers set up a SEP IRA for all employees performing service for the company in 3 of the past 5 years and with at least $500 in compensation for 2007. These experts stated that small firms are likely to hire either seasonal employees or interns who may earn more than $500, and these employers may have difficulty finding an IRA provider willing to open the small accounts these temporary or low-earning participants would hold.

Retirement and savings experts reported that several legislative proposals could encourage employers to offer and employees to participate in IRAs. While several bills have been introduced in Congress to expand worker access to payroll-deduction IRAs, limited government actions have been taken to increase the number of employers sponsoring employer-sponsored IRAs.

Employer incentives to offer IRAs. Several retirement and savings experts said additional incentives should be in place to increase employer sponsorship of IRAs. For example, experts suggested that tax credits should be made available to defray start-up costs for small employers offering payroll-deduction IRAs, particularly for those without electronic or automatic payroll systems. These credits should be lower than the credits available to employers for starting SIMPLE, SEP, and 401(k) plans to avoid competition with those plans, these experts said. IRA providers and small business groups said increasing contribution limits for SIMPLE IRAs to levels closer to those for 401(k) plans would encourage more employers to offer these plans. Other experts said doing so could provide incentives to employers already offering 401(k) plans to switch to SIMPLE IRAs, which have fewer reporting requirements.

Employee incentives to participate in IRAs. Experts offered several proposals to encourage workers to participate in IRAs, including: (1) expanding existing tax credits for moderate- and low-income workers, (2) offering automatic enrollment in payroll-deduction IRAs, and (3) increasing public awareness about the importance of saving for retirement and how to do so. Several experts said expanding the scope of the Retirement Savings Contribution Credit, commonly known as the saver's credit, could encourage IRA participation among workers who are not covered by an employer-sponsored retirement plan. They said expanding the saver's credit to include more middle-income earners and making the credit refundable—available to tax filers even if they do not owe income tax—could encourage more moderate- and low-income individuals to participate in IRAs. However, an expanded and refundable tax credit would have revenue implications for the federal budget. Other experts told us that automatically enrolling workers into payroll-deduction and SIMPLE IRAs could increase employee participation; however, small business groups and IRA providers said that mandatory automatic enrollment could be burdensome to small employers. In addition, given the lack of available income for some, several experts told us that low-income workers may opt out of automatic enrollment programs or be more inclined to make early withdrawals, which can result in additional income taxes. Experts also said increasing public awareness of the importance of saving for retirement and educating individuals how to do so could increase IRA participation.
Several experts reported that the growth of DC plans and IRAs has resulted in individuals bearing greater responsibility for their own retirement, and that earlier and more frequent information about retirement savings could encourage IRA participation.

IRS and Labor share oversight for all types of IRAs, but Labor lacks a process to monitor all IRAs, and data gaps exist. IRS is responsible for tax rules on establishing and maintaining IRAs, while Labor is responsible for oversight of fiduciary standards for employer-sponsored IRAs. Payroll-deduction IRAs are not under Labor's jurisdiction; however, Labor does provide guidance to help ensure that such a retirement program is not subject to the Title I requirements of ERISA. Reporting requirements for employer-sponsored IRAs are limited. Under Title I, there is no reporting requirement for SIMPLE IRAs, and an alternative method is available for reporting on employer-sponsored SEP IRAs. Labor does not have processes in place to identify all employers offering IRAs, the number of employees participating, or employers not in compliance with the law. Obtaining information about employer-sponsored and payroll-deduction IRAs is also important to determine whether these vehicles help workers without pensions and 401(k) plans build retirement savings. Although IRS publishes some IRA data, it has not consistently produced IRA reports.

IRS and Labor share responsibility for overseeing IRAs. IRS has primary responsibility for tax rules governing how to establish and maintain an IRA, as shown in figure 6. Labor has sole responsibility for overseeing ERISA's fiduciary standards for employer-sponsored IRAs. Fiduciaries have an obligation, among others, to make timely contributions to fund benefits. When contributions are delinquent for those IRAs subject to Labor's jurisdiction, Labor investigates and takes action to ensure that contributions are restored to the plan. Labor also issues guidance related to payroll-deduction IRAs. In 1999, Labor issued an interpretive bulletin that consolidated Labor regulations and various advisory opinions on payroll-deduction programs for IRAs into one set of guidance. Specifically, the bulletin sets out Labor's safe harbor under which an employer may establish a payroll-deduction IRA program without inadvertently establishing an employee benefit plan subject to all of the ERISA requirements.

Labor and IRS also work together to oversee IRA prohibited transactions; generally, Labor has interpretive jurisdiction and IRS has certain enforcement authority. Both ERISA and the Internal Revenue Code contain various statutory exemptions from the prohibited transaction rules, and Labor has authority to establish exemption procedures and to grant administrative exemptions, on a class or individual basis, for a wide variety of proposed transactions with a plan. IRS is responsible for imposing an excise tax on parties that engage in a prohibited transaction.

Reporting requirements for employer-sponsored IRAs are limited. Currently, the financial institution or trustee handling the employer-sponsored IRA provides the IRS and participants with annual statements containing contribution and fair market value information on IRS Form 5498, IRA Contribution Information, as shown in figure 7.
Distributions from that same plan are reported by the financial institution making the distribution to both IRS and the recipients of the distributions on IRS Form 1099-R, Distributions From Pensions, Annuities, Retirement or Profit-Sharing Plans, IRAs, Insurance Contracts, etc., as shown in figure 8. Information on retirement plans is also reported annually by employers and others to IRS on Form W-2, which contains the amounts deducted from wages for contributions to pension plans, as well as codes that provide more detail on the kinds of plans to which the contributions were made, such as employer-sponsored IRAs, as shown in figure 9. Employers who offer payroll-deduction IRAs have no reporting requirements, and consequently, there is no reporting mechanism that captures how many employers offer payroll-deduction IRAs. Although IRS receives information reports for all traditional and Roth IRAs, those data do not show how many of those IRAs were funded through payroll-deduction IRA programs.

In our discussions with Labor and IRS officials, they explained that the limited reporting requirements for employer-sponsored IRAs were put in place to try to encourage small employers to offer their employees retirement plan coverage by reducing their administrative and financial burdens. According to Labor officials, IRS does not share the information it receives with Labor because it is confidential tax information. IRS clarified that it does not share tax information involving specific employers or employees with Labor because it is confidential. Consequently, Labor does not have information on employer-sponsored IRAs. Labor also does not receive information, such as annual financial reports, from such employers, as it does from private pension plan sponsors. For example, pension plan sponsors must file Form 5500 reports with Labor on an annual basis, which provides Labor with valuable information about the financial health and operation of private pension plans. Labor's Bureau of Labor Statistics (BLS) National Compensation Survey surveys employee benefit plans in private establishments, collecting information on access, participation, and take-up rates for DB and DC plans. The BLS survey, however, collects less information on employer-sponsored IRAs.

Given the limited reporting requirements for employer-sponsored IRAs and the absence of requirements for payroll-deduction IRAs, as well as Labor's role in overseeing these IRAs, a minimum level of oversight is important to ensure that employers are acting in accordance with the law. Yet Labor officials said that they are unable to monitor (1) whether all employers are in compliance with the prohibited transaction rules and fiduciary standards, such as by making timely and complete employer-sponsored IRA contributions or by not engaging in self-dealing; and (2) whether all employers who offer a payroll-deduction IRA are meeting the conditions of Labor's guidance.

Employer-sponsored IRAs: Labor officials said that they do not have a process for actively seeking out and determining whether sponsors of employer-sponsored IRAs are engaging in prohibited transactions or failing to abide by their fiduciary responsibilities, such as by having delinquent or unremitted employer-sponsored IRA contributions. Instead, as in the case of Labor's oversight of pension plans, Labor primarily relies on participant complaints as sources of investigative leads to detect employers that are not making the required contributions to their employer-sponsored IRAs.
For example, according to Labor officials, about 90 percent of its IRA investigations were the result of participant complaints. However, while Labor has other processes in place for private pension plan oversight, such as computer searches and targeting to identify ERISA violations, it does not have comparable processes for generating IRA investigation leads. Compared with its oversight of pension plans, then, Labor is at greater risk of being unable to ensure that all IRA sponsors are in compliance with the laws designed to protect individuals' retirement savings.

Payroll-deduction IRAs: Through payroll-deduction IRAs, employees may establish either traditional or Roth IRAs, and employees may contribute to these accounts through voluntary deductions from their pay, which are forwarded by the employer to the employee's IRA. As long as employers meet the conditions in Labor's regulation and guidance, employers are not subject to the fiduciary requirements in ERISA Title I that apply to employer-sponsored retirement plans, such as 401(k) plans. According to Labor officials, if they become aware of an employer operating a payroll-deduction IRA that may not be following agency guidance, Labor will conduct an investigation to determine if the IRA should be treated as an ERISA pension plan. The IRA may then become subject to the requirements of Title I of ERISA, which include filing a detailed annual report (Form 5500) with Labor. Labor officials said this was done in an effort to ensure that plans are being operated and maintained in the best interest of plan participants. Labor officials told us that they are not aware of employers improperly relying on the safe harbor regarding payroll-deduction IRAs. However, without a process to monitor payroll-deduction IRAs, Labor cannot be certain of the extent or nature of employer activities that may fall outside its guidance. For example, Labor does not know the extent to which employers are sending employee contributions to IRA providers, exercising any influence over the investments made or permitted by the IRA provider, or receiving any compensation in connection with the IRA program except reimbursement for the actual cost of forwarding the payroll deduction. In addition, Labor does not have information on the number of employers that are operating payroll-deduction IRAs.

Ensuring that regulators obtain information about employer-sponsored and payroll-deduction IRAs is one way to help them and others determine the status of these IRAs and whether individuals who lack employer-sponsored pension plans are able to build retirement savings through them. However, key information on IRAs is currently not reported, such as information that identifies employers offering payroll-deduction IRAs, the distribution by employer of the number of employees that contribute to payroll-deduction IRAs, and the distribution by employer of the type of payroll-deduction IRA account offered (traditional or Roth) and the total employee contributions to these accounts. Experts that we interviewed said that, without information on the distribution by employer of the type of payroll-deduction IRA offered and the total employee contributions to these accounts, they are unable to determine how many employers and employees participate in payroll-deduction IRAs or the extent to which these IRAs have contributed to the retirement savings of their participants.
In addition, the limited reporting requirements prevent information from being obtained about the universe of employers that offer employer-sponsored and payroll-deduction IRAs. Also, without information on the distribution by employer of the type of payroll-deduction IRA offered and the total employee contributions to these accounts, it is difficult to determine the extent to which payroll-deduction IRAs are being used and to identify ways to increase retirement savings for workers not covered by an employer-sponsored pension plan. This information can be useful when determining policy options to increase IRA participation among uncovered workers because it provides a strong foundation for assessing the current extent to which these IRAs are being utilized and information about the people who are participating in these plans.

Although IRS does publish some of the information it receives on IRAs through its Statistics of Income (SOI) program, IRS does not produce IRA reports on a consistent annual basis. IRS officials told us that they are currently facing three major challenges that affect their ability to publish IRA information on a more consistent basis. First, IRS relies, in part, on information returns to collect data on IRAs, and these returns are not due until the year after the related tax return is filed. IRS officials said that these returns have numerous errors, making it difficult and time-consuming for IRS to edit them for statistical analysis. They also said that the IRA rules, and changes to those rules, are difficult for some taxpayers, employers, and trustees to understand, which contributes to filing errors. Second, IRS's reporting of IRA data is not a systematic process. In the past, the production of IRS reports on IRAs was done on an ad hoc basis. IRS officials told us that they recognize this problem and are in the early stages of determining ways to correct it. Third, in the past, one particular IRS employee, who has recently retired, took the lead in developing statistical analyses of IRAs. Because IRS did not have a process in place to train another employee to take over this role, a knowledge gap was created that IRS is trying to fill.

Labor officials and retirement and savings experts told us that, without consistent reporting of IRA information by IRS, they rely on studies by financial institutions and industry associations for research purposes, which include assessing the current state of IRAs and future trends. These experts said that although these studies are helpful, some may double-count individuals because one person may have more than one IRA at different financial institutions. They also said that more consistent reporting of IRA information could help them ensure that their analyses reflect current and accurate information about retirement assets, such as the fair market value of IRAs. Since IRS is the only agency that has data on all IRA participants, consistent reporting of these data could give policymakers and others a comprehensive look at the IRA landscape.

Thirty years ago, when Congress created IRAs, these accounts were designed, in part, to help workers who do not have pensions or 401(k) plans save for their retirement. Currently, IRAs play a major role in preserving retirement assets but a very small role in creating them. Although studies show that individuals find it difficult to save for retirement on their own, millions of U.S. workers have no retirement savings plan at work.
Employer-sponsored and payroll-deduction IRAs afford an easier way for workers, particularly those who work for small employers, to save for retirement. They also offer employers less burdensome reporting and legal responsibilities than defined benefit pension plans and defined contribution plans, such as 401(k) plans. Yet encouraging employers to offer IRAs to their employees will not be productive if Congress and regulators do not make sure that there is also adequate information and improved oversight of employer-sponsored and payroll-deduction IRAs. Given that limited reporting requirements for employer-sponsored IRAs and the absence of reporting requirements for payroll-deduction IRAs were meant to encourage small employers to offer retirement plans to employees, providing more complete and consistent data on IRAs would help ensure that regulators have the information they need to make informed decisions about how to increase coverage and facilitate retirement savings. Currently, IRS collects information on employer-sponsored IRAs that it does not share with Labor because it is confidential tax information, but IRS does report summary information on employer-sponsored IRAs that could be useful for Labor to have on a consistent basis. Without IRS sharing such information, data on IRAs will continue to be collected on an episodic basis, and mapping the universe of IRAs, especially employer-sponsored IRAs, will continue to be difficult.

Steps must be taken to improve oversight of payroll-deduction IRAs and to determine whether direct oversight is needed. Currently, neither Labor nor IRS is able to determine how many employers are offering their employees the opportunity to contribute to traditional or Roth IRAs through payroll-deduction IRA programs, and Labor has no process in place—nor responsibility—to monitor employers offering payroll-deduction IRAs. Consequently, Labor lacks key information on employers who offer payroll-deduction IRAs: it is unable to determine the universe of employers offering these IRAs, the prevalence and nature of activities that fall outside Labor's safe harbor, and the impact on employees. Without information on the number of employers offering these IRAs to employees, and the number of employees participating in these programs, neither Labor nor IRS is able to determine the effectiveness of payroll-deduction IRAs in facilitating retirement savings for workers lacking an employer-sponsored pension. Moreover, given that payroll-deduction IRAs currently lack direct oversight, it is important to decide whether such oversight is needed. Without direct oversight, employees may lack confidence that payroll-deduction IRAs will provide them with adequate protections and may therefore be reluctant to participate in these programs, which is particularly important given the current focus in Congress on expanding payroll-deduction IRAs. However, any direct oversight of payroll-deduction IRAs should be done in a way that does not pose an undue burden on employers or their employees. Although the limited reporting requirements for employer-sponsored IRAs and the absence of reporting requirements for payroll-deduction IRAs were meant to encourage small employers to offer retirement savings vehicles to employees, there is also a need for those responsible for overseeing retirement savings vehicles to have the information necessary to do so.
This will help ensure that there is a structure in place to help protect individuals' retirement savings if they choose either employer-sponsored or payroll-deduction IRAs. If current oversight vulnerabilities are not addressed, future problems could emerge as more employers and workers participate in employer-sponsored and payroll-deduction IRAs. However, any improvements to plan oversight and data collection should be done in a way that does not pose an undue burden on employers or their employees. Given the absence of direct oversight of payroll-deduction IRAs, Congress may wish to consider whether payroll-deduction IRAs should have some direct oversight.

We recommend that the Secretary of Labor take the following three actions:

1. To increase retirement plan coverage for the millions of workers not covered by an employer-sponsored pension plan, and given the possibility that payroll-deduction IRAs can help bridge the coverage gap, examine ways to better encourage employers to offer and employees to participate in these IRAs. This examination could include: examining and determining the financial and administrative costs to employers for establishing payroll-deduction IRA programs, especially for those employers that do not have an automatic payroll system in place; developing policy options to help employers defray the costs associated with establishing payroll-deduction IRA programs, while taking into consideration the potential costs to taxpayers and small employers; and evaluating whether modifications or clarifications to its guidance on payroll-deduction IRAs are needed to encourage employers to establish payroll-deduction IRA programs.

2. To improve the federal government's ability to regulate employer-sponsored and payroll-deduction IRAs and protect plan participants, evaluate ways to determine whether employers who establish employer-sponsored IRAs and offer payroll-deduction IRAs are in compliance with the law and the safe harbor provided under Labor's regulations and Interpretive Bulletin 99-1, while taking employer burden into account.

3. To improve the federal government's ability to better assess ways to improve retirement plan coverage for workers who do not have access to an employer-sponsored retirement plan, and to provide Congress, federal agencies, and the public with more usable and relevant information on all IRAs, evaluate ways to collect additional information on employer-sponsored and payroll-deduction IRAs, such as adding questions to the Bureau of Labor Statistics National Compensation Survey that provide information sufficient to identify employers that offer payroll-deduction and employer-sponsored IRAs and the distribution by employer of the number of employees that contribute to payroll-deduction and employer-sponsored IRAs.

We also recommend that the Commissioner of the Internal Revenue Service take the following two actions:

1. To supplement information Labor would receive through the Bureau of Labor Statistics National Compensation Survey, provide Labor with summary information on IRAs and information collected on employers that sponsor IRAs.

2. Considering the need for federal agencies, Congress, and the public to have access to timely and useful information on IRAs, release its reports on IRA contributions, accumulations, and distributions on a consistent basis, such as annually.

We provided a draft of this report to the Secretary of Labor, the Secretary of the Treasury, and the Commissioner of Internal Revenue.
We obtained written comments from the Assistant Secretary of Labor and from the Commissioner of Internal Revenue, which are reproduced in appendixes II and III. Both agencies neither agreed nor disagreed with our recommendations, and each provided more information about its current activities. Treasury and both EBSA and BLS within Labor provided technical comments, which were incorporated in the report where appropriate. Labor clearly stated in its comments that payroll-deduction IRAs are not under Labor's jurisdiction. We agree with Labor and have revised our report to reflect Labor's authority. As stated in our report, Labor does provide guidance to help ensure that payroll-deduction programs are not subject to the Title I requirements of ERISA. In addition, we described in our report that IRS's responsibility over IRAs is to provide tax rules governing how to establish and maintain an IRA. As previously described in the report, several bills have been introduced in Congress to expand worker access to payroll-deduction IRAs. However, without direct oversight of payroll-deduction IRAs, employees may lack confidence that payroll-deduction IRAs will provide them with adequate protections if they participate in such programs, which is particularly important given the increasing role that IRAs have in retirement savings. Given that Labor and IRS do not have direct oversight over payroll-deduction IRAs, we added the matter for congressional consideration to the report suggesting that Congress may wish to consider whether payroll-deduction IRAs should have some direct oversight.

In response to our first recommendation that Labor should examine and determine the financial and administrative costs to employers for establishing payroll-deduction IRA programs for their employees, Labor neither agreed nor disagreed with the recommendation and stated that payroll-deduction IRAs are not under its jurisdiction. However, as a part of its broad program of research, Labor studies costs and expenses related to retirement programs, and it said it will consider GAO's recommendation in developing its research agenda. Labor also stated that Interpretive Bulletin 99-1 addresses the costs related to payroll-deduction IRA programs: the bulletin states that employers may select one IRA sponsor to receive payroll contributions to keep administrative costs down, and that employers can receive payments from an IRA sponsor to cover the actual costs of operating the IRA payroll-deduction program. Even though Labor's Interpretive Bulletin addresses some costs related to payroll-deduction programs, because we do not know the actual costs of managing a payroll-deduction IRA program, it is difficult to determine if these remedies are sufficient. For example, if the actual costs of maintaining such a program are minimal—as some experts have suggested—limiting employees to one IRA provider may unnecessarily discourage some employees from participating in the program. On the other hand, if the costs of managing these programs are significant—as other experts have suggested—this allowance may be insufficient to encourage employers to offer a payroll-deduction IRA program. Labor also noted that Interpretive Bulletin 99-1 indicates that employers can receive payments from an IRA sponsor to cover the actual costs of operating the IRA payroll-deduction program.
However, employers may not receive any consideration beyond "reasonable compensation for services actually rendered in connection with payroll deductions." Without an accurate assessment of the actual costs to employers of operating these programs, Labor may be unable to readily determine whether such programs fall outside the safe harbor and may be considered to have become ERISA Title I programs. Furthermore, without accurate cost estimates and a determination of what constitutes "reasonable compensation" to employers, employers may be reluctant to seek compensation from IRA service providers to defray the costs of operating a payroll-deduction IRA program.

In response to our recommendation that Labor should develop policy options to help employers defray the costs associated with establishing payroll-deduction IRA programs, Labor stated that Interpretive Bulletin 99-1 advises employers on how to defray the costs of operating payroll-deduction IRA programs without subjecting the program to coverage under ERISA, but also noted that payroll-deduction IRAs operated in accordance with Interpretive Bulletin 99-1 are outside of Labor's jurisdiction. Consequently, Labor suggested that the development of additional policy options to help employers defray costs may be more properly considered by the Secretary of the Treasury. We believe some further examination by Treasury and Labor of this area would be appropriate. We believe that any policy options proposed to defray costs to employers should, in fact, be based on an accurate assessment of the actual costs to employers of managing such programs. Efforts to identify appropriate policies to defray costs would be most efficiently executed if coordinated with the process of determining the actual costs of managing payroll-deduction programs, and that responsibility may lie more with Labor. Proposals designed to defray employer costs that are not grounded in an accurate accounting of those costs risk providing either an excessive or an insufficient benefit to employers.

In response to our recommendation that Labor evaluate whether modifications or clarifications to its guidance on payroll-deduction IRAs are needed, Labor stated that the draft report does not provide specifics regarding why employers believe they cannot effectively publicize the availability of payroll-deduction IRAs and that it had not received any input from employers or IRA sponsors about being unable to effectively publicize the availability of payroll-deduction IRAs. Our report includes a discussion of the barriers identified by retirement and savings experts that may discourage employers from offering payroll-deduction IRAs to employees. IRA providers told us that Labor's guidance lacks adequate flexibility for employers to promote these IRAs to their employees without operating outside of the safe harbor and potentially becoming subject to ERISA Title I requirements. In addition, as we noted in our report, employers have indicated that they are hesitant to offer payroll-deduction IRAs due to the possibility that ERISA fiduciary responsibilities could apply.
In response to our second recommendation that Labor evaluate ways to determine whether employers who establish employer-sponsored IRAs and offer payroll-deduction IRAs are in compliance with the law, while taking employer burden into account, Labor simply described its enforcement program and its reliance on targeting, and stated that during the past three fiscal years, 170 SIMPLE IRAs and SEP plans had been investigated, with approximately $1.2 million obtained in monetary results. We acknowledge that Labor's enforcement program for employer-sponsored IRAs has led to investigations and has produced monetary results. However, as indicated in our report, Labor has primarily relied on the complaints of participants as sources for its investigations, as about 90 percent of its investigations into employer-sponsored IRAs were the result of participant complaints. In addition, our report indicates that because of the limited reporting requirements for employer-sponsored IRAs, Labor does not have specific information on employers that sponsor such IRAs, or even how many there are. Because Labor lacks such information, it is unable to target and investigate potential ERISA violations for employer-sponsored IRAs. We do not believe the information provided by Labor on its enforcement activities precludes our recommendation, and we believe our recommendation remains valid.

Regarding our third recommendation that Labor evaluate ways to collect additional information on employer-sponsored and payroll-deduction IRAs, Labor's comments focused on statutory requirements and policy considerations, and stated that any collection of information on employer-sponsored and payroll-deduction IRAs should not impose burdens on employers to report information. The intent of our recommendation was that Labor evaluate alternative, less burdensome approaches to obtaining important information, such as through the Bureau of Labor Statistics National Compensation Survey. As we noted in our report, key information on IRAs is currently not reported, and obtaining such information can help show whether employers are choosing to sponsor employer-sponsored IRAs or offer payroll-deduction IRAs, and whether individuals are able to build retirement savings through these vehicles. We do not believe the information provided by Labor makes our recommendation less important, and we believe our recommendation remains valid.

In response to our recommendation that IRS provide Labor with summary information on IRAs and information collected on employers that sponsor IRAs, and release its reports on IRA contributions, accumulations, and distributions on a consistent basis, IRS stated that it recognizes the need for federal agencies and others to have access to routine and timely information on IRAs and then listed the information it currently provides. IRS also stated that it will continue to provide data and ensure that Labor receives information on IRAs on the same day that such information is published or otherwise made available to the public. Although IRS will be providing summary information on all IRAs to Labor and for public information, we stand by our recommendation that IRS should also consider providing information to Labor and others on employers that sponsor IRAs, such as the number of employers that sponsor SEP and SIMPLE IRAs, which is currently absent from the information IRS stated it would provide to Labor.
We are sending copies of this report to the Commissioner of Internal Revenue; the Secretary of Labor; the Secretary of the Treasury; appropriate congressional committees; and other interested parties. We will also make copies available to others on request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you have any questions concerning this report, please contact me at (202) 512-7215 or bovbjergb@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made significant contributions to this report are listed in appendix IV. During our review, our objectives were to (1) compare individual retirement account (IRA) assets to assets in pension plans, (2) describe the barriers that may discourage small employers from offering employer-sponsored and payroll-deduction IRAs to their employees, and (3) describe how the Internal Revenue Service (IRS) and the Department of Labor (Labor) oversee IRAs and assess the adequacy of oversight and information of employer-sponsored and payroll-deduction IRAs. To identify how IRA assets compare to assets in pension plans and to describe the demographic characteristics of IRA owners, we reviewed reports with published data from the Federal Reserve’s Survey of Consumer Finances (SCF), Statistics of Income (SOI), and relevant industry surveys. The following is a list of the studies we reviewed: Copeland, Craig. “Individual Account Retirement Plans: An Analysis of the 2004 Survey of Consumer Finances.” Issue Brief, no. 293 (Washington, D.C., Employee Benefit Research Institute, May 2006). This report is based on analysis of data from the 2004 SCF. SCF is a triennial survey that asks extensive questions about household income and wealth components. In 2004, it sampled 4,522 households. The Employee Benefit Research Institute (EBRI) is a private nonprofit organization that conducts public policy research on economic security and employee benefits issues. Its membership includes a cross-section of pension funds, businesses, trade associations, labor unions, health care providers and insurers, government organizations, and service firms. Holden, Sara and Michael Bogdan. “The Role of IRAs in U.S. Households’ Saving for Retirement.” Research Fundamentals, vol. 17, no. 1 (Washington, D.C., Investment Company Institute, January 2008). The demographic and financial information on IRA owners comes from the May 2007 IRA Owners Survey. The 599 randomly selected respondents are representative of U.S. households owning traditional or Roth IRAs. The standard error for the total sample is ±4 percentage points at the 95 percent confidence level. The Investment Company Institute (ICI) used the American Association for Public Opinion Research #4 method to calculate its response rate and believes it achieved a response rate in line with comparable industry surveys. ICI is a national association of U.S. investment companies, including mutual funds, closed-end funds, exchange-traded funds, and unit investment trusts. Its research department collects and disseminates industry statistics, and conducts research studies relating to issues of public policy, economic and market developments, and shareholder demographics. “The U.S. Retirement Market, Second Quarter 2007.” Research Fundamentals, vol. 16, no. 3-Q2 (Washington, D.C., Investment Company Institute, December 2007).
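The ±4 percentage point sampling error ICI reports is consistent with the survey’s sample size. As a back-of-the-envelope check (our illustration, not a calculation taken from the ICI report), assuming a simple random sample and the worst-case proportion p = 0.5, the margin of error at the 95 percent confidence level for n = 599 respondents is

\[ \text{ME} = z_{0.975}\sqrt{\frac{p(1-p)}{n}} = 1.96\sqrt{\frac{0.5 \times 0.5}{599}} \approx 0.040, \]

or about ±4 percentage points; the 595-household survey described below yields an essentially identical margin.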
The information on total IRA market assets comes from tabulations of total IRA assets provided by the IRS SOI for tax years 1989, 1993, and 1996 through 2004. The tabulations are based on a sample of IRS returns. See information above for a description of ICI. Holden, Sara and Michael Bogdan. “Appendix: Additional Data on IRA Ownership in 2007.” Research Fundamentals, vol. 17, no. 1A (Washington, D.C., Investment Company Institute, January 2008). Information on the number of households owning IRAs is based on data from the U.S. Bureau of the Census Current Population Reports. See information above for a description of ICI. Sailer, Peter, Victoria L. Bryant, and Sara Holden, Internal Revenue Service, “Trends in 401(k) and IRA Contribution Activity, 1999-2002 – Results from a Panel of Matched Tax Returns and Information Documents.” (Washington, D.C., 2005). This study is based on SOI’s database of over 71,000 individual taxpayers who filed for tax years 1999 through 2002. The analysis is limited to those taxpayers who filed for all 4 years in the study. The weighted file represents 143.2 million taxpayers, or about 81 percent of the original 177 million who filed for 1999. West, Sandra and Victoria Leonard-Chambers. “The Role of IRAs in Americans’ Retirement Preparedness.” Research Fundamentals, vol. 15, no. 1 (Washington, D.C., Investment Company Institute, January 2006). The demographic and financial information on IRA owners comes from the May 2005 survey of 595 randomly selected representative U.S. households owning IRAs, including traditional IRAs, Roth IRAs, Savings Incentive Match Plans for Employees (SIMPLE), Simplified Employee Pensions (SEP), and Salary Reduction Simplified Employee Pension (SAR-SEP) IRAs. The standard error for the total sample is ±4 percentage points at the 95 percent confidence level. ICI used the American Association for Public Opinion Research #4 method to calculate its response rate and believes it achieved a response rate in line with comparable industry surveys. See information above for a description of ICI. To describe barriers that may discourage employers from offering employer-sponsored and payroll-deduction IRAs, we interviewed retirement and savings experts, including individuals representing public policy research organizations, small business member organizations, consumer and employee advocacy groups, financial industry associations, IRA service provider companies, and a pension professional member association. We also interviewed officials at Labor and IRS to gather the perspective of officials of federal agencies with responsibility for payroll-deduction and employer-sponsored IRAs. In our interviews with these experts, we gathered information on challenges that small employers face in offering IRAs to their employees and challenges that employees face in participating in IRAs. We also gathered information on existing proposals to encourage employers to offer, and employees to participate in, IRAs. In addition, we reviewed available economics literature and research conducted by federal agencies, public policy organizations, and academic researchers on the factors affecting employer sponsorship of and employee participation in IRAs and other retirement savings plans.
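The weighting figure reported for the Sailer, Bryant, and Holden study can be confirmed with simple arithmetic (our check, not a calculation from the study itself):

\[ \frac{143.2 \text{ million}}{177 \text{ million}} \approx 0.809, \quad \text{or about 81 percent.} \]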
To describe how the IRS and Labor oversee IRAs and to assess the adequacy of oversight and information on employer-sponsored and payroll-deduction IRAs, we obtained and reviewed information about Labor’s and IRS’s oversight practices and responsibilities regarding IRAs. To accomplish this, we interviewed Labor and IRS officials about the steps they take to monitor IRA plans. However, we did not assess the effectiveness of IRS and Labor compliance and enforcement efforts. We also reviewed the agencies’ statutory responsibilities in the Internal Revenue Code and the Employee Retirement Income Security Act of 1974 (ERISA) for overseeing IRAs. We analyzed Labor and IRS oversight processes to identify any gaps that may exist. We conducted this performance audit from September 2007 through May 2008 in accordance with generally accepted government auditing standards, which included an assessment of data reliability. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact above, Tamara Cross, Assistant Director; Raun Lazier; Susan Pachikara; Matt Barranca; Joseph Applebaum; Susan Aschoff; Doreen Feldman; Edward Nannenhorn; MaryLynn Sergent; Roger Thomas; Walter Vance; and Jennifer Wong made important contributions to this report.
Congress created individual retirement accounts (IRAs) with two goals: (1) to provide a retirement savings vehicle for workers without employer-sponsored retirement plans, and (2) to preserve individuals' savings in employer-sponsored retirement plans. However, questions remain about IRAs' effectiveness in facilitating new, or additional, retirement savings. GAO was asked to report on (1) how IRA assets compare to assets in other retirement plans, (2) what barriers may discourage small employers from offering IRAs to employees, and (3) the adequacy of the Internal Revenue Service's (IRS) and the Department of Labor's (Labor) oversight of and information on IRAs. GAO reviewed reports from government and financial industry sources and interviewed experts and federal agency officials. Individual retirement accounts, or IRAs, hold more assets than any other type of retirement vehicle. In 2004, IRAs held about $3.5 trillion in assets, compared to $2.6 trillion in defined contribution (DC) plans, including 401(k) plans, and $1.9 trillion in defined benefit (DB), or pension, plans. Similar percentages of households own IRAs and participate in 401(k) plans, and IRA ownership is associated with higher educational and income levels. Congress created IRAs to provide a way for individuals without employer plans to save for retirement, and to give retiring workers or those changing jobs a way to preserve retirement assets by rolling over, or transferring, plan balances into IRAs. Rollovers into IRAs significantly outpace IRA contributions and account for most assets flowing into IRAs. Given the total assets held in IRAs, they may appear to be comparable to 401(k) plans. However, 401(k) plans are employer-sponsored, while most households with IRAs own traditional IRAs established outside the workplace. Several barriers may discourage employers from establishing employer-sponsored IRAs and offering payroll-deduction IRAs to their employees. Although employer-sponsored IRAs were designed with fewer reporting requirements to encourage participation by small employers, and payroll-deduction IRAs have none, millions of employees of small firms lack access to a workplace retirement plan. Retirement and savings experts and others told GAO that barriers discouraging employers from offering these IRAs include costs that small businesses may incur for managing IRA plans, a lack of flexibility for employers seeking to promote payroll-deduction IRAs to their employees, and certain contribution requirements of some IRAs. Information is lacking, however, on what the actual costs to employers may be for providing payroll-deduction IRAs, and questions remain about the effect that expanded access to these IRAs may have on employees. Experts noted that several proposals exist to encourage employers to offer, and employees to participate in, employer-sponsored and payroll-deduction IRAs; however, limited government actions have been taken. The Internal Revenue Service and Labor share oversight for all types of IRAs, but gaps exist within Labor's area of responsibility. IRS is responsible for tax rules on establishing and maintaining IRAs, while Labor is responsible for oversight of fiduciary standards for employer-sponsored IRAs and provides certain guidance on payroll-deduction IRAs, although Labor does not have jurisdiction over them. Oversight ensures that the interests of employee participants are protected, that their retirement savings are properly handled, and that applicable guidance and laws are followed.
Because there are very limited reporting requirements for employer-sponsored IRAs and none for payroll-deduction IRAs, Labor does not have processes in place to identify all employers offering IRAs, the number of employees participating, or employers not in compliance with the law. Obtaining information about employer-sponsored and payroll-deduction IRAs is also important to determine whether these vehicles help workers without DC or DB plans build retirement savings. Although IRS collects and publishes some data on IRAs, it has not consistently produced reports on IRAs or shared such information with other agencies, such as Labor. Labor's Bureau of Labor Statistics National Compensation Survey covers employer-sponsored benefit plans but collects limited information on employer-sponsored IRAs and no information on payroll-deduction IRAs. Since IRS is the only agency that has data on all IRA participants, consistent reporting of these data could give Labor and others valuable information on IRAs.
A federal position dedicated to overseeing security at commercial airports was first established in 1990 under the Federal Aviation Administration (FAA) and was later transferred to the Transportation Security Administration (TSA). FAA established the position of Federal Security Manager pursuant to a mandate in the Aviation Security Improvement Act of 1990. Federal Security Managers, responsible for security at the nation’s largest airports, developed airport security plans in concert with airport operators and air carriers; provided regulatory oversight to ensure security measures were contained in airport plans and were properly implemented; and coordinated daily federal aviation security activities, including those with local law enforcement. According to TSA officials, regional civil aviation security field offices, headed by Civil Aviation Security Field Officers and staffed with security inspectors, had been in place at commercial airports since the mid-1970s and eventually covered the more than 440 commercial airports required to have security programs. In practice, the field office staff performed compliance and enforcement inspections and assessed penalties, while the Federal Security Managers served in a liaison and coordination role as on-site security experts. To avoid duplication of effort, Civil Aviation Security Field Officers were not assigned responsibilities at airports where Federal Security Managers were designated or stationed. In November 2001, shortly after the terrorist attacks of September 11, 2001, the President signed the Aviation and Transportation Security Act (ATSA) into law, shifting certain responsibilities for aviation security from air carriers to the federal government. Specifically, ATSA created TSA and granted it direct operational responsibility for, among other things, passenger and checked baggage screening. On February 17, 2002, pursuant to ATSA, TSA assumed responsibility from FAA for security at the nation’s commercial airports, including FAA’s existing aviation security programs, plans, contracts, regulations, orders, directives, and personnel. On February 22, 2002, FAA and TSA jointly published a final rule transferring the civil aviation security regulations from FAA to TSA and amending those rules to comport with ATSA and enhance security as required by the act. According to TSA officials, DOT and TSA leadership administratively changed the name of the Federal Security Manager to Federal Security Director (FSD) to avoid confusion with the liaison role of the Federal Security Manager prior to September 11. The FSD role was more comprehensive and had responsibilities that included overseeing passenger and baggage screening. Airport operators retained responsibility for the security of the airport operating environment, that is, perimeter security, access control to secured areas, and other measures detailed in the approved airport security plan, while the FSD provided regulatory oversight over these efforts. FSDs report to one of five Area Directors, based on their geographic regions, on administrative matters. However, they report to TSA headquarters (the Aviation Security Program Office and Transportation Security Operations Center) on operational issues, such as reporting security incidents. FSDs are part of the Aviation Security Program Office within TSA’s Office of Intermodal Programs, as shown in figure 1.
The Aviation Security Program Office focuses on specific functions related to TSA’s Aviation Security Program, including staffing, training, and equipping the federal security work force. The Transportation Security Operations Center serves as a single point of contact for security-related operations, incidents, or crises in aviation and all surface modes of transportation. FSDs are to report any security incident at their airport immediately to the center, which is to provide guidance, if needed, as well as look for patterns among all incidents that occur throughout the country. The center provides FSDs daily intelligence briefings based on incident information from FSDs and information from TSA’s Transportation Security Intelligence Service. The Transportation Security Intelligence Service provides FSDs, Deputy FSDs, and Assistant FSDs with a classified Daily Intelligence Summary containing the most current threat information from the intelligence community, law enforcement agencies, and stakeholders. It also provides FSD staff with an unclassified TSA Field Intelligence Summary to be used in briefing screeners and screening management about current threats and other issues related to aviation security. TSA’s Area Directors are responsible for monitoring and annually assessing the performance of FSDs. FSD performance is to be assessed in terms of successful accomplishment of organizational goals as well as specific performance metrics associated with aviation security within the FSD’s area of responsibility. Area Directors are required to follow the Department of Homeland Security’s (DHS) performance management guidance for FSDs who are part of the Transportation Senior Executive Service (TSES) and TSA’s performance management guidance for FSDs who are not part of the TSES (non-TSES). According to TSA Human Resources officials, about one-third of the FSDs are part of the TSES, and they are generally assigned to larger airports. FSDs are responsible for overseeing security operations at the nation’s commercial airports—443 airports as of January 2005—which TSA classifies in one of five airport security categories (X, I, II, III, IV). These categories are based on various factors, such as the total number of takeoffs and landings annually, the extent to which passengers are screened at the airport, and other special security considerations. In general, category X airports have the greatest number of passenger boardings and category IV airports have the fewest. These airports can vary dramatically, not just in passenger and flight volume, but in other characteristics, including physical size and layout. Figure 2 identifies the number of commercial airports by airport security category, as of January 2005. TSA had 157 FSD positions at commercial airports nationwide, as of January 2005. Although an FSD is responsible for security at every commercial airport, not every airport has an FSD dedicated solely to that airport. Most category X airports have an FSD responsible for that airport alone. Other airports are arranged in a “hub and spoke” configuration, in which an FSD is located at or near a hub airport but also has responsibility for one or more spoke airports of the same or smaller size that are generally located in geographic proximity. At spoke airports, the top-ranking TSA official located at that airport might be a Deputy FSD, Screening Manager, or even Screening Supervisor, although the FSD has overall responsibility for the airport. Figure 3 identifies the number of FSDs responsible for specific numbers of airports.
For example, figure 3 shows that 44 FSDs are responsible for a single airport, 37 are responsible for two airports (one hub and one spoke), and 1 is responsible for nine airports (one hub and eight spokes). A Screening Manager is responsible for individuals at screening checkpoints and maintains communication with supervisors regarding any issues that might reveal a weakness or vulnerable area of security screening that is discovered during the course of screening duties. A Screening Supervisor is responsible for supervising personnel performing preboard security screening of persons and their carry-on and checked baggage. FSD staff are also responsible for ensuring compliance with approved security plans and directives pertaining to airport and aviation security. These responsibilities include a key function of the oversight of airport compliance with regulatory requirements and security measures contained in approved security plans and security directives. Assistant FSDs for Screening are responsible for passenger and baggage screening and managing all screener staff, and Assistant FSDs for Operations are responsible for managing nonscreening operations (e.g., exercise planning and execution, crisis management, and vulnerability assessments) and designated aspects of administrative support. An FSD responsible for a large airport may also have a Deputy FSD, and that position could be located at a hub airport where the FSD is located or at a spoke airport. Other FSD management staff positions vary by airport and airport size, but may include a Stakeholder Manager, Customer Support Manager, Training Coordinator, Human Resource Specialist, Financial Specialist, Scheduling Operations Officer, Screening Supervisors and Managers, administrative support personnel, and other positions. TSA developed guidance that describes the many roles and responsibilities of the FSD position, most of which are associated with securing commercial airports from terrorist threats. However, its guidance addressing FSD authority is outdated and does not clearly describe the FSDs’ authority relative to other airport stakeholders during a security incident. Furthermore, some of the stakeholders at airports we visited said that the FSDs’ authority relative to others was not always clear during a security incident, and that the FSDs’ authority in such cases had not been communicated to them. Most of the 25 FSDs we interviewed by telephone said that TSA needed to do more to clarify the roles and responsibilities of the FSD position for the benefit of FSDs and stakeholders, with the majority of these FSDs stating that their authority needed further clarification. The FSD is the ranking TSA authority responsible for the leadership and coordination of TSA security activities at the nation’s commercial airports. As such, the FSD is responsible for providing day-to-day operational direction for federal security at the airport or airports to which the FSD is assigned. ATSA established broad authorities of the FSD, while specific responsibilities of the FSD are laid out in TSA Delegation Orders, the FSD position description, and TSA’s 2004 Executive FSD Guide, and include the following: Overseeing security screening of passengers, baggage, and air cargo. FSDs are responsible for providing direct oversight of passenger and baggage screening by managing the local screening force, which is typically composed of federal employees.
To carry out this responsibility, FSDs engage in activities that include ensuring implementation of required screener-training programs, anticipating and preparing for training on new screening technologies and procedures, and developing local training initiatives to test and improve screener performance. In accordance with regulations, aircraft operators perform their own security screening of air cargo, and FSDs are responsible for overseeing operators’ performance in implementing required security measures. Providing regulatory oversight of all U.S. air transportation facilities and operations. FSDs are responsible for ensuring that airports, airlines (foreign and domestic), air cargo carriers, and indirect air carriers comply with TSA regulations and security directives governing such things as perimeter security, access controls, procedures for challenging questionable identification documents, aircraft searches, and general security procedures. This is accomplished through administering appropriate compliance and enforcement actions with the goal of discovering and correcting deficiencies and vulnerabilities in aviation security. FSDs oversee civil enforcement activities at their airports involving findings of noncompliance with security requirements by airlines, airports, and individuals, including passengers. To carry out their regulatory oversight responsibilities, FSDs and staff engage in activities that include conducting stakeholder meetings with all regulated parties to discuss regulatory changes or educate them on current aviation threats. Analyzing and addressing security threats. FSDs are responsible for conducting analyses of security threats and vulnerabilities in and around their airports. To carry out this responsibility, FSDs seek intelligence from sources external to TSA, build systems to analyze the information received from intelligence organizations and apply it to local airport security, and direct TSA regulatory agents to test security measures and procedures and identify potential security weaknesses. Building and managing relationships with airport stakeholders. FSDs are responsible for building and managing relationships with local stakeholders (e.g., airport management, airlines, and concessionaires) to ensure that security operations run smoothly. To carry out this responsibility, FSDs engage in activities that include collaborating with airlines to identify and resolve issues of efficient passenger flow and customer service while maintaining security standards. FSDs also coordinate with airport and airline management; federal, state, and local governments; law enforcement agencies; and relevant private sector entities to organize and implement a Federal Security Crisis Management Plan at each airport. The plan is essentially a protocol for what TSA employees and airport stakeholders should do in the event of an emergency, including a terrorist incident, within the airport. Other FSD responsibilities include communicating information received from TSA headquarters to appropriate stakeholders, maintaining quality customer service for airlines and passengers, providing leadership to the TSA employee population, managing and coordinating their direct staff, and overseeing management of TSA facilities and equipment resources. In addition, TSA has directed FSDs to conduct outreach and liaison with the general aviation community in their areas, although it has not given FSDs regulatory oversight responsibility over general aviation airports.
FSDs’ roles and responsibilities have been fairly well documented, but their authority relative to other airport stakeholders during security incidents has not been clearly defined. Section 103 of ATSA addressed FSD authority at the broadest level by giving FSDs responsibility for overseeing the screening of passengers and property and for carrying out any other duties prescribed by the TSA Administrator. TSA’s Executive FSD Guide, discussed earlier, describes FSD responsibilities, but it does not address the FSDs’ authority in security incidents. That authority is addressed more specifically in TSA’s June 2002 Delegation of Authority to Federal Security Directors (Delegation Order), which gives FSDs the authority to provide for overall security of aviation, including the security of aircraft and airports and related facilities to which they are assigned. The Delegation Order is outdated in that it gives FSDs the authority to train, supervise, equip, and deploy a TSA law enforcement force that was never established. Officials from TSA’s Aviation Security Program Office acknowledged that the document is outdated and has not been updated since FSDs were first assigned to airports. According to officials from TSA’s Office of Law Enforcement, TSA originally envisioned that all FSDs would be federal law enforcement officers (e.g., GS-1811 criminal investigators) and would command a TSA police force. However, the force was never established, and FSDs were not given federal law enforcement status. TSA has assigned an Assistant FSD for Law Enforcement to about half the FSDs in the country, but this is the only law enforcement position on their staff. Instead, airport police or state or local law enforcement agencies primarily carry out the law enforcement function at airports. Furthermore, the Delegation Order does not clearly address the extent of FSD authority relative to other parties with responsibilities related to airport security, including law enforcement agencies. For example, the Delegation Order gives the FSD authority to clear, close, or otherwise secure facilities under certain circumstances, and after taking such action, requires the FSD to provide feedback to the airport operator on the reasons the security action was taken. The document also provides that, under certain circumstances, the FSD has the authority to cancel, delay, return, or divert flights and search and detain persons or property. However, it does not clearly address what authority, if any, FSDs have over other parties, such as airport law enforcement personnel, on whom they would need to rely to take these actions. In August 2005, TSA officials told us that they had drafted a revised Delegation Order that clarified the authority of FSDs and that it was being reviewed internally. They stated that the revised document restates some of the FSDs’ previous authorities and provides some new ones, such as entering into interagency agreements. Stakeholders at some of the airports we visited told us that the FSDs’ role, particularly regarding their authority relative to other parties, was not sufficiently clear, and at least one stakeholder at every airport we visited said such information had never been communicated to them. At three of the seven airports, stakeholders said that aspects of the FSD’s authority during a security incident lacked clarity.
For example, at two airports, confusion or conflicting opinions developed over whether the FSD had the authority to take certain actions during particular security incidents. Furthermore, six stakeholders at two of the airports we visited were also unclear about the FSD’s authority regarding control over airport law enforcement personnel and canine teams, access to secured information, and specific operational changes. Additionally, at least one stakeholder at each of the seven airports we visited said he or she had never been briefed or given information on the role of the FSD. Among these stakeholders was an airport manager who said he had specifically sought out documents detailing the FSD’s roles and authority, including how the FSD would fit into the airport’s incident command system. At another airport, airport management officials said they had to take the initiative, in conjunction with the FSD and law enforcement stakeholders, to develop a matrix identifying first responders and the lead agency for various types of incidents after a potential hijacking situation highlighted the need to document and share such guidance. Several stakeholders at the national level also raised questions regarding the clarity of the FSD’s authority relative to that of other parties, including FSDs’ authority in particular security incidents. Specifically, FBI headquarters officials and representatives of two industry associations representing airports and airport law enforcement officials voiced concern about the clarity of FSDs’ authority, noting that some of the first FSDs had initially attempted to assert control over airport stakeholders, such as airport police departments. FBI headquarters officials were concerned, on the basis of past airport exercises, that relationships between FSDs and the FBI had not been explicitly delineated. Officials stated that if a conflict with local FBI authorities occurred during an actual security incident, it might create confusion and result in a longer response time. As of October 2004, FBI headquarters officials informed us that the FBI was attempting to enter into a memorandum of understanding with TSA to clarify certain aspects of each agency’s authority. However, TSA officials said that, as of August 2005, TSA and the FBI had not entered into a memorandum of understanding, and the officials were not able to provide us any additional information on this issue. Our telephone interviews with selected FSDs also indicated a need for a clearer statement of their authority. Most (18) of the 25 FSDs we interviewed said, to varying degrees, that TSA needed to do more to clarify the role and responsibilities of the FSD position—not just for the benefit of FSDs and their staff, but for the benefit of airport stakeholders as well. (These and other responses to selected questions we posed during our interviews with 25 FSDs are contained in app. II.) More specifically, when we asked those 18 FSDs what needed further clarification, 11 said that their authority needed to be further defined. Among these 11 were 6 FSDs who believed TSA should develop a document that delineates the authority of the position or update the Delegation Order. For example, FSDs told us that other agencies do not understand the authority of the FSD or TSA and have asked for a document to be made widely available to federal agencies, state and local law enforcement, emergency responders, and other airport stakeholders.
Four FSDs explained that clarification of the FSDs’ authority is needed with respect to critical incident response. TSA does not charge FSDs with responsibility for developing TSA aviation security policy. However, TSA does expect FSDs to provide input on draft policy from TSA headquarters when called upon and to recommend policies and procedures for addressing emerging or unforeseen security risks and policy gaps. According to TSA officials, the agency provides several opportunities for some FSDs to be involved in developing some TSA aviation security policies through the FSD Advisory Council, ad hoc consultation groups, and the piloting of new security procedures and technology. The FSD Advisory Council provides a mechanism for selected FSDs to be involved in TSA’s efforts to develop aviation security policy, according to TSA officials. The FSD Advisory Council was originally established as a way for the Aviation Security Program Office to conduct outreach among the FSDs. However, in May 2004, the TSA Administrator recast the council as an advisory board reporting directly to him and, for the most part, responding to his agenda items. The council consists of 22 FSDs whom the Administrator selects based on factors such as geographic location, airport security category, and strong FSD leadership, according to a TSA official responsible for council coordination. Most FSDs do not serve on the council for more than 1 year, but their term is ultimately left to the Administrator’s discretion. Council meetings occur over a 3-day period in Washington, D.C., generally on a monthly basis. According to TSA officials, during council meetings, the FSDs provide the Administrator with their opinions and guidance on establishing and modifying TSA policies and procedures and have opportunities for input in other areas. Four of the five FSDs at airports we visited, including two who were council members, saw the council as an effective way for the Administrator to gather input on new TSA policy initiatives and issues confronting FSDs. The fifth FSD commented that most of the issues discussed by the council appeared to be more relevant to airports larger than his. On occasion, some FSDs have the opportunity to provide input on draft TSA aviation security policy through ad hoc consultation groups organized by the Aviation Security Program Office, according to TSA officials. For instance, when TSA establishes a new standard operating procedure, it typically consults a selected group drawn from perhaps 9 or 10 airports. These groups are ad hoc and may include different combinations of FSDs, FSD staff, and airport stakeholders. For example, TSA formed a group of FSDs, screeners, and airport and air carrier staff from multiple airports to address anticipated increases in passenger traffic during the 2004 summer travel season. According to a TSA official, TSA typically consults such groups on most significant policy developments. However, the more urgent or sensitive a new policy, the less likely TSA will have time to obtain input outside of headquarters. The official stated that TSA does not involve every FSD in every policy it develops but added that he could not think of any policy in the last 6 months that had not involved at least some FSDs in its development. Participating in pilots of new technology and procedures at their airports is another way FSDs can be involved in developing TSA aviation security policy. TSA has a variety of ongoing pilot programs that it generally characterizes as either technology- or procedure-based.
For example, TSA has tested and evaluated at multiple airports a technology pilot—the Explosive Trace Detection Portal Program—that is designed to analyze the air around a passenger for traces of explosive material. TSA’s procedure-based pilots include the Registered Traveler Program, which identifies participating travelers through biometric identifiers, such as fingerprints, and helps to expedite these passengers through required security screening for weapons and explosives. In addition, TSA has piloted other program initiatives, such as its Next Generation Hiring Program, which TSA reported provides a more localized approach to screener hiring that enables FSDs to influence the hiring process for their airports. TSA first piloted this initiative at Boston Logan International Airport and gradually expanded testing to other airports, continuing to make changes before implementing the program nationwide. Not all FSDs or their airports have been involved in piloting new technologies and procedures. According to TSA headquarters officials, TSA decided to limit the airports at which it conducts these types of pilots to a selected group of “model” airports, although it does conduct other kinds of pilots at other airports. Accordingly, in December 2004, in an effort to streamline the airport selection process for technology pilots, TSA identified 15 airports and recommended they be used for such pilots on an ongoing basis. According to these officials, the selected airports provide diversity in geography, demographics, and baggage and materials to be screened. Ten of the 25 FSDs we interviewed said TSA had offered their airports opportunities to pilot a new program or technology (collectively, more than 20 such opportunities), and all of them subsequently participated. Although TSA officials told us that opportunities exist for some FSDs to be involved in developing TSA aviation security policy, most of the FSDs (21 of 25) whom we interviewed characterized themselves as not involved in developing such policy. Three of the five FSDs at airports we visited suggested that TSA should consult FSDs on security policies before issuing them, although some noted time may not permit this for urgent security measures. Two of these FSDs said it would be helpful if TSA allowed FSDs a comment period for new policy, and another said that because TSA does not involve FSDs in developing policy, its weekly national conference calls with FSDs are filled with questions and discussions about new security directives. FSDs reported that they had entered into partnerships with airport stakeholders at the seven airports we visited, and FSDs and stakeholders stated that these partnerships were generally working well. Furthermore, FSDs initiated communication and coordination efforts with stakeholders or were involved in efforts already established—such as meetings and briefings—to address a range of issues, including airport security, operations, and coordination. As discussed earlier, TSA has given FSDs responsibility for building and managing relationships with airport stakeholders and has generally left it to the FSDs to determine how to develop effective stakeholder relationships. According to TSA’s Executive FSD Guide, building and maintaining stakeholder partnerships is a major responsibility of FSDs, and these partnerships can create capabilities at airports where the whole is greater than the sum of the parts.
TSA further reinforces the importance of FSDs’ building and managing partnerships by including this activity as a standard rating element on their annual performance assessments. TSA addressed the importance of partnerships in connection with planning for increased passenger traffic during the summer months of 2004 in its best practice guide—the Aviation Partnership Support Plan. This document recognized the need for FSDs and airport stakeholders to work together toward achieving security and customer service. For example, the plan addressed the importance of TSA and air carrier station managers working together to identify a process for communicating, handling, and destroying sensitive passenger load data, and it encouraged FSDs to develop formal working groups to bring together local stakeholders. According to parties at the airports we visited and TSA guidance, developing partnerships with airport stakeholders is essential for FSDs to effectively do their job. First, according to FSDs, FSD staff, and law enforcement stakeholders at the airports we visited, FSDs lack law enforcement personnel to respond to a security incident and, therefore, must rely on federal, state, and local law enforcement agencies in these instances. TSA also recognizes that, for example, FSDs would have to work with the FBI and other law enforcement agencies to respond to a security incident on an aircraft where the door has been closed for embarkation, because FSDs do not have the resources needed to respond to such an incident. Second, developing partnerships can provide benefits to FSDs and airport stakeholders. For example, FSDs need air carrier data on the number of passengers transiting airport checkpoints to appropriately schedule screeners. At the same time, air carriers seek an efficient screening process to minimize wait times for their customers. Various parties we interviewed, including airport stakeholders; officials from the Department of Homeland Security’s Border and Transportation Security (BTS) Directorate and the FBI; and an industry representative, recognized the importance of partnerships in helping the airport operate smoothly. For example, one industry representative said that airport management needs security and threat information from the FSD, and the FSD needs to understand nonsecurity issues that affect the FSD’s job, such as an upcoming local event that may increase passenger traffic. FSDs and most of the stakeholders at the seven airports we visited said that they had developed partnerships, and they described these partnerships as generally working well. The FSDs responsible for these airports reported having positive partnerships with airport stakeholders. More specifically, one FSD said that having common goals with stakeholders, such as ensuring security, enhanced their partnerships. Another FSD saw himself as a catalyst for partnerships at his airport and as a facilitator among stakeholders who did not always get along. At most of these airports, stakeholders also reported that FSD-stakeholder partnerships were working well and identified examples of successful practices. Some spoke of the value of an FSD being accessible to stakeholders to help resolve problems by, for example, being visible at the airport and maintaining an open-door policy. Seven stakeholders stated that the FSDs at their airports discussed TSA security directives with them and worked with them when it was not clear how to interpret or implement the directives. At one airport, the FSD, airport management, and air carriers worked together to look for opportunities to enhance security and customer service.
To this end, they formed a working group and developed a proposal for TSA that addressed issues involving technology, infrastructure, transportation assets, and local budgetary control for the FSD. Finally, at another airport, in an effort to manage stakeholders’ concerns about wait times and customer service, the FSD arranged for staff to help screen all of the airport vendors and concessionaires, as required, but at an established time to ensure passengers were minimally affected. Partnerships at airports across the country were generally working well or better at the time of our review than when TSA first assigned FSDs to airports, according to several federal agency officials and industry representatives at the national level. Some airport stakeholders and industry representatives stated that some FSDs’ authoritative management style and lack of airport knowledge contributed to tensions in earlier FSD- stakeholder relationships. However, during the course of our review, TSA officials said they received very few complaints about FSDs from airport stakeholders, and industry representatives and officials from BTS (which oversees CBP and ICE), and the FBI said that partnerships were generally working well or had improved. For example, FBI officials had queried 27 of their Airport Liaison Agents in October 2004 about their relationships with FSDs, and 20 of the 22 agents who responded characterized these relationships as generally good. FBI officials told us that at one airport where coordination and partnerships stood out as being particularly strong, the FSD met with stakeholders every morning. TSA established 80 Assistant FSD for Law Enforcement positions across the country to help FSDs partner and act as liaison with law enforcement stakeholders and to conduct certain criminal investigations. This position is always filled by a federal law enforcement officer (a criminal investigator), and is the only law enforcement officer assigned to an FSD. Office of Law Enforcement officials stated that this position is essential for interacting with local law enforcement stakeholders, and they would like to see every FSD have at least one Assistant FSD for Law Enforcement and more than one at larger airports. Assistant FSDs for Law Enforcement report directly to their respective FSDs, and at smaller airports without this position, the FSD takes on responsibility for coordinating with law enforcement stakeholders. Given the number of positions authorized, not all FSDs have Assistant FSDs for Law Enforcement on their staff. Of the 25 FSDs we interviewed, 13 reported having this position on their staff, and 12 reported not having this position. Regardless of whether these FSDs had this position, almost all (23) said it was important to have the position on their staff to coordinate with the law enforcement and intelligence community and perform criminal investigations. An Assistant FSD for Law Enforcement explained during one airport visit that his familiarity with legal processes and procedures facilitated his working relationship with the FBI and U.S. Attorneys. FBI headquarters officials also reported that the Assistant FSD for Law Enforcement position has helped improve coordination between TSA and the FBI at airports. TSA did not provide an agency-level position on whether every FSD needs an Assistant FSD for Law Enforcement. 
Although most of our contacts reported that partnerships between FSDs and airport stakeholders were generally working well, about half (13) of the 25 FSDs we interviewed said that it is challenging to foster partnerships with the parties they are responsible for regulating. Several FSDs stated that while it may be hard to partner with those one regulates, having good communication and relationships with stakeholders and a mutual understanding of the responsibility of regulating airport security makes such partnering possible. According to officials from TSA’s Office of Compliance Programs, the office has articulated a policy of compliance through cooperation, which has helped FSDs foster partnerships with airport stakeholders while achieving TSA’s regulatory oversight mission. For example, TSA established a Voluntary Disclosure Program that allows stakeholders to forgo civil penalty actions by bringing violations to the attention of TSA and taking prompt corrective action. The philosophy behind this program is that aviation security is well served by providing incentives to regulated parties to identify and correct their own instances of noncompliance and to invest more resources in efforts to preclude their recurrence. According to Office of Compliance Programs officials, 75 percent of issues of noncompliance were closed by administrative action rather than civil enforcement during the past 2 fiscal years. Furthermore, in half the cases reported, FSDs were able to address the discovered security gaps and close the issue with a note to the inspection files, instead of writing a formal investigation report. At one airport we visited, not all stakeholders agreed that partnerships with the FSD were working well. Airport management, airport law enforcement, and air carriers at this airport said the FSD was not accessible, often did not attend meetings to which he had been invited, and sometimes did not send FSD staff to meetings in his place. These stakeholders also criticized the FSD for not distributing security directives and not meeting to discuss their implementation. However, local federal stakeholders at this airport (representing the FBI, CBP, and ICE) said that the FSD had established positive partnerships with them and had communicated well. According to TSA’s Executive FSD Guide, FSDs are responsible for conducting group or one-on-one meetings with airport managers and air carriers. FSDs and stakeholders at all seven of the airports we visited told us that they were involved with these and other communication and coordination efforts. FSDs and stakeholders described a variety of such mechanisms, including meetings and training exercises, noting that many of these were in place before FSDs were assigned to airports. A BTS official explained that at larger airports, FSDs inherited coordination mechanisms and relationships established between federal agencies and other stakeholders. In contrast, at smaller airports, FSDs had to educate stakeholders on involving and communicating more with federal officials. At two of the larger airports we visited, stakeholders said that the FSDs initiated communication and coordination efforts on their own, such as holding routine intelligence briefings and meetings with law enforcement agencies and representatives of U.S. Attorneys’ Offices.
Aside from the more formal communication and coordination mechanisms, FSDs and some of the stakeholders at all seven airports we visited said they frequently shared information and developed partnerships informally through telephone calls, e-mails, and face-to-face interactions. At all of the airports we visited, FSDs and stakeholders reported that meetings to discuss improvements to airport security and operations, as well as coordination meetings, were held, although the types of participants and the frequency of these meetings varied. FSDs and stakeholders reported that some of these meetings were held on a weekly, monthly, or quarterly basis, while others were held on an impromptu basis when FSDs or stakeholders had an issue to discuss. According to an FBI official, most of the Airport Liaison Agents the FBI had queried were having monthly meetings with their FSDs. Similarly, a BTS official said that all FSDs had monthly meetings with representatives from other BTS agencies (ICE and CBP) to improve coordination of law enforcement and security efforts among these agencies at airports. Although five of the seven airports we visited had standing formal meetings, two of the smaller airports did not. Rather, at these airports, the FSD and stakeholders reported interacting daily and holding meetings on an as-needed basis. In addition to meetings, incident debriefings and training exercises to ensure a coordinated response in the event of a security incident were conducted at most of the airports we visited. Stakeholders at three of the airports mentioned that debriefings occurred after an actual incident to address questions and discuss how the incident had been handled. For example, at one airport, a stakeholder explained that a debriefing helped alleviate concerns he had regarding his lack of involvement during a particular incident. According to TSA, response to an actual event is typically only as good as the training for it; hence, TSA requires FSDs to hold quarterly training exercises at their airports. Training exercises included tabletop simulation exercises, hijacking scenarios, and Man-Portable Air Defense Systems (MANPADS) vulnerability assessments to identify areas where a MANPADS attack could be launched. Sometimes protocols or security directives are written as a result of airport incidents and debriefings. At all seven airports we visited, protocols for responding to incidents existed, according to FSDs, their staff, or stakeholders, and at most of these airports, protocols were written into the Airport Security Plan. However, a TSA headquarters official explained that a protocol cannot exist for every possible incident, given that security incidents are often unique. TSA has made a number of changes intended to provide FSDs with more authority and flexibility in carrying out their responsibilities, and most FSDs we interviewed responded favorably to these changes. In addition, TSA was planning additional efforts during our review that could affect FSDs, and the majority of the 25 FSDs we interviewed said they were not involved in these efforts. To further support or empower the FSD position, TSA increased FSDs’ authority to address performance and conduct problems, established a local hiring initiative, increased flexibility to provide screener training, relocated Area Director positions to the field, and established a Report Group and a mentoring program. The majority of FSDs we interviewed had positive views of most of these changes. Local hiring initiative.
TSA developed a local screener hiring initiative that, among other things, vested more hiring authority in FSDs to address airport staffing needs. To meet a post-September 11 statutory deadline, TSA brought a workforce of 57,000 federal screeners on board within 6 months using a highly centralized approach to recruiting, assessing, hiring, and training. With this accomplished, TSA began piloting a reengineered local hiring initiative, called Next Generation Hiring, in June 2004. Its goal was to ensure the involvement of FSDs and their staff in the hiring process, streamline the process, and make the process more responsive to the full range of airport needs. The program was designed to give FSDs and their staff the flexibility to determine which aspects, or phases, of local hiring they wish to participate in, and how much contractor support they need. TSA incorporated modifications as a result of lessons learned from its pilot and initial implementation sites as it gradually rolled out this initiative to additional locations. By March 2005, TSA had established 12 fully operational local hiring centers around the country, with locations based on various factors, including geography and operational need. When we asked all 155 FSDs in our March 2004 survey if they wanted more authority in selecting screeners, 136 (88 percent) said they wanted more authority to do this to a great or very great extent, and another 9 percent said they wanted more authority in this area to a moderate extent. When we interviewed 25 FSDs during this review, approximately 1 year after TSA began rolling out the Next Generation Hiring program, 12 reported that they wanted more authority in selecting screeners to a great or very great extent, even given their participation options under Next Generation Hiring, and another 8 said they wanted more authority in this area to a moderate extent. Nevertheless, 18 of the 25 FSDs stated that Next Generation Hiring provided for their airports’ screener staffing needs better than TSA’s former hiring process to a very great, great, or moderate extent. In addition, 14 of the 25 FSDs stated that, overall, they were satisfied with the new program’s ability to meet their screener staffing needs, but 7 said they were not satisfied. Comments from those dissatisfied FSDs included statements that the contractor had not done a good job in the recruiting aspect of the process and that the new hiring process still takes too long—a comment echoed by some FSDs we interviewed during our airport visits earlier in the program’s rollout. TSA officials stated that the goal of Next Generation Hiring was not necessarily to reduce the time it takes to bring a new screener on board at every airport. Rather, the goal was to be more responsive to all local hiring needs—not just the needs of the largest airports. According to a program official, early data on Next Generation Hiring have been positive, though limited. For example, data from a nonscientific sample of several airports showed that under Next Generation Hiring, fewer screeners resigned within their first month than before the program was in place (about 18 percent resigned in the first month before Next Generation Hiring; about 7.5 percent resigned in the first month after the program was initiated at those airports).
Officials also concluded, on the basis of their limited data and anecdotal information, that candidates selected at airports where the FSD and staff were conducting the hiring process were more selective in accepting offers because they had more knowledge of what the job would entail than contractors did when they handled the hiring process. Increased flexibility to provide screener training. TSA expanded FSDs' flexibility to offer training locally to screeners in two respects in April 2004. First, TSA developed and implemented a new basic screener training program to cover the technical aspects of both passenger and checked baggage screening, and allowed FSDs to choose whether new screeners would receive instruction in one or both of these screening functions during initial training. According to TSA officials, this approach provides the optimum training solution based on the specific needs of each airport and reflects the fact that, at some airports, the FSD does not need all screeners to be fully trained in both passenger and checked baggage screening. Second, TSA offered FSDs the flexibility to deliver basic screener training using either contractors or local TSA employees as instructors, provided the instructors had relevant experience and were approved by TSA. Before TSA provided FSDs with this additional training flexibility, 110 of the 155 FSDs (71 percent) who responded to our March 2004 survey said that they wanted more flexibility to design and conduct local training to a great or very great extent. A year later, when we asked 25 FSDs during this review about their satisfaction with the flexibility they had in offering training locally to screeners, 21 said they were satisfied. Several noted this was an area where they had seen improvement in the flexibility TSA had given them. Increased authority to address performance and conduct problems. TSA expanded FSDs' authority to address employee performance and conduct problems over time, beginning in 2003 when FSDs were delegated authority to suspend employees for up to 3 days. In July 2004, FSDs were delegated the authority to take the full range of disciplinary actions, including removal, in accordance with TSA policy. In September 2004, TSA again increased the authority of FSDs by allowing them to use a streamlined, one-step process in taking certain disciplinary actions, such as the termination of employment for screeners involved in theft or the use of drugs or alcohol. During our telephone interviews with FSDs, conducted more than 6 months after the last of these increases in FSD authority, 24 of the 25 FSDs said they were satisfied with their current authority to address employee performance and conduct problems. Moreover, 2 of the 5 FSDs we interviewed during our airport visits said that their increased authority in this area was an important change that exemplified TSA's efforts to further empower FSDs. Relocation of Area Director positions. In September 2004, as part of an overall reorganization effort, TSA physically relocated its five Area Director positions from the Aviation Security Program Office in headquarters to the field. According to TSA headquarters officials, the goal was to move more TSA authority and decision making from headquarters to the field and to create efficiencies in TSA's processes and procedures. In making this change, TSA named five existing FSDs—one in each of TSA's five geographic areas—to assume the responsibility of being Area Directors in addition to continuing to serve as FSDs of major airports.
FSDs in each of the new Area Directors' geographic regions report to their respective Area Director on administrative matters. However, they report to TSA headquarters (the Aviation Security Program Office and Transportation Security Operations Center) on operational issues, such as reporting security incidents. To support these "dual-hatted" FSDs with their additional Area Director responsibilities, TSA authorized each to hire five additional staff. The 25 FSDs we interviewed were divided on whether they thought having Area Directors in the field was helpful—12 said it was helpful and 12 said it was not helpful—and some offered comments. On one hand, several FSDs said that field-based Area Directors who were also FSDs had a much better understanding of what FSDs encounter each day. On the other hand, several said that FSDs were better served by Area Directors located at headquarters because they were more aware of everything that was taking place and had more staff to support them. Views on this topic were also mixed among the five FSDs we interviewed during our airport visits. Two Area Directors were among the 25 FSDs we interviewed, and both thought the change to field-based Area Directors was helpful but thought that the position should be further empowered. One explained that the Area Directors should be involved in operational issues in addition to administrative matters, although he would need additional staff if he also had this responsibility. The other Area Director said that, as one of only five Area Directors, he is responsible for too many airports. Report Group. In conjunction with moving the Area Director positions out of headquarters, TSA established this group in September 2004 to conduct some of the duties previously performed by Area Directors when at headquarters. It was also intended to provide operational support and a communication link between TSA headquarters and field-based Area Directors, and in turn, FSDs and their staff. The group manages and standardizes communications (including sending daily recaps of each day's business), continually updates point-of-contact lists that identify whom FSDs and their staff should contact when a problem arises, and serves as a troubleshooter for unresolved issues. For example, FSDs and their staff may call the Report Group for assistance if they have already contacted the appropriate headquarters contacts and their issue or question was not resolved. Of the 25 FSDs we interviewed, 16 considered the Report Group to be a valuable resource, and 7 said they did not consider it valuable. Although TSA established the group just prior to our airport visits, FSDs we interviewed at that time saw the potential value of the group and noted that its daily recaps were already helpful in consolidating and sharing consistent information, as were the point-of-contact lists. Mentoring Program. TSA began offering an optional mentoring experience to newly appointed FSDs and Deputy FSDs in April 2004 to support their transition into their new positions. Under this program, mentor coordinators match new FSDs and Deputy FSDs (mentoring colleagues) with more experienced counterparts (mentors) at other airports somewhat comparable in size and complexity. As TSA names new FSDs and Deputies, the coordinators offer them a choice of prescreened volunteer mentors, give participants suggested steps for proceeding with the mentoring relationship, and provide a list of frequently asked questions and answers about the program.
Only 2 of the 5 FSDs we visited and 4 of the 25 FSDs we interviewed had participated in the Mentoring Program—either by being a mentor or by being mentored—and all but one saw it as having value. One FSD, who had been mentored, explained that having a mentor helped him learn a very challenging job and provided the opportunity to bounce ideas off of an experienced FSD. About half (13) of the 25 FSDs said that they were not familiar with TSA's mentoring program. At the time we interviewed FSDs, TSA was planning the following three additional initiatives that could affect at least some FSDs. The majority of the 25 FSDs we interviewed said they were not involved in these efforts. TSA's Screening Allocation Model. TSA initially deployed its federal screener workforce in 2002 based on estimates of screeners performing screening functions under privatized agencies rather than on a staffing model, and it has since been developing a model for determining screener staffing levels. In September 2003, in an effort to right-size and stabilize its screener workforce, TSA hired a consultant to conduct a study of screener staffing levels at the nation's commercial airports. Among other things, the consultant was tasked with (1) developing a comprehensive modeling approach with appropriate details to account for the considerable variability that occurs among airports, (2) creating a staffing analysis model to be used as a management tool to determine daily and weekly staffing levels and deploying the model to commercial airports nationwide, and (3) developing user-friendly simulation software to determine optimum screener staffing levels for each commercial airport with federal screeners. In March 2004, while awaiting the completion of this model, TSA established specific airport staffing limits to meet a congressionally mandated cap for screeners set at the level of 45,000 full-time-equivalent positions. In the summer of 2004, the model was selected, developed, and deployed for airport data input. That fall, TSA officials told us they expected that the model, which was being validated with airports at the time, would demonstrate TSA's need for screeners beyond the mandated cap. FSDs we interviewed during our airport visits shared this view and the expectation that many airports would see increases in their screener allocations. In July 2005, TSA finalized and submitted to Congress its standards for determining aviation security staffing for all airports at which screening is required. The Screening Allocation Model does not give FSDs the authority to determine the number of screeners authorized for their airports, nor was it intended to do so. When asked if they would like to have greater authority in determining screener staffing levels for the airports they oversee, 23 of the 25 FSDs we interviewed answered that, to a great or very great extent, they would like greater authority. One FSD commented, for example, that there will always be a need for FSDs to have a way to adjust screener numbers and that the screener staffing system needs to have sufficient flexibility to address sudden changes in screening demands. This view was fairly consistent with what FSDs had said a year earlier in our March 2004 survey, when we posed the same question to all FSDs. At that time, 145 of 154 FSDs (94 percent) answered in the same way when asked if they wanted more authority in determining the number of screeners for their airports.
Although TSA officials said that they had obtained a variety of data from FSDs during the course of the development of its Screening Allocation Model, not all of the FSDs we contacted saw themselves as having been involved in the model's development. Of the 25 FSDs we interviewed, 14 said that TSA had not involved them or provided them with the opportunity to have input into the development of the model. Of the 14 FSDs who said they were not involved, 11 were dissatisfied regarding their lack of involvement. Furthermore, among the 11 FSDs who said they were involved in developing the model, 5 were dissatisfied regarding their level of involvement. According to TSA officials, FSDs provided information for the model regarding their respective airports, and headquarters validated the numbers the model generated for each airport. Reassessments of airport hub and spoke configurations and FSD management staff. TSA began two related reviews in June 2004: (1) a reassessment of the hub and spoke configurations of commercial airports and (2) a reassessment of the number of management and administrative positions allocated to each FSD. The hub and spoke reassessment could result in changes to the number or the specific airports for which some FSDs are responsible. According to TSA headquarters officials, TSA undertook this reassessment because some FSDs had airports in more than one state, and complexities arose when working with multiple state laws and regulations, as well as U.S. Attorneys and police departments from multiple state jurisdictions. Officials anticipated that after TSA completes its review, a few situations will continue in which FSDs have responsibility for airports in more than one state, but only when the distance between certain airports necessitates it. Related to its review of hub and spoke configurations, TSA undertook a reassessment of FSD management staff levels, recognizing that some airports—typically smaller ones—were overstaffed, while others—typically larger airports—were understaffed. According to TSA officials, TSA initially distributed FSD staff based on the security classification of the airport and, to a lesser extent, the size or annual number of aircraft boardings. This approach, coupled with resource constraints that resulted in fewer positions being authorized than were needed, produced an imbalance in FSD staff among airports. Authorizations for the FSD staff positions ranged from 1 position at category III and IV airports with a minimum threshold of boardings, to 16 positions at category X and large category I airports. TSA made decisions regarding some of these positions (e.g., whether a particular FSD should be assigned a Deputy FSD or an Assistant FSD for Law Enforcement), while FSDs were left to make decisions about other positions (e.g., whether to include a Training Coordinator or a Human Resources Specialist as one of the FSD's management staff). Although TSA made adjustments to some FSDs' staff levels over time, officials recognized that an across-the-board reassessment was needed. The majority of the 25 FSDs we interviewed said that they were not involved in either of these two reassessment efforts, and most who were not involved were dissatisfied with their lack of involvement. Fourteen of the 25 FSDs said they had not been involved in TSA's reassessment of airport hub and spoke configurations, and 19 of the 25 FSDs said they had not been involved in the reassessment of FSD management staff levels.
TSA headquarters officials said that they acknowledged the importance of FSDs' involvement in agency planning efforts and that, when practical and appropriate, TSA has attempted to obtain a broad spectrum of FSD input. They said that in conducting these two particular reassessments, they formed a team that included three FSDs and three Deputy FSDs. For FSDs to carry out their responsibilities effectively, FSDs, their staff, and airport stakeholders need a clear statement of the FSDs' authority, relative to other stakeholders, in the event of security incidents. TSA's primary document outlining FSDs' authority is outdated, and neither it, nor other statements TSA has issued, delineates the authority of the FSD in various security situations relative to other parties. The absence of a clear understanding of the authority of the position has reportedly resulted in confusion during past security incidents and has raised concerns among some stakeholders at both the national and airport levels about possible ambiguity regarding FSDs' authority during future incidents. Updating TSA's Delegation of Authority to FSDs to clarify their authority relative to others and developing other documents, as warranted, would benefit FSDs by further enabling them to communicate and share consistent information about their authority with their staff and airport stakeholders, including law enforcement agencies. Stakeholders need to be clear on which agency has authority or lead responsibility in the event of various types of security incidents to reduce the likelihood of confusion or a delayed response. To clarify the authority of the Federal Security Director during various security incidents and help ensure a consistent understanding of the authority of FSDs among FSDs, their staff, and airport stakeholders, we recommend that the Secretary of Homeland Security direct the Assistant Secretary of Homeland Security for the Transportation Security Administration to take the following two actions: update TSA's Delegation of Authority to FSDs to clearly reflect the authority of FSDs relative to other airport stakeholders during security incidents, and communicate the authority of the FSD position, as warranted, to FSDs and all airport stakeholders. We provided a draft of this report to DHS for its review and comment. On September 15, 2005, we received written comments on the draft report, which are reproduced in full in appendix III. DHS, in its written comments, generally concurred with our findings and recommendations, and agreed that efforts to implement these recommendations are critical to enable FSDs to effectively oversee security at the nation's commercial airports. Regarding our recommendation that TSA update its Delegation of Authority to FSDs and communicate this information to FSDs and relevant stakeholders, DHS stated that a new restatement of the Delegation Order has been drafted by a working group composed of FSDs from the FSD Advisory Council and the Office of Chief Counsel. The Delegation Order has a new concise format that restates some of the FSDs' previous authorities and proposes some new authorities, such as entering into interagency agreements and administering oaths, consistent with the evolving operational requirements in the field. DHS further stated that the Delegation Order is being internally coordinated for comment and clearance and will be presented for the consideration of senior leadership and the Administrator.
At that time, FSDs and airport stakeholders will be notified of their responsibilities under the new Delegation Order. TSA also provided additional technical comments on our draft report, which we have incorporated where appropriate. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the date of this report. At that time, we will send copies to appropriate congressional committees and subcommittees, the Secretary of Homeland Security, the Assistant Secretary of Homeland Security for TSA, and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on GAO's Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3404 or at berrickc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. To examine the role of the Federal Security Director (FSD), we addressed the following questions: (1) What are the roles and the responsibilities of FSDs and how clear is their authority relative to that of other airport stakeholders during security incidents? (2) To what extent are FSDs involved in the development of federal aviation security policy? (3) How have FSDs at selected airports formed and facilitated partnerships with airport stakeholders, and how are these partnerships working? (4) What key changes has the Transportation Security Administration (TSA) made or planned to make to better support or empower the FSD position, and how have selected FSDs viewed these efforts? To address aspects of each of these objectives, we interviewed TSA's Chief Operating Officer and other TSA officials from headquarters offices, including the Aviation Security Program Office, Office of Law Enforcement, Office of Compliance Programs, and Office of Human Resources. We reviewed the Aviation and Transportation Security Act, and other relevant laws, as well as TSA documents related to the FSD position, including delegations of authority, position descriptions, the Executive FSD Guide, performance management guidance, and the FSD Advisory Council Charter. We also reviewed TSA documents related to its recent operational changes, such as the Next Generation Hiring Guide, Communication Liaison Group Mission Statement, and the TSA Management Directive on Addressing Performance and Conduct Problems. We met with Department of Homeland Security (DHS) headquarters officials from the Border and Transportation Security Directorate, which oversees TSA, and Counter-Terrorism Division and Criminal Investigations Division officials within the Federal Bureau of Investigation (FBI) headquarters. To address all but the fourth objective, we also met with representatives of four national associations—the American Association of Airport Executives, Airports Council International, Air Transport Association, and Airport Law Enforcement Agencies Network. In addition, to address all of this report's objectives, we conducted field visits to seven airports. We selected these airports because they were close to our staff and incorporated all five airport security categories—three airports with an FSD dedicated to a single airport and two sets of airports where the FSD was responsible for at least two airports.
Specifically, we visited three category X airports (Los Angeles International Airport, California; Washington Dulles International Airport, Virginia; and Ronald Reagan Washington National Airport, Virginia); Bob Hope Airport, California (category I); Long Beach-Daugherty Field Airport, California (category II); Charlottesville-Albemarle Airport, Virginia (category III); and Shenandoah Valley Airport, Virginia (category IV). At each airport we visited, we met with local TSA officials and key airport stakeholders to discuss the role of the FSD and FSD-stakeholder partnerships and communication mechanisms. We met with the FSD (at the three airports with dedicated FSDs and the two hub airports) or the top-ranking TSA official (at the two spoke airports), as well as the Assistant FSDs for Law Enforcement and Regulatory Inspection, where these positions existed. During our meetings with FSDs, we also obtained their views on changes TSA had made or planned to make to enhance the FSD position. We also met with key airport stakeholders, including airport managers, airport law enforcement officials, station managers representing selected air carriers (15 representatives of 12 air carriers and, additionally, two air carrier representative groups specific to two airports we visited), and FBI Airport Liaison Agents and officials from DHS’s Customs and Border Protection as well as Immigration and Customs Enforcement (at the two international airports we visited). At each airport, we conducted a single joint interview with representatives from multiple air carriers, and we selected air carriers through different means. At airports with an air carrier council, we asked the council head to identify approximately three carriers. Although we left the final decision to the council head, we suggested that he or she include the largest or one of the largest carriers (according to the percentage of the airport’s passenger travel) at the airport, an independent air carrier, and an international carrier, if it was an international airport. At airports without an air carrier council, the Air Transport Association or the airport operator recommended the air carriers. At the smallest airports, we met with all air carriers because of the small numbers. Because we selected a nonprobability sample of airports to visit, the information we obtained during these visits cannot be generalized to all airports or FSDs across the nation. To corroborate what we learned from the five FSDs during our field visits, we telephoned 25 additional FSDs to obtain their views on a range of topics including recent TSA initiatives and federal aviation security policy. We also included selected questions—regarding their need for greater authority and flexibility—that we had posed in our March 2004 Web-based survey of all 155 FSDs, conducted to support other GAO aviation security reviews. This allowed us to make a rough comparison between the 2004 responses and 2005 responses to these questions. We selected a random sample of FSDs in place since September 1, 2004, to ensure they had an experience base from which to answer our questions. We excluded from the list the five FSDs we interviewed during our airport visits and individuals who were no longer FSDs. TSA reviewed our selection procedures but did not know the identities of the specific 25 FSDs we interviewed. The 25 FSDs were from a cross section of all five airport security categories. 
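To illustrate the selection procedure just described, the following minimal sketch in Python draws a simple random sample of 25 from a roster after applying exclusions. All identifiers below are hypothetical stand-ins; the actual selection also excluded individuals who were no longer FSDs and screened for FSDs in place since September 1, 2004, which the sketch reduces to a comment.

```python
# A minimal sketch of the exclusion-then-sample selection described above.
# All identifiers are hypothetical; the real selection also dropped
# individuals no longer serving as FSDs and applied a tenure screen
# (in place since September 1, 2004), omitted here for brevity.
import random

all_fsds = [f"FSD-{i:03d}" for i in range(1, 156)]  # hypothetical roster of 155 FSDs
airport_visit_fsds = {"FSD-001", "FSD-002", "FSD-003", "FSD-004", "FSD-005"}  # hypothetical

eligible = [fsd for fsd in all_fsds if fsd not in airport_visit_fsds]
sample = random.sample(eligible, 25)  # simple random sample without replacement

print(sorted(sample))
```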
A GAO survey specialist who was involved in designing the Web-based survey, along with GAO staff knowledgeable about issues facing FSDs, developed the structured telephone interview instrument. We conducted pretest interviews with 3 FSDs to ensure that the questions were clear and concise, and subsequently conducted the 25 telephone interviews from late April to early May 2005. Although the telephone interviews were conducted with a random sample of FSDs, the sample is too small to generalize the interview results to all FSDs across the nation with reliable statistical precision. The practical difficulties of conducting interviews may introduce errors, commonly referred to as nonsampling errors. For example, difficulties in how a particular question is interpreted, in the sources of information that are available to respondents, or in how the data were analyzed can introduce unwanted variability into the results. We took steps in the development of the questions, the data collection, and the data analysis to minimize these nonsampling errors. For example, a survey specialist helped develop the interview questions in collaboration with GAO staff with subject matter expertise. Then, as mentioned earlier, the draft questions were pretested to ensure that the questions were relevant, clearly stated, and easy to comprehend. Interviews were conducted by GAO staff familiar with the subject matter and proper interviewing procedures. Finally, when the data were analyzed, a second, independent analyst checked to make sure that the results were correct. We conducted our work from August 2004 through September 2005 in accordance with generally accepted government auditing standards. In addition to the contact mentioned above, Glenn Davis, Assistant Director; David Alexander; Grace Coleman; Tracey Cross; Wayne Ekblad; David Hancock; Stuart Kaufman; Janice Latimer; Thomas Lombardi; and Lori Weiss made key contributions to this report.
The Transportation Security Administration (TSA) assigned Federal Security Directors (FSD) to oversee security, including the screening of passengers and their baggage, at the nation's more than 440 commercial airports. FSDs must work closely with stakeholders to ensure that airports are adequately protected and prepared in the event of a terrorist attack. This report addresses (1) the roles and responsibilities of FSDs and the clarity of their authority relative to that of other airport stakeholders during security incidents, (2) the extent to which FSDs formed and facilitated partnerships with airport stakeholders, and (3) FSDs' views of key changes TSA made to better support or empower the FSD position. TSA has issued guidance that clearly defines FSDs' roles and responsibilities. However, TSA's guidance related to FSDs' authority is outdated and lacks clarity regarding FSD authority relative to other airport stakeholders. TSA's document that delegates authority to FSDs gives them authority to supervise and deploy a TSA law enforcement force that was never established. Also, it does not clearly address FSD authority during a security incident relative to other parties with airport security responsibilities. At airports GAO visited, stakeholders said that this information had never been communicated to them and that they were not always clear on the FSDs' authority in such situations. For example, confusion arose at one airport over whether the FSD had the authority to take certain actions during a security incident. In August 2005, TSA officials stated that they were updating guidance on FSDs' authority but had not finalized their revisions prior to this report's issuance. All of the FSDs and most stakeholders at the airports GAO visited reported developing partnerships that were generally working well. Communication and coordination were taking place among stakeholders at these airports, including meetings, briefings, and training exercises. According to TSA, partnerships with airport stakeholders are essential to FSDs' success in addressing aviation security and customer service needs. For example, FSDs rely on law enforcement stakeholders during security incidents because they do not have their own law enforcement resources. FSDs also rely on air carriers for passenger volume information to schedule screeners, and air carriers rely on FSDs for efficient screening that minimizes passenger wait times. TSA made changes in 2004 to better support or empower the FSD position, and most of the 25 FSDs GAO interviewed generally viewed these changes favorably. For example, most of the FSDs GAO interviewed were satisfied with TSA's new local hiring process that provided more options for FSDs to be involved in hiring screeners, and most said that the new process was better than the more centralized hiring process it replaced. Most FSDs GAO interviewed also saw value in the headquarters group TSA established to provide operational support to the field and a communication link among headquarters, field-based Area Directors, and FSDs.
When EESA was enacted on October 3, 2008, the U.S. financial system was facing a severe crisis that rippled throughout the global economy, moving from the U.S. housing market to an array of financial assets and interbank lending. The crisis restricted access to credit and made the financing on which businesses and individuals depended increasingly difficult to obtain. Further tightening of credit exacerbated a global economic slowdown. During the crisis, Congress, the President, federal regulators, and others undertook a number of steps to facilitate financial intermediation by banks and the securities markets. In addition to Treasury's efforts, policy interventions were led by the Board of Governors of the Federal Reserve System (Federal Reserve) and the Federal Deposit Insurance Corporation. While the banking crisis in the United States no longer presents the same level of systemic concern as it did in 2008, the financial system continues to face vulnerabilities, including lagging investor confidence, financial concerns about European banks and countries, and generally weak economic growth globally. The passage of EESA resulted in a variety of programs supported with TARP funding. (See table 1.) Treasury estimates that, over their lifetimes, several of the programs will provide income to the government while others will incur a cost. Each program that remained active through September 30, 2012, will be addressed in this report. Many TARP programs have been winding down, and some have ended. Treasury has stated that when deciding to sell assets and exit TARP programs, it strives to protect taxpayer investment and maximize overall investment returns within competing constraints; promote the stability of financial markets and the economy by preventing disruptions to the financial system; bolster markets' confidence in order to encourage private capital investment; and dispose of investments as soon as practicable. While Treasury has identified these goals for the exit process for many programs, we and others have noted that these goals, at times, can conflict. For example, we previously reported that deciding to unwind some of its assistance to General Motors (GM) by participating in an initial public offering (IPO) presented Treasury with a conflict between maximizing taxpayer returns and exiting as soon as practicable. Holding its shares longer could have meant realizing greater gains for the taxpayer, but only if the stock appreciated in value. By participating in GM's November 2010 IPO, Treasury tried to fulfill both goals, selling almost half of its shares at an early opportunity. Treasury officials stated that although they strove to balance these competing goals, they had no strict formula for doing so. Rather, they ultimately relied on the best available information in deciding when to start exiting this program. Moreover, in some cases Treasury's ability to exercise control over the timing of its exit from TARP programs is limited. For example, Treasury has limited control over its exit from the Public-Private Investment Program (PPIP), because the program's exit depends on when each public-private investment fund (PPIF) decides to sell its investments. Treasury continues to face this tension in its goals with a number of TARP programs. Figure 1 provides an overview of key dates for TARP implementation and the unwinding of some programs.
In addition, appendix III provides information on Treasury's administration of the TARP programs, including an update on the staffing challenges we have previously reported and Treasury's reliance on the private sector to assist with TARP administration and operations. Most nonmortgage programs continue to wind down, but the status and potential ending date of each nonmortgage-related TARP program varies. Key information includes the estimated date, if known, that the program will end or stop acquiring new assets and no longer receive funding; Treasury's estimated date for exiting the program or selling the assets it acquired while the program was open; outstanding assets, as applicable, as of September 30, 2012; and the lifetime estimated costs (or income) for each program as calculated by Treasury. While repayments and income from CPP investments have exceeded the original outlays, the financial strength of participating institutions and the outcome of future securities auctions will help determine when the remaining institutions exit the program. As we have reported, Treasury disbursed $204.9 billion to 707 financial institutions nationwide from October 2008 through December 2009. As of September 30, 2012, Treasury had received $219.5 billion in repayments and income from its CPP investments, exceeding the amount originally disbursed by $14.6 billion (see fig. 2). The repayment and income amount included $193.2 billion in repayments of original CPP investments, as well as $11.8 billion in dividends, interest, and fees; $7.7 billion in warrant income; and $6.9 billion in net proceeds in excess of costs. After accounting for write-offs and realized losses on sales totaling $3.0 billion, CPP had $8.7 billion in outstanding investments as of September 30, 2012. Treasury estimates lifetime income of $14.9 billion for CPP as of September 30, 2012. Over half (417) of the 707 institutions that originally participated in CPP had exited the program as of September 30, 2012. Of the 417 institutions that have exited CPP, about 42 percent, or 175 institutions, exited by repaying their investments. Another 40 percent, or 165 institutions, exited CPP by exchanging their securities under other federal programs: 28 through TARP's Community Development Capital Initiative (CDCI) and 137 through the non-TARP Small Business Lending Fund (SBLF) (see fig. 3). Of the remaining 18 percent of CPP recipients that exited the program, 56 had their securities sold by Treasury, 18 went into bankruptcy or receivership, and 3 merged with another institution. As of September 30, 2012, much of the $8.7 billion in outstanding investments was concentrated in a relatively small number of institutions. The largest single outstanding investment was $967.9 million, and the top three outstanding investments totaled $2.3 billion—27 percent of the amount outstanding. The top 25 remaining CPP investments accounted for $5.4 billion, or 63 percent of the outstanding amount. In addition, while 290 of the original 707 institutions remained in CPP, their $8.7 billion in outstanding investments accounted for just 4 percent of what Treasury originally disbursed. However, the number of institutions that have missed payments has been rising. The cumulative number of financial institutions that had missed at least one scheduled dividend or interest payment by the end of the month in which the payments were due rose from 219 as of August 31, 2011, to 242 as of August 31, 2012.
These 242 institutions represent over one-third of the 707 institutions that participated in CPP and account for a cumulative total of 1,631 missed payments. As of August 31, 2012, 208 institutions had missed three or more payments and 142 had missed six or more. The total amount of missed dividend and interest payments was $376 million, although some of these payments were later made prior to the end of the reporting month. On a quarterly basis, the number of institutions missing dividend or interest payments due on their CPP investments increased steadily from 8 in February 2009 to 150 in August 2012, or about half of the institutions still in the program (see fig. 4). This increase occurred despite the reduced program participation, so the proportion of those missing scheduled payments has risen accordingly. The number of institutions missing payments has stabilized in recent quarters, but most of the institutions with missed payments had missed them repeatedly. In particular, 133 of the 150 institutions that missed payments in August 2012 had also missed payments in each of the previous three quarters. Moreover, these 150 institutions had missed an average of 7.3 additional previous payments, while 4 had never missed a previous payment. Institutions can elect whether to pay dividends and may choose not to pay for a variety of reasons, including decisions that they or their federal and state regulators make to conserve cash and maintain (or increase) capital levels. Institutions are required to pay dividends only if they declare dividends, although unpaid cumulative dividends generally accrue and the institution must pay them before making payments to other types of shareholders, such as holders of common stock. In May 2012, Treasury announced a strategy to wind down its remaining investments. The strategy includes three options that the department says will protect taxpayer interests, promote financial stability, and preserve the strength of the nation's community banks. These options include allowing banks to repurchase or restructure their investments or selling Treasury-held stock through public auctions. In considering these options, Treasury will need to balance the goal of protecting taxpayer-supported investments against that of expeditiously unwinding the program. Treasury officials said that they would continue to evaluate the CPP exit strategy, but added that they expected to continue using these options for the foreseeable future. The first option allows banks, with the approval of their regulators, to repurchase their preferred shares from Treasury in full. Treasury points out that this strategy has been used since 2009 and is one it expects some banks to continue to use through late 2013. Under this option, Treasury's ability to exit the program largely depends on the ability of institutions to repay their investments. Institutions will have to demonstrate that they are financially strong enough to repay the CPP investments in order to receive regulatory approval to exit the program. Dividend rates will increase from 5 percent to 9 percent for remaining institutions beginning in late 2013, a development that may prompt institutions to repay their investments. If broader interest rates are low, especially approaching the dividend reset, banks could have further incentive to redeem their preferred shares. A second option allows banks to restructure their investments, usually in connection with a merger or a plan to raise new capital.
With this option, Treasury receives cash or other securities that generally can be sold more easily than preferred stock. Treasury officials said that, as of early October 2012, approximately 28 restructurings had occurred. The officials expected a limited number of restructurings to continue, but added that because Treasury's investments were sometimes sold at a discount during restructuring, they would approve the sales only if the terms represented the best deal for taxpayers. Under the third option, Treasury may sell its preferred stock through public auctions. Treasury conducted the first such auction of CPP investments in March 2012 and reported that it generated strong investor interest. As of September 30, 2012, Treasury had conducted six auctions resulting in the sale of 40 investments with total net proceeds of about $1.3 billion. Treasury also reported that this option can be beneficial for community banks that do not have easy access to the capital markets, because it could attract new, private capital to replace the temporary TARP support. Treasury expects this option to continue to be part of its effort to wind down CPP. Thus far, Treasury has sold investments individually but has noted that it might combine other investments, particularly smaller ones, into pools. Whether Treasury sells stock individually or in pools, the outcome of this option will depend largely on investor demand for these securities. Treasury disbursed $570 million to its 84 CDCI participants and completed funding the program in September 2010 (see fig. 5). As we previously reported, CDCI is structured much like CPP, in that it has provided capital to financial institutions by purchasing equity and subordinated debt from them. No additional funds are available through the program, as CDCI's funding authority expired in September 2010. As of September 2012, Treasury expects that CDCI will cost approximately $200 million over its lifetime, less than half of the $570 million obligated to the program. Officials stated that CDCI will have a lifetime cost, while CPP is estimated to result in lifetime income, in part because CDCI provides a lower dividend rate that increases the net financing cost to Treasury. Also, unlike CPP, the program does not require warrants from participating institutions that would have helped offset Treasury's costs. As of September 30, 2012, two CDCI participants had repaid Treasury $2.85 million, and Treasury had received $22 million in dividend payments from CDCI participants. As with CPP, Treasury must continue to monitor the performance of CDCI participants because their financial strength will affect their ability to repay Treasury. According to Treasury officials, Treasury will continue to hold its CDCI investments and has not made any disposition decisions about the program. However, they said that when Treasury decides to exit the CDCI program, it will need tools in place similar to those used by CPP institutions to exit the CPP program. As of September 30, 2012, 5 of the 84 CDCI participants had missed at least one dividend or interest payment, and 2 of the participants had paid accrued and unpaid dividends after missing the initial scheduled payment date(s), according to Treasury. While the continuing weak economy could negatively affect distressed communities and the community development financial institutions (CDFIs) that serve them, the program's low dividend rates may help participants remain current on payments.
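For both CPP and CDCI, scheduled dividend increases are central to participants' repayment incentives. The following minimal sketch illustrates that arithmetic using the 5 and 9 percent CPP rates cited above; the $10 million principal is hypothetical, and CDCI's lower rates are not shown.

```python
# Illustrative carrying-cost arithmetic for the CPP dividend step-up.
# The principal is hypothetical; the 5 and 9 percent rates are the CPP
# rates cited above (the step-up begins in late 2013).
principal = 10_000_000  # hypothetical CPP investment amount

dividend_at_5_pct = principal * 0.05  # $500,000 per year before the reset
dividend_at_9_pct = principal * 0.09  # $900,000 per year after the reset

increase = dividend_at_9_pct - dividend_at_5_pct
print(f"Annual cost rises by ${increase:,.0f} ({increase / dividend_at_5_pct:.0%}).")
# Prints: Annual cost rises by $400,000 (80%).
```

An 80 percent jump in annual carrying cost, whatever the principal amount, helps explain why an approaching reset may prompt institutions to repay.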
When Treasury will exit CDCI is unknown, but the dividend rate that program participants pay increases in 2018. However, Treasury officials noted that the program was intended to be long term and said that they believed the program was meeting its objective by providing long-term, low-cost capital. CDCI institutions have an opportunity to keep CDCI capital in their communities, which are usually moderate and low income, for a longer time. Treasury officials indicated that, as with CPP investments, Treasury's current practice was to hold CDCI investments but that this strategy could change, and Treasury could opt to sell its CDCI shares. Treasury invested roughly $80 billion in the automotive industry and, as of September 30, 2012, had received more than $40 billion in proceeds. Nevertheless, Treasury still held substantial investments in GM and Ally Financial, which included 32 percent of GM's common stock, 74 percent of Ally Financial's common stock, and $5.9 billion of Ally Financial's mandatory convertible preferred stock (see fig. 6). Treasury officials told us that they continued to monitor GM's financial condition as well as overall market and economic conditions as they developed a divestment strategy for GM. In general, GM's financial condition has improved since the IPO, but the company continued to address challenges with its European operations. Specifically, GM's net income rose 43 percent—from about $6.5 billion in 2010 to about $9.3 billion in 2011, with the company achieving 11 straight quarters of profitability since its formation in July 2009. However, the company saw a decline in net income in 2012—from about $8.5 billion in the first three quarters of 2011 to about $5.1 billion in the first three quarters of 2012. GM officials reported that this decline was largely due to increased losses in the company's European operations, a region where the automotive industry as a whole has struggled. The company continues to post losses in Europe, with vehicle sales declining 7.4 percent between the first three quarters of 2011 and the first three quarters of 2012. In contrast, GM's North American sales increased 3.2 percent over 2011 levels for that same time period. The company has reported taking actions to help restructure its European operations and expects financial results to improve. The company has also recently made a number of other changes in an effort to improve its financial condition and flexibility. In June 2012, in an effort to de-risk its pension plans and further strengthen its balance sheet, GM announced that it would provide certain U.S. salaried retirees with a continued monthly payment administered and paid by The Prudential Insurance Company of America, and others with a voluntary lump-sum payment option, which it estimated would reduce its salaried pension obligation by about $29 billion. In November 2012, GM announced plans for its captive financing subsidiary, GM Financial, to acquire Ally Financial, Inc.'s International Operations in 14 countries, which the company expects to drive higher vehicle sales in China, Mexico, Europe, and Latin America. Also in November 2012, GM secured a new $11 billion revolving credit facility to help improve GM Financial's financial flexibility.
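As reported in the paragraph that follows, Treasury's December 2012 sale of GM shares raised the average price its remaining shares must reach, for full recovery of its investment, to roughly $72. The sketch below is a rough check of that figure, not Treasury's methodology; the implied pre-sale holding of about 500.1 million shares is derived from the reported sale and remainder rather than stated directly.

```python
# Rough check of the revised GM breakeven price reported below (a sketch,
# not Treasury's calculation). The ~500.1 million pre-sale shares are
# implied by the reported sale (200 million) and remainder (300.1 million).
shares_sold = 200_000_000        # shares sold in December 2012
sale_price = 27.50               # per-share price of the December 2012 sale
shares_remaining = 300_100_000   # shares Treasury planned to sell later
prior_breakeven = 54.00          # average price needed before the sale (May 2011 estimate)

amount_to_recoup = (shares_sold + shares_remaining) * prior_breakeven
remaining_after_sale = amount_to_recoup - shares_sold * sale_price
revised_breakeven = remaining_after_sale / shares_remaining

print(f"Revised breakeven: ${revised_breakeven:.2f} per share")
# Prints about $71.66, consistent with the "roughly $72" figure reported.
```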
In December 2012, two years after the GM IPO, Treasury announced that it would sell 200 million, or about 40 percent, of its remaining shares in the company and that it intended to sell the remaining 300.1 million shares through a pre-arranged written trading plan within the next 12 to 15 months, subject to market conditions. In May 2011, we reported that GM's share price would have to increase dramatically from the levels at that time to an average of more than $54 for Treasury to fully recoup its investment. Because the December 2012 sale price of $27.50 per share is considerably less than the breakeven level, GM's shares will now have to increase to roughly $72 per share, or more than double the average 2012 share price, for Treasury to fully recoup its investment (see fig. 7). In addition to its outstanding investments in GM, Treasury remains heavily invested in Ally Financial. According to Treasury officials, the department continues to explore all potential options for divesting its interest in Ally Financial, including public and private options such as a possible IPO or selling its equity in a private transaction. However, since we last reported on Ally Financial, the company has undergone a number of changes that could affect the timing of Treasury's exit. For instance, on May 14, 2012, Ally Financial's mortgage subsidiary, Residential Capital, LLC, and certain of its subsidiaries filed for Chapter 11 bankruptcy. The company is also in the process of selling its international business, which includes auto finance, insurance, and banking and deposit operations in Canada, Mexico, Europe, the United Kingdom, China, and South America. According to Ally Financial, contracts for each of these countries have been signed, and deal closings are expected to occur in stages throughout the first half of 2013. Ally Financial reported that these actions would improve the financial viability of the company and increase the likelihood of repaying Treasury. Ally's net income for the first three quarters of 2012 declined from the same period in 2011, decreasing from a positive $49 million in 2011 to a loss of $204 million in 2012. This loss is primarily attributable to charges related to the Residential Capital, LLC, bankruptcy filing in the second quarter of 2012. The challenges facing Ally Financial and reductions in the share prices of common stock holdings in GM highlight how market conditions contribute to the risks associated with the Automotive Industry Financing Program (AIFP) and the variability of lifetime cost estimates. The projected lifetime cost of AIFP has increased since 2010 and as of September 30, 2012, was estimated at $24.3 billion—about $700 million more than in September 2011 and almost $10 billion more than in September 2010. According to Treasury officials, Treasury continues to balance its goals of exiting as soon as practicable and maximizing taxpayer returns. On December 11, 2012, Treasury announced that it had agreed to sell all of its remaining shares of AIG common stock, and on December 14, 2012, announced that it had received payment from its final sale of AIG stock, bringing to an end the government's assistance to the company. Prior to TARP, in September 2008, AIG received assistance in the form of a loan from the Federal Reserve Bank of New York (FRBNY). In exchange, AIG provided shares of preferred stock to the AIG Credit Facility Trust that FRBNY created. These preferred shares were converted to common stock and then transferred to the Treasury.
In addition to this and other non-TARP support, Treasury provided assistance to AIG in November 2008 through TARP by purchasing preferred shares that were also later converted to common stock. In late January 2011, following the recapitalization of AIG, Treasury owned 1.655 billion common shares in AIG (1.092 billion TARP and 0.563 billion non-TARP) and a $20.3 billion preferred interest in two special purpose vehicle subsidiaries of AIG. In May 2011, Treasury began to sell its AIG shares. Since then and through six offerings, Treasury has sold all of its shares of AIG common stock, both TARP and non-TARP shares. The shares it sold to the public in May 2011 and March 2012 brought $29 per share; the shares it sold to the public in May and August of 2012 brought $30.50 per share; and the shares it sold to the public in September and December 2012 brought $32.50 per share. The share price, on a weighted average basis, was $31.18, exceeding Treasury's break-even price of $28.73 per share on an overall cost basis for both the TARP and non-TARP shares. At an average price of $31.18 per share, the returns include about $34 billion on the 1.092 billion TARP shares and $17.6 billion on the 563 million non-TARP shares—totaling over $51.6 billion in proceeds. (See table 2.) While it has sold all of its AIG common shares, Treasury continues to hold warrants to purchase approximately 2.7 million shares of AIG common stock. Treasury received approximately $72.8 billion of proceeds and cancelled $2 billion of its commitment, undrawn, on the AIG investments, exceeding the $69.8 billion total Treasury commitment to assist AIG by approximately $5 billion. As of December 2012, the total reflected the $54.3 billion generated on Treasury's common stock sales and the $20.3 billion AIG repaid on the preferred interests in two special purpose vehicle subsidiaries of AIG. In addition, Treasury said that it received $930 million in interest and participation rights on the special purpose vehicle investments. Treasury's returns from selling common stock have been in addition to the returns realized on other assistance to AIG. With AIG's final repayment of all FRBNY assistance to the company in 2012, FRBNY had realized returns in the form of interest, dividends, and fees in excess of the assistance it provided AIG through a revolving credit facility and several special purpose vehicles. As of September 30, 2012, prior to the December 2012 sale of AIG shares, Treasury lowered its expected lifetime cost from $24.3 billion to $15.3 billion for its TARP shares and increased its expected income from $12.8 billion to $17.6 billion for its non-TARP shares, changing what was an expected net estimated cost of $11.5 billion to a net expected gain of $2.3 billion for assistance to AIG. The Federal Reserve established the Term Asset-Backed Securities Loan Facility (TALF) in an effort to reopen the securitization markets and improve access to credit for consumers and businesses. As of September 30, 2012, Treasury is committed to contributing as much as $1.4 billion to provide credit protection to FRBNY for TALF loans should borrowers fail to repay and surrender the asset-backed securities (ABS) or commercial mortgage-backed securities (CMBS) pledged as collateral. To date, Treasury has disbursed $100 million for start-up costs related to the FRBNY-established TALF special-purpose vehicle, TALF LLC (see fig. 8).
TALF LLC receives a portion of the interest income earned on TALF loans (known as excess interest under the program) that can be used to purchase any borrower-surrendered collateral from FRBNY. FRBNY stopped issuing new TALF loans in 2010. Treasury officials report that FRBNY TALF loan balances, which were $29.7 billion in September 2010, had fallen to $11.3 billion as of September 30, 2011, and to $1.5 billion as of September 26, 2012. Agency officials also indicated that all TALF loans were current and that borrowers continued to pay down their loans. Excess interest in TALF LLC grew by more than 30 percent between October 2010 and September 2011, rising from $523 million to $685.6 million. Over the next year (September 2011 to September 2012), it grew to $754.2 million. If the balance of excess interest in TALF LLC exceeds the value of any surrendered collateral, Treasury may not need to disburse any additional funds for the program and could instead realize lifetime income because it will receive 90 percent of funds remaining in TALF LLC after all obligations are repaid and the program ends. Further, the equity that borrowers hold in TALF collateral has grown since TALF loans were first issued. As of September 30, 2012, Treasury estimated that TALF would result in a lifetime income of approximately $517 million. Treasury officials told us in September 2012 that they did not have any particular concerns about the CMBS market that would have an effect on current TALF holdings, and that prices remained strong throughout 2012. Despite these positive trends, the officials told us that FRBNY and Treasury staff will continue to monitor market conditions and credit rating agency actions that could affect TALF assets. As we have previously reported, market value fluctuations could affect future results. Treasury expects to exit TALF by 2015, although it does not have complete control over its exit because its role in TALF is secondary to that of the Federal Reserve. Treasury models loan repayments using TALF loan terms and data provided by the Federal Reserve and projects repayment schedules, collateral cash flows, prepayments, and performance loss rates. Based on these analyses, Treasury expects that the last TALF loan will be paid in 2015. No borrowers have surrendered TALF collateral to date, and all loans are current. However, should TALF LLC be required to purchase and manage TALF assets, Treasury could be involved in TALF beyond 2015, as TALF assets may have maturity dates that extend beyond the loan maturity dates. Treasury created PPIP, partnering with private funds, to purchase troubled mortgage-related assets from financial institutions. Treasury provided the PPIFs with equity and loan commitments of approximately $7.4 billion and $14.7 billion, respectively, but disbursed a total of $18.6 billion. PPIFs have finished their 3-year investment period, which started at each fund's inception date. There were nine PPIFs established through PPIP, the first of which was liquidated in the first quarter of 2010 and the last terminated in December 2012. PPIFs with terminated investment periods can no longer draw money from Treasury or make new investments under this authority, and Treasury has not granted approval for any new draws under the PPIP program. With the investment periods ended, PPIFs must begin unwinding their positions and completely divest within 5 years, although Treasury can decide to extend this period for up to 2 additional years for each PPIF.
Treasury created PPIP, partnering with private funds, to purchase troubled mortgage-related assets from financial institutions. Treasury provided the PPIFs with equity and loan commitments of approximately $7.4 billion and $14.7 billion, respectively, but disbursed a total of $18.6 billion. PPIFs have finished their 3-year investment periods, which started at each fund's inception date. There were nine PPIFs established through PPIP, the first of which was liquidated in the first quarter of 2010 and the last of which terminated in December 2012. PPIFs with terminated investment periods can no longer draw money from Treasury or make new investments under this authority, and Treasury has not granted approval for any new draws under the PPIP program. With the investment periods ended, PPIFs must begin unwinding their positions and completely divest within 5 years, although Treasury can decide to extend this period for up to 2 additional years for each PPIF. According to Treasury, the PPIF liquidated in the first quarter of 2010 yielded Treasury a profit of $20.1 million on its $156.3 million equity investment, and the PPIF whose investment period ended in September 2011 returned all of its equity proceeds to Treasury and fully wound down its fund. Three additional PPIFs have returned 100 percent of Treasury's and private investors' equity investments with equity gains and have fully repaid Treasury's debt. According to Treasury, these three funds have a small amount of capital remaining to unwind their operations. The investment periods for the remaining PPIFs have subsequently ended, and those funds have thus begun to unwind. According to Treasury, as of September 30, 2012, PPIFs had accessed about 86 percent of the equity and debt available through Treasury and private investors and had repaid Treasury a total of $6.7 billion in debt financing. In addition, since September 30, 2012, Treasury has received around $5.5 billion of payments under PPIP. As of September 30, 2012, Treasury estimated that PPIP will ultimately result in lifetime income of about $2.4 billion (see fig. 9). As of November 5, 2012, the four PPIFs that had sold all of their remaining investments and returned substantially all of the proceeds had generated more than $1.4 billion in realized gains and income on Treasury's equity and warrant investments. However, according to Treasury, the ultimate results will depend on a variety of factors, including when PPIFs choose to divest and the performance of the assets they hold. Treasury officials said that their role while PPIFs were in their investment periods was to follow the progress of each PPIF's investment strategy and the risks and target returns of the portfolios. In this role, Treasury staff and contractors monitored compliance with PPIP terms. With the end of the PPIFs' investment periods, Treasury officials said that Treasury would focus on the strategies PPIFs used to maintain and ultimately divest themselves of their portfolios. Also, Treasury officials said that the contractors hired to provide investment fund consulting and analysis of PPIF portfolios would continue to provide such services in this postinvestment period. Current PPIP terms stipulate an exit by 2017. Unlike in some other TARP programs, Treasury officials do not face the same consideration of competing goals in exiting this program because the terms of the program dictate when the PPIFs must wind down. However, Treasury officials noted that PPIFs can liquidate at any time before the exit date. Officials also noted that the program was designed to discourage PPIF fund managers from keeping their investments outstanding longer than needed: after the investment period expired, PPIFs would no longer have access to debt financing from Treasury unless permitted by provisions within the loan agreement and approved by Treasury. Now that the investment periods have terminated, PPIFs must pay down their Treasury loans and make distributions to their partners as the PPIFs receive proceeds from RMBS and CMBS payments and dispositions. Officials noted that this program structure created an incentive for PPIFs to sell their assets promptly once their access to Treasury ended. The officials also said that they were not concerned about any effects of the PPIFs' eventual winding down on markets, as the 5-year period for unwinding would likely mitigate them.
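Two figures implied by the numbers above, offered only as arithmetic checks (the 12.9 percent rate of return is our derivation, not one Treasury reports):

\[
\$7.4\text{ billion (equity)} + \$14.7\text{ billion (loans)} = \$22.1\text{ billion in total Treasury commitments},
\]
\[
\frac{\$20.1\text{ million profit}}{\$156.3\text{ million equity investment}} \approx 12.9\% \text{ return on the first liquidated PPIF}.
\]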
To help meet EESA's goals of preventing avoidable foreclosures and preserving homeownership, Treasury allocated $45.6 billion in TARP funds to three mortgage programs: Making Home Affordable (MHA), which has several components, including the Home Affordable Modification Program (HAMP); the Housing Finance Agency Innovation Fund for the Hardest Hit Housing Markets (Hardest Hit Fund or HHF); and the Department of Housing and Urban Development's (HUD) Federal Housing Administration (FHA) Refinance of Borrowers in Negative Equity Positions (FHA Short Refinance or FHASR). The bulk of the funds allocated to TARP programs to help distressed borrowers avoid foreclosure, $40.1 billion, had not yet been disbursed as of September 30, 2012. The estimated lifetime cost for the mortgage programs is $45.6 billion. Unlike for the programs discussed previously, Treasury will continue to disburse TARP funds under the mortgage programs for several more years. Specifically, homeowners have until December 31, 2013, to apply for assistance under MHA programs, and Treasury will continue to pay incentives for up to 5 years after the last permanent modification begins. Treasury's obligation under FHASR will continue until September 2020. Unlike TARP expenditures under some other programs, such as those that provided capital infusions to banks, expenditures under these programs are generally direct outlays of funds with no provision for repayment. The centerpiece of Treasury's MHA program is HAMP, which seeks to help eligible borrowers facing financial distress avoid foreclosure by reducing their monthly first-lien mortgage payments to more affordable levels. Treasury announced HAMP (now called HAMP Tier 1) on February 18, 2009. Generally, HAMP Tier 1 is available to qualified borrowers who occupy their properties as their primary residences and whose first-lien mortgage payment is more than 31 percent of their monthly gross income. Treasury shares with mortgage holders or investors the cost of lowering borrowers' monthly payments to 31 percent of monthly income for a 5-year period. In an effort to reach more borrowers, Treasury established HAMP Tier 2, which servicers began implementing in June 2012. HAMP Tier 2 is available for either owner-occupied or rental properties, and borrowers' monthly mortgage payments prior to modification do not have to exceed a specified threshold. Treasury also provides incentive payments for modifications under HAMP Tier 1 and HAMP Tier 2 to servicers and investors, and to borrowers under HAMP Tier 1. Treasury originally announced that up to 3 million to 4 million borrowers would be helped under HAMP. However, Treasury reported that through September 2012 only about 1.1 million permanent modifications had been started. Monthly activity peaked during the early part of 2010 and has since experienced a significant decline, as shown in figure 10. Since June 1, 2010, when Treasury began requiring all servicers to perform full income verification to determine a borrower's eligibility for HAMP before offering a trial modification, the monthly number of new trial modifications reported by servicers has remained below 40,000. Monthly trial modification starts during September 2012 were the lowest reported since the initial roll-out of the program in 2009. Treasury has not yet published data on the number of trial periods or permanent modifications started under HAMP Tier 2, according to Treasury officials.
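As a simple illustration of the HAMP Tier 1 affordability target described above (the income and payment figures here are hypothetical, chosen only for illustration): a borrower with gross monthly income of $4,000 and a pre-modification first-lien payment of $1,600, or 40 percent of income, exceeds the 31 percent threshold; under a Tier 1 modification the payment would be lowered to

\[
0.31 \times \$4{,}000 = \$1{,}240 \text{ per month},
\]

with Treasury and the mortgage holder or investor sharing the cost of the $360 monthly reduction for the 5-year period.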
In addition to HAMP, Treasury has implemented a number of additional MHA components that use TARP funds to augment or complement the HAMP first-lien modification program:

Home Affordable Foreclosure Alternatives Program. The Home Affordable Foreclosure Alternatives Program offers assistance to homeowners looking to exit their homes through a short sale or deed-in-lieu of foreclosure. Treasury offers incentives to eligible homeowners, servicers, and investors under the program. Through September 2012, servicers reported completing about 74,000 short sales and 1,900 deeds-in-lieu under the program.

Home Price Decline Protection Incentives. This program provides investors with additional incentives to modify loans under HAMP on properties located in areas where home prices have recently declined and where investors are concerned that price declines may persist. Through September 2012, Treasury had paid about $269 million to investors in program incentives to support the HAMP modification of more than 154,000 loans.

Principal Reduction Alternative (PRA). PRA requires servicers to evaluate the benefit of principal reduction for mortgages that have a loan-to-value ratio of 115 percent or more and that are not owned or guaranteed by Fannie Mae or Freddie Mac. Servicers are required to evaluate homeowners for PRA when evaluating them for a HAMP first-lien modification but are not required to actually reduce principal as part of the modification. Through September 2012, servicers reported having started about 78,000 permanent modifications with principal reductions under PRA.

Second Lien Modification Program. The Second Lien Modification Program provides additional assistance to homeowners receiving a HAMP first-lien permanent modification who have an eligible second lien with participating servicers. When a borrower's first lien is modified under HAMP, participating program servicers must offer to modify the borrower's eligible second lien according to a defined protocol. This assistance can result in a modification or even full or partial extinguishment of the second lien. On February 16, 2012, Treasury doubled the amount of incentives provided on second-lien modifications that include principal reduction, effective for modifications on or after June 1, 2012. Through September 2012, servicers reported starting about 97,000 second-lien modifications, of which about 24,000 fully extinguished the second lien.

Government-insured or guaranteed loans (FHA-HAMP and RD-HAMP). FHA and the Department of Agriculture's Rural Housing Service (RHS) have implemented modification programs similar to HAMP Tier 1 for FHA-insured and RHS-guaranteed first-lien mortgage loans. Each of these programs results in loan modifications that provide borrowers with an affordable monthly mortgage payment equal to 31 percent of the homeowner's monthly gross income and requires borrowers to complete a trial payment plan before permanent modification. If a modified FHA-insured or RHS-guaranteed mortgage loan meets Treasury's eligibility criteria, the borrower and servicer can receive TARP-funded incentive payments from Treasury. Treasury reported that nearly 9,100 permanent modifications receiving Treasury FHA-HAMP incentives had been started through September 2012. According to Treasury officials, servicers had reported only 11 modifications that qualified for Rural Development (RD)-HAMP incentives as of September 30, 2012.

Treasury/FHA Second Lien Program (FHA2LP).
Under this program, Treasury provides incentive payments to servicers and investors if they partially or fully extinguish second liens associated with an FHA Short Refinance. Servicers can receive a one-time payment of $500 for each second lien extinguished under the program, and investors are eligible for incentive payments based on the amount of principal extinguished. According to Treasury, no second liens had been extinguished and no incentive payments made under the Treasury/FHA Second Lien Program as of September 30, 2012. Treasury obligated $29.9 billion to MHA, of which nearly $4.0 billion had been disbursed as of September 2012 (see fig. 11). Treasury estimated that an additional $6.5 billion could be spent on incentives for HAMP modifications and other MHA interventions that were already in effect as of September 2012, assuming none of these modifications default. After combining these potential incentive payments with incentives already paid, Treasury estimated that $19.4 billion of the $29.9 billion remained available for future modifications and other interventions.
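The components of the $29.9 billion MHA obligation reconcile exactly as described above:

\[
\$29.9\text{ billion obligated} - \$4.0\text{ billion disbursed} - \$6.5\text{ billion in potential incentives on existing modifications} = \$19.4\text{ billion available for future modifications and interventions}.
\]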
In addition to the MHA program, Treasury has allocated $7.6 billion in TARP funds for HHF, which seeks to help homeowners in 18 states hit hardest by unemployment and house price declines (Alabama, Arizona, California, Florida, Georgia, Illinois, Indiana, Kentucky, Michigan, Mississippi, Nevada, New Jersey, North Carolina, Ohio, Oregon, Rhode Island, South Carolina, and Tennessee) plus the District of Columbia. States were chosen because they had experienced steep home price declines, high levels of unemployment in the economic downturn, or both. According to Treasury, each state housing agency gathered public input to implement programs designed to meet the distinct challenges homeowners in its state were facing. As a result, HHF programs vary across states, but services offered often include mortgage payment assistance for unemployed homeowners and reinstatement assistance to cover arrearages (e.g., a one-time payment to bring a borrower's delinquent mortgage current). Treasury reported that it had disbursed approximately $1.5 billion to the states for the HHF program as of September 2012. States reported having spent about $742 million through September 2012 to help more than 77,000 homeowners since the program began, and $199 million on administrative expenses. Treasury has also allocated $8.1 billion in TARP funds to the FHA Short Refinance program to enable homeowners whose mortgages exceed the value of their homes to refinance into more affordable mortgages. This opportunity allows borrowers who are current on their mortgage, or who successfully complete a trial period if they are delinquent, to qualify for an FHA Short Refinance loan if the lender or investor writes off at least 10 percent of the unpaid principal balance of the original first-lien mortgage. Treasury entered into a letter of credit facility with Citibank in order to fund up to $8 billion of any losses associated with providing FHA Short Refinance loans. Treasury's commitment extends until September 2020, and to the extent that FHA experiences losses on those refinanced mortgage loans, Treasury will pay claims up to the predetermined percentage after FHA has paid its portion of the claim. Treasury will also pay a fee to the issuer of the letter of credit based on the amount of funds drawn against the letter of credit and any unused amount. The terms of the agreement cap the fee at $117 million. As of September 30, 2012, FHA had insured 1,774 loans with a total face value of $307 million under the refinance program. As of September 30, 2012, Treasury had paid about $7.2 million in fees to Citibank, which issued the letter of credit. Treasury also placed $50 million in a reserve account to cover any future loss claims on these loans, although no funds have been disbursed for loss claim payments. Through its monitoring of processes put in place to improve servicers' communication with borrowers and resolution of disputes, Treasury has identified some implementation challenges but has also found improvements in performance. One process, which Treasury announced in May 2011, requires large servicers participating in HAMP to identify a "relationship manager" to serve as the borrower's single point of contact throughout the delinquency or imminent default resolution process, effective September 1, 2011. By implementing this requirement, called the single point of contact requirement, Treasury was seeking to enhance communications between servicers and borrowers during the delinquency resolution process. To monitor servicers' implementation of the single point of contact requirement, Treasury adopted compliance review procedures to determine whether servicers (1) had established a single point of contact in accordance with MHA requirements, (2) were monitoring assignments and activities to verify that they were in accordance with internal policies and MHA guidance, and (3) had created written notices of assignments or changes and sent accurate, timely information on them to borrowers. Following the effective date of Treasury's requirement, Treasury's compliance agent, MHA-C, used these procedures to assess servicers' implementation of the single point of contact requirement during dedicated compliance reviews, according to Treasury. These reviews revealed some initial challenges with implementing the requirement, including delays in assigning relationship managers to borrowers and poor communication of assignments and reassignments. Servicers' performance was reflected in the qualitative measures of internal controls included in the servicer assessments that Treasury publishes quarterly, according to Treasury. The reviews also identified areas in which the servicers differed in their implementation of the requirements, such as the precise timing of the assignment of relationship managers. Treasury officials said that servicers have many options for appropriately implementing the requirement, given the flexibility provided in its guidance, and noted that servicers were making progress in addressing the issues identified in the initial compliance reviews. However, Treasury officials also stated that they were considering whether to issue additional guidance to clarify the requirements and to help ensure greater consistency across servicers. Treasury put in place another process aimed at enhancing borrower assistance: a case escalation process for resolving borrower inquiries and disputes. In June 2010, we reported that it was unclear whether the process that Treasury had established for resolving concerns about HAMP eligibility determinations was effective. The escalation process in place at that time lacked standard requirements for complaint tracking, and Treasury had not clearly communicated the availability of the escalation process through the HOPE Hotline to borrowers.
In November 2010, Treasury announced requirements for servicers to adopt a standard process for resolving certain borrower MHA disputes, called escalated cases, effective February 1, 2011. Treasury now requires that servicers have procedures and personnel in place to provide timely and appropriate responses to escalated cases. Escalated cases include, but are not limited to, allegations that the servicer did not assess the borrower for the applicable MHA program(s) according to program guidelines; inquiries regarding inappropriate program denials or the content of a nonapproval notice; and disputes or inquiries about the initiation or continuance of a foreclosure action in violation of program guidelines. In addition, MHA Help and the HAMP Solution Center (collectively referred to as the MHA support centers) can refer escalated cases to the servicer on behalf of either borrowers or third parties assisting borrowers. MHA Help, which is a team of specialists dedicated exclusively to working with borrowers and servicers to resolve escalated MHA cases, receives cases from borrowers who call the HOPE Hotline. A third party, such as a housing counselor, may escalate a case through the HAMP Solution Center. In its capacity as the MHA program administrator, Fannie Mae staffs the HAMP Solution Center and oversees vendors that staff MHA Help and the HOPE Hotline, according to Treasury. In order to resolve a case escalated through these support centers, the servicer must obtain the escalating center's concurrence with the proposed resolution. If the case cannot be resolved at the support center, it is forwarded to Treasury, which works with the servicer to resolve the issue. Treasury has adopted procedures to monitor the performance of servicers and borrower support centers in resolving escalated cases. Treasury currently publicly reports on one servicer performance measure related to escalations: the average number of days required to resolve escalated cases involving loans not owned or guaranteed by Fannie Mae or Freddie Mac. Treasury established a target of 30 calendar days or fewer (including processing time by the support center). In the most recent two quarters for which data were available (the second and third quarters of 2012), the nine largest MHA servicers achieved that target. In the two prior quarters, one of these servicers did not achieve the target. In addition to reporting on the timeliness of the escalation process, Treasury conducts other reviews to monitor the program administrator's management of its vendors and the outcomes of the process. The program administrator prepares weekly and monthly performance reports for the HOPE Hotline and the MHA support centers. These reports include case escalation information for the larger MHA servicers. Treasury officials in the Office of Financial Agents and the Homeownership Preservation Office review these reports with the program administrator and, as necessary, its vendors. In addition, Treasury reviews a sample of escalated case files monthly to ensure that staff at the support centers are providing the services Treasury expects of them.
Staff from the Homeownership Preservation Office score the files, five from each support center, on seven criteria that indicate whether (1) the resolution template was properly completed; (2) the full course of the resolution could be easily identified and understood; (3) the case was resolved according to the case escalation process; (4) MHA policy and guidance were appropriately applied; (5) engagement with the servicer led to timely closure of the case; (6) reasonable efforts had been made to reach the requestor and resolve the inquiry; and (7) the support center representative demonstrated homeowner advocacy. Treasury began scoring escalated case files in January 2012, and according to documents Treasury provided to us, the support centers' scores improved substantially between January 2012 and June 2012. Treasury officials said that they had provided training for support center staff to serve as advocates for homeowners and had followed up with additional training for this purpose. The most notable improvement in the support centers' scores was in the area of demonstrating homeowner advocacy. Treasury's continued attention to the resolution of escalated cases and the performance of the support centers and servicers is instrumental in helping to ensure that eligible borrowers receive appropriate assistance. We provided a draft of this report to Treasury for its review and comment. In its written comments, reproduced in appendix IV, Treasury generally concurred with our findings. We also provided relevant portions of the draft report to Ally Financial and General Motors to verify the factual information they provided about their companies and business trends. Treasury, Ally Financial, and General Motors provided technical comments that we have incorporated as appropriate. We are sending copies of this report to the Financial Stability Oversight Board, the Special Inspector General for TARP, interested congressional committees and members, and Treasury. The report also is available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact A. Nicole Clowers at (202) 512-8678 or clowersa@gao.gov for questions about non-mortgage-related TARP programs, or Mathew Scire at (202) 512-8678 or sciremj@gao.gov for questions about mortgage-related TARP programs. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. The objectives in this report were to examine the condition and status of (1) nonmortgage-related Troubled Asset Relief Program (TARP) programs and (2) TARP mortgage programs, including Treasury's efforts to ensure that servicers are implementing two new requirements. To assess the condition and status of all the nonmortgage-related programs initiated under TARP, we collected and analyzed data about program utilization and assets held, as applicable, focusing primarily on financial information that we had audited in the Office of Financial Stability's (OFS) financial statements, as of September 30, 2012. In some instances we provided more recent, unaudited financial information. The financial information includes the types of assets held in the program, obligations that represent the highest amount ever obligated for a program (to provide historical information on total obligations), disbursements, and income.
We also provide information on program start dates, defining them based on the start of the first activity under a program, and we provide program end dates, based on official announcements or program terms from the Department of the Treasury (Treasury). Finally, we provide approximate program exit dates (either estimated by Treasury or actual if the exit has already occurred) that reflect the time when a program will no longer hold assets that need to be managed. We also used OFS cost estimates for TARP that we audited as part of the financial statement audit. In addition, we tested OFS's internal controls over financial reporting as they relate to our annual audit of OFS's financial statements. The financial information used in this report is sufficiently reliable to assess the condition and status of TARP programs based on the results of our audits of the fiscal years 2009, 2010, 2011, and 2012 financial statements for TARP. Further, we reviewed Treasury documentation such as program terms, press releases, and reports on TARP programs and costs. Also, we interviewed OFS program officials to determine the current status of each TARP program and the role of TARP staff while most programs continue to unwind, and to update what is known about exit considerations for TARP programs. Other TARP officials we interviewed included those responsible for financial reporting. Additionally, in reporting on these programs and their exit considerations, we leveraged our previous TARP reports and publications from the Special Inspector General for TARP, as appropriate. In addition, we did the following: For the Capital Purchase Program, we used OFS's reports to describe the status of the program, including the amount of investments outstanding, the number of institutions that had exited the program, and the amount of dividends paid. In addition, we reviewed Treasury's press releases on the program and interviewed officials from Treasury. For the Community Development Capital Initiative, we interviewed program officials to determine what exit concerns Treasury has for the program. To update the status of the Automotive Industry Financing Program and Treasury's plans for managing its investment in the companies, we leveraged our past work and reviewed information on Treasury's plans for overseeing its remaining financial interests in General Motors (GM) and Ally Financial, including Treasury reports. To obtain information on the current financial condition of the companies, we reviewed information on GM's and Ally Financial's finances and operations, including financial statements and industry analysts' reports. We also interviewed officials from Treasury. To update the status of the American International Group, Inc. (AIG) Investment Program (formerly the Systemically Significant Failing Institutions Program), we reviewed relevant documents from Treasury and other parties. For the AIG Investment Program, these documents included Emergency Economic Stabilization Act of 2008 (EESA) monthly 105(a) reports provided periodically to Congress by Treasury, public information made available by the Federal Reserve Bank of New York, and other relevant documentation such as AIG's financial disclosures and Treasury's press releases. We also interviewed officials from Treasury. For the Term Asset-Backed Securities Loan Facility (TALF), we reviewed program terms and requested data from Treasury about loan prepayments and TALF LLC activity.
Additionally, we interviewed OFS officials about their role in the program as it continues to unwind. To update the status of the Public-Private Investment Program, we analyzed program quarterly reports, term sheets, and other documentation related to the public-private investment funds. We also interviewed OFS staff responsible for the program to determine the status of the program while it remains in active investment status. To obtain the final status of the Small Business Administration (SBA) 7(a) Securities Purchase Program, which Treasury has exited and for which it no longer holds assets to manage, we reviewed Treasury's recent reports and leveraged our past work. To assess the status of TARP-funded mortgage programs and Treasury's efforts to ensure servicers are implementing the Making Home Affordable (MHA) single point of contact and resolution of escalated cases requirements, we reviewed Treasury reports, guidance, and documentation and interviewed Treasury officials. Specifically, to determine the status of Treasury's TARP-funded housing programs, we obtained and reviewed Treasury's published reports on the programs and servicer performance, as well as guidelines and related updates issued by Treasury for each of the programs. In addition, we obtained information from and interviewed Treasury officials about the status of the TARP-funded mortgage programs, including the actions Treasury had taken to address our prior recommendations. To assess the status of Treasury's efforts to ensure servicers are implementing the MHA single point of contact requirement, we reviewed Treasury's compliance review procedures and review findings related to single point of contact for several of the largest MHA servicers. To assess Treasury's oversight of the escalated case resolution process, we obtained documentation from Treasury of its process for monitoring the MHA borrower support centers (MHA Help and the Home Affordable Modification Program (HAMP) Solution Center) and reviewed monthly performance reports. We also interviewed Treasury officials about their oversight of the single point of contact requirement and case escalation process. We conducted this performance audit from September 2012 to January 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The SBA 7(a) Securities Purchase Program was launched as part of TARP to help facilitate the recovery of the secondary market for small business loans. Under this program, Treasury purchased securities that comprised the guaranteed portion of SBA 7(a) loans. These loans finance a wide range of small business needs, including working capital, machinery, equipment, furniture, and fixtures. Treasury originally invested $367 million in 31 SBA 7(a) securities between March and September 2010. These securities comprised more than 1,000 loans from 17 different industries, including retail, food services, manufacturing, scientific and technical services, health care, and educational services. Since Treasury began its purchases, the SBA 7(a) market has recovered, with new SBA 7(a) loan volumes returning to precrisis levels.
Treasury sold its eight remaining securities in the portfolio for approximately $63.2 million in proceeds on January 24, 2012. That sale marked the wind-down of this TARP program. In total, Treasury recovered $376 million through sales ($334 million) and principal and interest payments ($42 million) over the life of the SBA 7(a) Securities Purchase Program. After considering Treasury's cost of financing, the SBA 7(a) Securities Purchase Program resulted in income of approximately $4 million to taxpayers on Treasury's original investment of $367 million (see fig. 12).
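The SBA 7(a) figures above reconcile as follows; the roughly $5 million implied cost of financing is our inference from the reported net income, not a figure Treasury publishes:

\[
\$334\text{ million (sales)} + \$42\text{ million (principal and interest)} = \$376\text{ million recovered},
\]
\[
\$376\text{ million} - \$367\text{ million invested} = \$9\text{ million gross gain, or about } \$4\text{ million after Treasury's cost of financing}.
\]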
As we noted in our 2012 annual TARP report, Treasury has addressed several staffing challenges that we had previously identified, and the overall staffing numbers, which began to decline in 2011, continued to decrease through September 30, 2012 (see fig. 13). Treasury's Office of Financial Stability (OFS) used employees (including term employees) and detailees from other Treasury offices and other federal agencies to meet its workload requirements. OFS's overall staffing numbers declined from 198 in 2011 to 163 in 2012, but staffing levels within individual OFS offices have fluctuated according to the resources needed. Many departing OFS staff were not replaced because their skill sets were no longer needed; for example, many staff in the Chief Investment Office were not replaced as the investment programs wound down. According to Treasury officials, Treasury evaluates departing staff on a case-by-case basis to determine whether a vacancy needs to be filled and whether present staff can cover the departing staff's responsibilities; only one new staff person was added in 2012. In addition, OFS officials stated that OFS had detailed some of its staff to other Treasury programs, as Treasury had exited several programs for which it no longer had assets to manage and many of the other TARP programs were winding down. Treasury officials continue to anticipate that staffing levels in OFS offices will decrease over time, and some staff have moved or may relocate to other parts of Treasury or other federal agencies. Treasury also has addressed several turnover-related staffing issues. We previously reported that a number of staff from the OFS leadership team departed in 2010 and 2011, and in 2013 the terms of two other leadership team members are scheduled to expire. As we previously reported, OFS addressed this leadership challenge by replacing the Assistant Secretary of Financial Stability with OFS's former Chief Counsel in 2011 and by replacing departing OFS leaders with existing OFS staff members (generally in term positions). We also reported that OFS had been addressing other staffing issues, including implementation of its staffing plan. Since TARP was established, Treasury has relied on the private sector to assist OFS with TARP administration and operations. Treasury engages with private sector firms through financial agency agreements, contracts, and blanket purchase agreements. According to OFS procedures, financial agency agreements are used for services that cannot be provided with existing Treasury or contractor resources. Specifically, Treasury has relied on financial agents for asset management, transaction structuring, disposition services, custodial services, and administration and compliance support for the TARP housing assistance programs. In addition, Treasury uses TARP contracts for a variety of legal, investment consulting, accounting, and other services and supplies. Through September 30, 2012, Treasury had awarded 19 financial agency agreements, 13 of which remained active, and had awarded or used 131 contracts and blanket purchase agreements, of which about 40 percent remained active. As shown in table 3, the obligated value of the financial agency agreements and contracts totaled more than $900 million, with most of the funding going to financial agency agreements. The increase in obligations since 2010 is largely due to Treasury's reliance on financial agents to support the oversight of TARP assets and the continued implementation of the housing programs over the last couple of years. Also, 3 of Treasury's financial agency agreements for transaction structuring and disposition services remained active. The vast majority of the financial agency agreement obligations shown above (approximately $525 million) are for the Federal National Mortgage Association (Fannie Mae) and the Federal Home Loan Mortgage Corporation (Freddie Mac), which provide administrative and compliance services, respectively, for the TARP housing programs. The two largest contracts are $35 million with PricewaterhouseCoopers, LLP, for internal control services and $17 million with Cadwalader, Wickersham & Taft, LLP, for legal services. Treasury also has encouraged small and minority- and women-owned businesses to pursue opportunities for TARP contracts and financial agency agreements. The majority of these businesses participating in TARP are subcontractors. Treasury has taken a number of actions since 2008, in part in response to recommendations we made, to establish a structured system to manage potential conflicts of interest involving its contractors and financial agents. The system is based on a regulation Treasury issued in interim form in 2009 and in final form in 2011 that prohibits retained entities from engaging in activities that create organizational or personal conflicts of interest without a waiver or mitigation under a Treasury-approved plan. The regulation sets forth standards to address actual and potential conflicts that may arise, establishes responsibilities for contractors and financial agents in preventing conflicts from occurring, and outlines Treasury's process for reviewing and addressing conflicts. Treasury has developed and implemented a multifaceted process, managed by OFS's Office of the Chief Compliance Officer, to oversee potential conflicts of interest. The process includes reviewing proposed contracts and financial agency agreements, approving contractor and financial agent mitigation plans, responding to conflict-of-interest inquiries from contractors and financial agents, verifying that contractors and financial agents regularly certify that they are preventing or properly mitigating actual or potential conflicts of interest, and preparing feedback reports that provide a snapshot of how each contractor and financial agent is performing with respect to conflict-of-interest requirements. In addition, because the monitoring of conflicts of interest is based to some degree on self-reported information that contractors and financial agents submit, Treasury began conducting onsite design and compliance reviews in 2011. These reviews are designed to evaluate the effectiveness of contractors' and financial agents' internal controls and procedures for identifying and addressing conflicts of interest. In addition to the contacts named above, Dan Garcia-Diaz; Gary Engel; and William T.
Woods (lead directors); Marcia Carlsen; Lynda Downing; Harry Medina; Joseph O'Neill; John Oppenheim; Raymond Sendejas; and Karen Tremba (lead assistant directors); Donald Brown; Emily Chalmers; Rachel DeMarcus; Sarah Farkas; John Forrester; Christopher Forys; Jackie Hamilton; Heather Krause; Risto Laboski; Aaron Livernois; John Lord; Marc Molino; Dragan Matic; and Erin Schoening have made significant contributions to this report.

Treasury Continues to Implement Its Oversight System for Addressing TARP Conflicts of Interest. GAO-12-984R. Washington, D.C.: September 18, 2012.

Troubled Asset Relief Program: Further Actions Needed to Enhance Assessments and Transparency of Housing Programs. GAO-12-783. Washington, D.C.: July 19, 2012.

Foreclosure Mitigation: Agencies Could Improve Effectiveness of Federal Efforts with Additional Data Collection and Analysis. GAO-12-296. Washington, D.C.: June 28, 2012.

Troubled Asset Relief Program: Government's Exposure to AIG Lessens as Equity Investments Are Sold. GAO-12-574. Washington, D.C.: May 7, 2012.

Capital Purchase Program: Revenues Have Exceeded Investments, but Concerns about Outstanding Investments Remain. GAO-12-301. Washington, D.C.: March 8, 2012.

Management Report: Improvements Are Needed in Internal Control over Financial Reporting for the Troubled Asset Relief Program. GAO-12-415R. Washington, D.C.: February 13, 2012.

Troubled Asset Relief Program: As Treasury Continues to Exit Programs, Opportunities to Enhance Communication on Costs Exist. GAO-12-229. Washington, D.C.: January 9, 2012.

Financial Audit: Office of Financial Stability (Troubled Asset Relief Program) Fiscal Years 2011 and 2010 Financial Statements. GAO-12-169. Washington, D.C.: November 10, 2011.

Troubled Asset Relief Program: Status of GAO Recommendations to Treasury. GAO-11-906R. Washington, D.C.: September 16, 2011.

Troubled Asset Relief Program: The Government's Exposure to AIG Following the Company's Recapitalization. GAO-11-716. Washington, D.C.: July 28, 2011.

Troubled Asset Relief Program: Results of Housing Counselors Survey on Borrowers' Experiences with the Home Affordable Modification Program. GAO-11-367R. Washington, D.C.: May 26, 2011.

Troubled Asset Relief Program: Survey of Housing Counselors about the Home Affordable Modification Program, an E-supplement to GAO-11-367R. GAO-11-368SP. Washington, D.C.: May 26, 2011.

TARP: Treasury's Exit from GM and Chrysler Highlights Competing Goals, and Results of Support to Auto Communities Are Unclear. GAO-11-471. Washington, D.C.: May 10, 2011.

Management Report: Improvements Are Needed in Internal Control Over Financial Reporting for the Troubled Asset Relief Program. GAO-11-434R. Washington, D.C.: April 18, 2011.

Troubled Asset Relief Program: Status of Programs and Implementation of GAO Recommendations. GAO-11-476T. Washington, D.C.: March 17, 2011.

Troubled Asset Relief Program: Treasury Continues to Face Implementation Challenges and Data Weaknesses in Its Making Home Affordable Program. GAO-11-288. Washington, D.C.: March 17, 2011.

Troubled Asset Relief Program: Actions Needed by Treasury to Address Challenges in Implementing Making Home Affordable Programs. GAO-11-338T. Washington, D.C.: March 2, 2011.

Troubled Asset Relief Program: Third Quarter 2010 Update of Government Assistance Provided to AIG and Description of Recent Execution of Recapitalization Plan. GAO-11-46. Washington, D.C.: January 20, 2011.

Troubled Asset Relief Program: Status of Programs and Implementation of GAO Recommendations. GAO-11-74. Washington, D.C.: January 12, 2011.
Financial Audit: Office of Financial Stability (Troubled Asset Relief Program) Fiscal Years 2010 and 2009 Financial Statements. GAO-11-174. Washington, D.C.: November 15, 2010.

Troubled Asset Relief Program: Opportunities Exist to Apply Lessons Learned from the Capital Purchase Program to Similarly Designed Programs and to Improve the Repayment Process. GAO-11-47. Washington, D.C.: October 4, 2010.

Troubled Asset Relief Program: Bank Stress Test Offers Lessons as Regulators Take Further Actions to Strengthen Supervisory Oversight. GAO-10-861. Washington, D.C.: September 29, 2010.

Financial Assistance: Ongoing Challenges and Guiding Principles Related to Government Assistance for Private Sector Companies. GAO-10-719. Washington, D.C.: August 3, 2010.

Troubled Asset Relief Program: Continued Attention Needed to Ensure the Transparency and Accountability of Ongoing Programs. GAO-10-933T. Washington, D.C.: July 21, 2010.

Management Report: Improvements are Needed in Internal Control Over Financial Reporting for the Troubled Asset Relief Program. GAO-10-743R. Washington, D.C.: June 30, 2010.

Troubled Asset Relief Program: Treasury's Framework for Deciding to Extend TARP Was Sufficient, but Could be Strengthened for Future Decisions. GAO-10-531. Washington, D.C.: June 30, 2010.

Troubled Asset Relief Program: Further Actions Needed to Fully and Equitably Implement Foreclosure Mitigation Programs. GAO-10-634. Washington, D.C.: June 24, 2010.

Debt Management: Treasury Was Able to Fund Economic Stabilization and Recovery Expenditures in a Short Period of Time, but Debt Management Challenges Remain. GAO-10-498. Washington, D.C.: May 18, 2010.

Troubled Asset Relief Program: Update of Government Assistance Provided to AIG. GAO-10-475. Washington, D.C.: April 27, 2010.

Troubled Asset Relief Program: Automaker Pension Funding and Multiple Federal Roles Pose Challenges for the Future. GAO-10-492. Washington, D.C.: April 6, 2010.

Troubled Asset Relief Program: Home Affordable Modification Program Continues to Face Implementation Challenges. GAO-10-556T. Washington, D.C.: March 25, 2010.

Troubled Asset Relief Program: Treasury Needs to Strengthen Its Decision-Making Process on the Term Asset-Backed Securities Loan Facility. GAO-10-25. Washington, D.C.: February 5, 2010.

Troubled Asset Relief Program: The U.S. Government Role as Shareholder in AIG, Citigroup, Chrysler, and General Motors and Preliminary Views on its Investment Management Activities. GAO-10-325T. Washington, D.C.: December 16, 2009.

Financial Audit: Office of Financial Stability (Troubled Asset Relief Program) Fiscal Year 2009 Financial Statements. GAO-10-301. Washington, D.C.: December 9, 2009.

Troubled Asset Relief Program: Continued Stewardship Needed as Treasury Develops Strategies for Monitoring and Divesting Financial Interests in Chrysler and GM. GAO-10-151. Washington, D.C.: November 2, 2009.

Troubled Asset Relief Program: Capital Purchase Program Transactions for October 28, 2008, through September 25, 2009, and Information on Financial Agency Agreements, Contracts, Blanket Purchase Agreements, and Interagency Agreements Awarded as of September 18, 2009. GAO-10-24SP. Washington, D.C.: October 8, 2009.

Troubled Asset Relief Program: One Year Later, Actions Are Needed to Address Remaining Transparency and Accountability Challenges. GAO-10-16. Washington, D.C.: October 8, 2009.
Debt Management: Treasury Inflation Protected Securities Should Play a Heightened Role in Addressing Debt Management Challenges. GAO-09-932. Washington, D.C.: September 29, 2009.

Troubled Asset Relief Program: Status of Efforts to Address Transparency and Accountability Issues. GAO-09-1048T. Washington, D.C.: September 24, 2009.

Troubled Asset Relief Program: Status of Government Assistance Provided to AIG. GAO-09-975. Washington, D.C.: September 21, 2009.

Troubled Asset Relief Program: Treasury Actions Needed to Make the Home Affordable Modification Program More Transparent and Accountable. GAO-09-837. Washington, D.C.: July 23, 2009.

Troubled Asset Relief Program: Status of Efforts to Address Transparency and Accountability Issues. GAO-09-920T. Washington, D.C.: July 22, 2009.

Troubled Asset Relief Program: Status of Participants' Dividend Payments and Repurchases of Preferred Stock and Warrants. GAO-09-889T. Washington, D.C.: July 9, 2009.

Troubled Asset Relief Program: Capital Purchase Program Transactions for October 28, 2008, through May 29, 2009, and Information on Financial Agency Agreements, Contracts, Blanket Purchase Agreements, and Interagency Agreements Awarded as of June 1, 2009. GAO-09-707SP. Washington, D.C.: June 17, 2009.

Troubled Asset Relief Program: June 2009 Status of Efforts to Address Transparency and Accountability Issues. GAO-09-658. Washington, D.C.: June 17, 2009.

Auto Industry: Summary of Government Efforts and Automakers' Restructuring to Date. GAO-09-553. Washington, D.C.: April 23, 2009.

Troubled Asset Relief Program: March 2009 Status of Efforts to Address Transparency and Accountability Issues. GAO-09-504. Washington, D.C.: March 31, 2009.

Troubled Asset Relief Program: Capital Purchase Program Transactions for the Period October 28, 2008 through March 20, 2009 and Information on Financial Agency Agreements, Contracts, and Blanket Purchase Agreements Awarded as of March 13, 2009. GAO-09-522SP. Washington, D.C.: March 31, 2009.

Troubled Asset Relief Program: March 2009 Status of Efforts to Address Transparency and Accountability Issues. GAO-09-539T. Washington, D.C.: March 31, 2009.

Troubled Asset Relief Program: Status of Efforts to Address Transparency and Accountability Issues. GAO-09-484T. Washington, D.C.: March 19, 2009.

Federal Financial Assistance: Preliminary Observations on Assistance Provided to AIG. GAO-09-490T. Washington, D.C.: March 18, 2009.

Troubled Asset Relief Program: Status of Efforts to Address Transparency and Accountability Issues. GAO-09-474T. Washington, D.C.: March 11, 2009.

Troubled Asset Relief Program: Status of Efforts to Address Transparency and Accountability Issues. GAO-09-417T. Washington, D.C.: February 24, 2009.

Troubled Asset Relief Program: Status of Efforts to Address Transparency and Accountability Issues. GAO-09-359T. Washington, D.C.: February 5, 2009.

Troubled Asset Relief Program: Status of Efforts to Address Transparency and Accountability Issues. GAO-09-296. Washington, D.C.: January 30, 2009.

Troubled Asset Relief Program: Additional Actions Needed to Better Ensure Integrity, Accountability, and Transparency. GAO-09-266T. Washington, D.C.: December 10, 2008.

Auto Industry: A Framework for Considering Federal Financial Assistance. GAO-09-247T. Washington, D.C.: December 5, 2008.

Auto Industry: A Framework for Considering Federal Financial Assistance. GAO-09-242T. Washington, D.C.: December 4, 2008.

Troubled Asset Relief Program: Status of Efforts to Address Defaults and Foreclosures on Home Mortgages. GAO-09-231T. Washington, D.C.: December 4, 2008.
Troubled Asset Relief Program: Additional Actions Needed to Better Ensure Integrity, Accountability, and Transparency. GAO-09-161. Washington, D.C.: December 2, 2008.
The Emergency Economic Stabilization Act of 2008 authorized Treasury to create TARP, a $700 billion program designed to restore liquidity and stability to the financial system and to preserve homeownership by assisting borrowers struggling to make their mortgage payments. The act also required that GAO report every 60 days on TARP activities in the financial and mortgage sectors. This report examines the condition and status of (1) nonmortgage-related TARP programs and (2) TARP-funded mortgage programs and Treasury's efforts to better ensure that servicers are implementing, as intended, two new requirements designed to improve interactions with borrowers (the MHA single point of contact and resolution of escalated cases requirements). To do this work, GAO analyzed audited financial data for various TARP programs; reviewed documentation such as program terms and agency reports on TARP programs; and interviewed Office of Financial Stability officials. Treasury generally agreed with the findings. Treasury, Ally Financial, and General Motors provided technical comments that GAO incorporated, as appropriate. As of September 30, 2012, the Department of the Treasury (Treasury) was managing assets totaling $63.2 billion in nonmortgage-related Troubled Asset Relief Program (TARP) programs. As of this date, Treasury had exited 4 of the 10 nonmortgage-related programs, and in December 2012 Treasury announced the exit from a fifth program, the American International Group (AIG) Investment Program. Exactly when Treasury will exit the remaining five programs remains uncertain. Treasury has identified several factors that will affect its decisions for the remaining programs, among them the Capital Purchase Program (CPP, created to provide capital to financial institutions), for which the factors include the financial condition of the participating institutions and the success of auctions; the Community Development Capital Initiative (CDCI, created to provide capital to credit unions and financial institutions in underserved communities), which Treasury has not yet decided to exit, for which the factors include the financial condition of the participating institutions and the rate at which the institutions repay Treasury; and the Automotive Industry Financing Program (AIFP, created to prevent a significant disruption of the American automotive industry). Some programs, such as CPP, have yielded returns that exceed the original investments. Others, such as CDCI and AIFP, have not. Unlike the nonmortgage-related TARP programs, TARP-funded mortgage programs, which focus on mitigating foreclosures, are ongoing, and Treasury's oversight of new requirements designed to improve servicers' interactions with borrowers showed both challenges and improvements. Treasury allocated $45.6 billion in TARP funds to three programs, including Making Home Affordable (MHA), but more than $40 billion of the funding has not yet been disbursed, and the programs have not reached the expected number of borrowers. The centerpiece of MHA is the Home Affordable Modification Program, which has provided about 1.1 million permanent modifications to borrowers. To help ensure that homeowners receive appropriate assistance from servicers under this and other MHA programs, since September 2011 Treasury has required servicers to identify a "relationship manager" to serve as the homeowner's single point of contact throughout a delinquency or imminent default resolution process. GAO found that Treasury's initial reviews of servicers' implementation of this requirement had identified some inconsistencies.
However, oversight of a second requirement designed to improve the resolution of borrower inquiries and disputes (escalated cases) showed that the nine largest servicers had met the performance target. Treasury officials said that the MHA program administrator, Fannie Mae, handled oversight of the escalation process and the vendors that support it, in keeping with Treasury's guidelines.
The Weed and Seed Program is a DOJ discretionary grant program that provides funding to community grantees to help prevent and control crime and improve the quality of life in targeted high-crime neighborhoods across the country. It is a joint federal, state, and local program for coordinated law enforcement and neighborhood reinvestment. Program funding is to support Weed and Seed grantee neighborhood sites and to provide training and technical assistance. The Weed and Seed Program has grown dramatically since it began in fiscal year 1991 with three pilot sites and a relatively small investment of federal resources. For example, between fiscal years 1995 and 1998, the number of Weed and Seed sites increased from 36 to 177, while the total annual program budget increased (in constant 1998 dollars) from about $34 million to $43 million. In addition, during the same time period, the average grant awarded per site decreased (in constant 1998 dollars) from about $786,000 to $260,000. In fiscal year 1999, with a budget of $49 million, DOJ plans to award grants to about 200 Weed and Seed sites. See appendix I for a map showing the locations and numbers of Weed and Seed sites funded in fiscal year 1998. EOWS is responsible for the national management and administration of the Weed and Seed Program, including developing policy and providing federal guidance and oversight. EOWS currently administers the Weed and Seed Program with a staff of 4 management officials, 12 grant monitors, 7 support staff, 2 detailees, 3 contractors, and 4 interns. Before interested communities can apply for a Weed and Seed grant, they must first be approved for official recognition by EOWS. Official recognition requires the U.S. Attorney in the area where the Weed and Seed site is to be located to organize a local steering committee. The steering committee, which can be made up of various federal, state, and local representatives, including residents, is responsible for local administration of the program. For official recognition, a site is also required to develop a management plan, engage residents and other partners in its activities, and develop a comprehensive strategy to weed out crime and gang activity and to seed the area with social services, economic services, and economic revitalization. The four required elements of the Weed and Seed Program are (1) law enforcement; (2) community policing; (3) crime and substance abuse prevention, intervention, and treatment; and (4) neighborhood restoration. According to EOWS, law enforcement should attempt to eliminate the most violent offenders by coordinating and integrating the efforts of federal, state, and local law enforcement agencies in targeted high-crime neighborhoods. The objective of community policing is to raise the level of citizen and community involvement in crime prevention and intervention activities. Crime and substance abuse prevention, intervention, and treatment should include youth services, school programs, community and social programs, and support groups. Finally, neighborhood restoration should focus on distressed neighborhoods through economic and housing development. Weed and Seed sites fund a variety of law enforcement and community activities. For example, law enforcement-funded activities ranged from participation in a multijurisdictional, interagency, antidrug task force to conducting bike and foot patrols in the community.
To assess how EOWS manages the Weed and Seed Program, we reviewed (1) the criteria used to determine which new and existing sites should be qualified for funding and (2) the policies and guidance that EOWS provides to applicants. To gather this information, we interviewed officials from DOJ and EOWS and reviewed pertinent documents, including guidance set forth in the Weed and Seed Program Implementation Manual, official recognition and grant applications, and budget reports. In addition, we judgmentally selected 12 of 70 fiscal year 1999 official recognition files for review. These 12 files included 3 files from each of the 4 categories that EOWS used in making its official recognition determinations. Further, we reviewed the fiscal year 1999 qualification funding decisions for the 177 sites that were in existence in fiscal year 1998. To assess how EOWS monitors grant use, we reviewed EOWS program grant guidance, the EOWS monitoring guide to be used by grant monitors when conducting site visits, and the grant files for the five Weed and Seed sites that we visited: Atlanta, GA; Dyersburg, TN; Philadelphia, PA; San Diego, CA; and Woburn, MA. We judgmentally selected these 5 sites from the 177 sites funded in fiscal year 1998 (1) to obtain a mix of geographic locations, populations, and lengths of time in existence and (2) on the basis of our discussions with EOWS management. These locations were not selected to be representative of all Weed and Seed sites. We also reviewed selected site visit monitoring reports prepared by grant monitors for these sites and quarterly financial status reports and biannual progress reports submitted in fiscal year 1998. We interviewed EOWS management officials, grant monitors, and coordinators at these five sites regarding procedures used for monitoring Weed and Seed sites. To assess how EOWS determines when sites have become self-sustaining and how EOWS and selected sites are measuring the success of their Weed and Seed activities, we performed site visits at the five Weed and Seed locations previously cited. We also surveyed, by mail, the 87 sites that had been awarded Weed and Seed grants since September 30, 1996. We received usable responses from 74 of the 87 sites, or 85 percent. Our questionnaire asked Weed and Seed site coordinators to provide current information, by January 29, 1999, about their sites, such as (1) actions taken to become self-sustaining, (2) partnerships or cooperative arrangements established with other entities, and (3) performance indicators used to measure the sites' success. See appendix II for a copy of the questionnaire, including responses. In developing the questionnaire, we asked EOWS management officials to review several drafts of the document. In addition, we pretested the questionnaire by telephone with several Weed and Seed site coordinators. We conducted the survey from January to April 1999. To determine the performance indicators currently in place and their adequacy in measuring program success, we interviewed officials from EOWS and the five sites that we visited. We also reviewed pertinent documents, including EOWS policies and guidance, grant applications, and data collected pursuant to the Government Performance and Results Act of 1993 (GPRA) and from our survey results. We requested comments on a draft of this report from the Attorney General of the United States and the Director of the Executive Office for Weed and Seed.
On June 23, 1999, we met with the Deputy Assistant Attorney General and Comptroller, Office of Justice Programs (OJP), and the Director, EOWS, and members of his staff to discuss the draft report. The Assistant Attorney General provided written comments on the draft report on July 1, 1999, which are discussed near the end of this letter and reprinted in appendix IV. We did our audit work between October 1998 and May 1999 in accordance with generally accepted government auditing standards.

EOWS does not have an adequate internal control requiring that new and existing site qualification-for-funding decisions always be fully documented. Because of this, EOWS cannot ensure that it is making the best allocation of available funds when it makes these decisions. The Comptroller General's guidance on internal controls in the federal government, Standards for Internal Controls in the Federal Government, requires that internal control systems and all transactions and significant events be clearly documented and that the documentation be readily available for examination. Documentation of transactions or other significant events should be complete and accurate and should facilitate tracing the transaction or event and related information from before it occurs, while it is in process, to after it is completed.

EOWS' new site funding qualification decisions were not always fully documented. EOWS management officials were able to provide us with some documentation for 12 of the 70 fiscal year 1999 new site funding qualification decisions we reviewed. However, for 5 of these 12 decisions, we identified inconsistencies between the documentation and the decisions. The available documentation was insufficient for us to determine how these inconsistencies were reconciled. Therefore, we could not determine the basis and rationale for these five decisions.

The first step in the new site funding qualification process is for EOWS to officially recognize a site's eligibility to apply for formal involvement in the Weed and Seed Program. According to EOWS management officials, in fiscal year 1999 they created a new official recognition process, which evolved from approving all applicants to a competitive process under which not all applicants would be approved. As part of this new process, EOWS management officials said they were to consider recommendations made by external consultants and EOWS grant monitors. They also were to consider the number of sites already funded within the U.S. Attorney's district, the extent of support provided by that U.S. Attorney's office to those sites, and insights obtained from the U.S. Attorneys for applications that met or almost met all official recognition requirements. For fiscal year 1999, EOWS received applications for official recognition from 70 potential sites, and it approved 27 sites. The 27 sites were invited to apply for fiscal year 1999 funding contingent upon the completion of all official recognition requirements.

We reviewed 12 of the 70 fiscal year 1999 official recognition files, and, for 5 of the site qualification decisions, we identified inconsistencies among the external consultant recommendations, grant monitor recommendations, and EOWS management decisions. The available documentation was insufficient for us to determine how these inconsistencies were reconciled. Therefore, we could not determine the basis and rationale for the decisions.
For example, documentation for two of the files showed that the external consultants and EOWS grant monitors had recommended that the sites not be officially recognized, but EOWS management had approved the sites. According to EOWS management officials, these approvals were granted on the basis of additional information provided by the local U.S. Attorneys; however, this additional information was not documented by EOWS.

EOWS did not always fully document how it made its decisions on whether to qualify the 177 existing sites for continued funding and special project funding. Although EOWS officials could provide us with documentation for some of the information considered for existing sites, such as unspent grant award balances and compliance with reporting requirements, this documentation was not sufficient for us to determine the basis and rationale for the decisions to qualify 164 of the 177 existing sites for continued funding. EOWS, however, documented the basis and rationale for the 13 sites that it decided to disqualify for continued funding. In addition, EOWS could not provide us with documentation regarding how it made its special project funding qualification decisions.

Since fiscal year 1991, the Weed and Seed Program's total annual budget has increased (in constant 1998 dollars) from about $589,000 to about $49 million. In addition, the number of Weed and Seed Program grant awards has grown dramatically since fiscal year 1995, while the average grant has decreased substantially. For example, in fiscal year 1995, EOWS awarded grants to 36 sites, with an average grant of about $786,000 (in constant 1998 dollars). In fiscal year 1998, however, EOWS awarded grants to 177 sites, with an average grant award of $260,000. See table 1 for fiscal years 1991-99 data on the Weed and Seed Program, including EOWS budget and average site funding history.

For fiscal year 1999, EOWS management officials decided for the first time not to qualify for funding all existing sites that met grant requirements. In fiscal year 1999, EOWS decided to disqualify for funding 13 of the 177 sites that were funded in fiscal year 1998. EOWS officials developed a site analysis matrix to assist them in deciding which sites to qualify for funding. This matrix contained information about all 177 sites, such as unspent grant award balances over $350,000 and each site's compliance with DOJ's reporting requirements. According to EOWS management officials, in making their final decisions they also considered the recommendations made by EOWS grant monitors and their own personal knowledge of the sites.

For the 13 sites that were disqualified for funding in fiscal year 1999, EOWS documented the basis and rationale for these decisions by sending a letter to each site describing the reasons for its decision. However, the available documentation for the remaining 164 sites was insufficient for us to determine the basis and rationale for these qualification decisions. For example, in fiscal year 1999, one site was qualified for funding even though it had a grant award balance of over $350,000 and the EOWS grant monitor had recommended that the site not receive funding. Two other sites were also qualified for funding for fiscal year 1999 even though they had grant award balances over $350,000 and had not filed all of the required financial and progress reports.
Further, the EOWS grant monitor recommended that one of these sites not receive fiscal year 1999 funding due to its delays in spending its first two awards. According to his report, "the grantee is so far behind that a year without funding will allow them to catch up and be on track again." EOWS management officials told us that their decisions to qualify these sites for funding were based on their personal knowledge of these sites' activities. However, we were not able to determine the basis and rationale for these decisions because they were not documented in the information provided to us by EOWS.

EOWS has also qualified existing sites to receive funding for special projects. For example, in fiscal year 1998, EOWS qualified sites for funding of $1,043,334 for the Mobile Community Outreach Police Stations (MCOPS); $1,000,000 for the Kids Safe Program; and $539,797 for Kids House. Since written procedures for qualifying sites for special projects had not been developed and the basis and rationale for these decisions had not been documented, we could not determine how these decisions were made. EOWS management officials told us that they made these decisions on the basis of what they perceived as the needs of particular Weed and Seed sites after contacting the sites and speaking with EOWS grant monitors. See table 2 for a summary of EOWS' funding allocations for fiscal year 1998.

EOWS did not always ensure that local Weed and Seed sites complied with critical grant requirements. For example, on the basis of our review of the site analysis matrix provided to us by EOWS, almost one-half of the 177 existing sites that were funded in fiscal year 1998 had not submitted all of the required progress reports. In addition, EOWS grant monitors did not always document the results of their site visits as required by EOWS guidance.

EOWS requires semiannual progress reports describing site activities during the reporting period and the status or accomplishment of program objectives. According to EOWS officials, progress reports are an important tool to help EOWS management officials and grant monitors determine how sites are meeting program objectives and to assist them in making future grant qualification decisions. Our review of the EOWS site analysis matrix showed that as of December 1998, 80, or 45 percent, of the 177 sites had not submitted these required progress reports. In addition, EOWS requires the sites to provide program data, such as crime statistics and safe haven program attendance, to assess program results. Our review of the EOWS site analysis matrix showed that as of December 1998, 20, or 11 percent, of the 177 sites had not submitted the required data.

Further, according to the EOWS monitoring guide, grant monitors are to conduct site visits every 18 months and monitor Weed and Seed sites' compliance with grant requirements through desk reviews, technical assistance, and telephone contacts on a continuing basis. The guide instructs grant monitors to prepare a site visit report. According to EOWS officials, documentation of these visits is an important tool for EOWS grant monitors to convey to EOWS management officials how well sites are complying with grant requirements and for EOWS to use in making existing site funding qualification decisions. According to EOWS management officials, the grant monitors have not always documented their site visits due to the large number of sites they are responsible for monitoring—as many as 23 sites per monitor.
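The site analysis matrix EOWS used for these compliance checks lends itself to a simple screening illustration. The sketch below is hypothetical: the report identifies only two of the matrix's data elements (unspent balances over $350,000 and reporting compliance), and the record layout, sample values, and flagging logic here are our own assumptions, not EOWS' actual matrix.

    # Hypothetical screening pass over a site analysis matrix.
    # Field names and sample records are invented for illustration.
    BALANCE_THRESHOLD = 350_000  # the $350,000 unspent-balance criterion noted above

    sites = [
        {"name": "Site A", "unspent_balance": 410_000, "reports_submitted": False},
        {"name": "Site B", "unspent_balance": 120_000, "reports_submitted": True},
    ]

    def screening_concerns(site):
        """Return the concerns such a matrix would surface for one site."""
        concerns = []
        if site["unspent_balance"] > BALANCE_THRESHOLD:
            concerns.append("unspent grant balance over $350,000")
        if not site["reports_submitted"]:
            concerns.append("required progress reports not submitted")
        return concerns

    for site in sites:
        print(site["name"], screening_concerns(site))

Of course, a screen like this only flags sites for review; as the report notes, EOWS officials also weighed grant monitor recommendations and personal knowledge of the sites, and it was that final step that went undocumented.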
To reduce this workload, EOWS management officials said that they hired four additional grant monitors in fiscal year 1999, which should decrease the number of sites that each grant monitor is responsible for monitoring.

An important goal of the Weed and Seed Program is the self-sustainment of local Weed and Seed sites through the leveraging of additional resources from non-EOWS sources. However, EOWS has not developed criteria to determine (1) when sites have become self-sustaining and (2) when to reduce or withdraw Weed and Seed grant funds. Although many grantees have received Weed and Seed funding for several years, EOWS has not reduced or withdrawn any Weed and Seed grantee's funds because of the progress its site had made toward the goal of becoming self-sustaining. Although EOWS does not know what progress sites have made toward self-sustainment, most of the sites we visited and surveyed reported making efforts toward that goal.

While self-sustainment is an important goal of the Weed and Seed Program, EOWS has not developed specific criteria to determine when sites have become self-sustaining or determined the progress sites have made toward achieving this goal. The EOWS Executive Director and EOWS documents stated that a critical goal of the program is for sites to become self-sustaining by leveraging Weed and Seed grant funds with resources from other public and private sources. In 1995, the DOJ Inspector General reported that the Weed and Seed Program was founded on the premise that federal funding would continue for a finite period, after which a Weed and Seed site would be self-sustaining.

We identified partnerships at each of the five sites we visited that resulted in the leveraging of additional resources for these sites. For example, at one site, the city police department and the city school system each provided a staff member to fill Weed and Seed administrative positions as a part of their other duties so that Weed and Seed funds could be used for purposes other than funding administrative positions. At another site, a local business donated computers to be used in computer classes for children.

Most of the sites that responded to our survey indicated that they had developed partnerships and arrangements with other groups to move toward the goal of becoming self-sustaining. Of the 74 sites responding to our survey, 72 indicated that they had developed partnerships or cooperative arrangements with other government or nongovernment groups. For example, 59 sites responded that they had developed partnerships with local government agencies, while 54 indicated that they had developed such arrangements with nonprofit agencies. Some respondents reported establishing partnerships with various groups, such as the Department of Housing and Urban Development, a state public health department, city parks and recreation departments, and local businesses.

EOWS does not have criteria for determining whether, or the extent to which, a site has become self-sustaining and whether funds could be reduced or withdrawn. EOWS management officials said that, to date, no site's funding has been reduced or withdrawn as a result of the site's efforts to become self-sustaining. In addition, these officials said that they were reluctant to reduce or withdraw funding because of a concern that sites might not continue to implement the Weed and Seed Program.
Although EOWS has not developed criteria to reduce or withdraw sites' funding if they were to become self-sustaining, EOWS management officials said that, beginning in 2000, they would require sites to reapply for official recognition every 5 years and would encourage them to expand to additional sites. According to EOWS management officials, this new policy, which was adopted during the course of our review, is intended to determine whether sites still need funding. To obtain official recognition, sites must describe intended partnerships with other federal, state, and local governments and private sector agencies to leverage additional resources. For example, a site would be required to stipulate the level of resources committed by its partners. However, without criteria to determine when sites become self-sustaining, EOWS does not have a basis or rationale for determining when to reduce or withdraw sites' funds.

EOWS has developed various performance indicators in an attempt to respond to GPRA. GPRA seeks to shift the focus of federal management and decisionmaking away from the activities performed and toward the results, or outcomes, of those activities. However, the indicators EOWS used to measure the success of the Weed and Seed Program still generally track activities rather than results or outcomes. Weed and Seed sites also used other indicators to measure the results of their individual programs, but these indicators also primarily measured activities, not outcomes. While the performance indicators were generally not sufficient to adequately measure program results, most of the local officials and residents with whom we spoke during our site visits were very satisfied with the activities funded by the local Weed and Seed programs.

In an attempt to measure the results of sites' weeding efforts, EOWS tracks law enforcement information, such as community-policing activities. EOWS requires each site to have a community-policing component to its program. Community policing involves law enforcement working closely with community residents to develop solutions to violent and drug-related crime and serves as a stimulus for community mobilization. Before 1999, EOWS tracked officer duty time spent in the Weed and Seed area; the percentage of police officer duty hours funded by Weed and Seed; certain serious crimes, such as violent and property crimes; and the number of arrests. Recently, EOWS management officials decided to eliminate the reporting of all of these crimes except homicides, because they believed that doing so would improve the accuracy and reliability of the data reported by reducing the amount of data collected by Weed and Seed sites. In addition, EOWS currently requires sites to report whether they have (1) foot patrols, (2) bike patrols, (3) police substations, (4) crime watches, and (5) police participation in community meetings. Although these indicators are useful in tracking the types of weeding activities engaged in at the local sites, they generally do not measure outcomes.

To measure the results of seeding activities, EOWS tracks safe haven program attendance. Before 1999, EOWS tracked the total number of people who attended the safe haven program over a 6-month period, but EOWS recently reduced the tracking period to 1 week a year. EOWS management officials said that they made the above changes to better measure the results of both weeding and seeding activities. However, these indicators still generally measure activities rather than results.
For example, EOWS tracks the number of people who attended safe havens rather than assessing program results from these safe havens, such as attendees' academic improvement after completing a tutoring program provided at the safe haven.

The responses to our survey also show that the performance measures used by individual sites generally tracked activities, not results. While most sites reported that they have their own measures of success, these measures varied widely, from counting the number of newspaper articles about their Weed and Seed site to recording the number of drug-related cases prosecuted. The three most commonly reported measures of success among survey respondents were crime statistics, the number of participants in Weed and Seed-sponsored activities, and the level of community involvement. Further, 12 sites conducted surveys to gain the perspective of community residents, and 4 sites reported on recidivism rates.

Using crime statistics and recidivism rates as performance measures could be useful. However, these measures can also present some methodological challenges because it is difficult to draw a direct causal link between crime or recidivism rates and Weed and Seed Program activities. For example, other explanations for crime rate fluctuations, such as economic trends and other law enforcement initiatives, could also be responsible for the observed outcomes. Therefore, if these measures are used, any analysis that attempts to draw the causal link should attempt to control for alternative explanations, as illustrated in the sketch below. From the information provided to us by Weed and Seed sites, it remains unclear whether sites that measured crime and recidivism rates controlled for other factors that may have contributed to changes in these rates.

Abt Associates Inc. recently completed a study for DOJ on the effectiveness of the Weed and Seed Program. This study involved eight Weed and Seed Program sites and, among other activities, attempted to measure crime trends at each site. Overall, the study indicated mixed results across the sites: there were significant favorable effects in the key outcome measures used in the Abt study for some cities and some time periods, while the results on outcome measures in other cities were not as favorable. The study acknowledged the difficulty in drawing a causal link and noted that the evidence is modest in terms of statistical significance.

Even though the performance indicators were not sufficient to adequately measure program results, most of the local officials with whom we spoke during our site visits were very satisfied with the activities funded by the local Weed and Seed programs. These officials, such as mayors, city administrators, U.S. Attorneys, and high-ranking police officers, noted that the key ingredient of the Weed and Seed programs' success was the commitment of the mayors' and U.S. Attorneys' offices and civic and business leaders.

Local sites funded a wide variety of law enforcement and community activities to implement the Weed and Seed strategy. Law enforcement-funded activities ranged from participation in a multijurisdictional, interagency, violent crime task force to community bike and foot patrols. Community-funded activities ranged from sponsoring a Black History Month program at a local high school to providing life-skills counseling to at-risk youths. During our visits to selected Weed and Seed sites, we observed many different types of activities.
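One common way to control for alternative explanations of the kind discussed above is to regress neighborhood crime rates on a program indicator plus covariates for the competing factors, such as economic conditions and other enforcement initiatives. The sketch below is purely illustrative; it is not a method that EOWS, the sites, or Abt Associates are described as using, and all data and variable names are synthetic.

    # Illustrative regression on synthetic data: estimating a program
    # "effect" on crime rates while adjusting for two measured confounders.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 200
    program = rng.integers(0, 2, n)           # 1 = Weed and Seed neighborhood (synthetic)
    unemployment = rng.normal(6.0, 1.5, n)    # confounder: local economic conditions
    other_initiative = rng.integers(0, 2, n)  # confounder: other policing initiatives
    crime_rate = (50 - 4 * program + 3 * unemployment
                  - 5 * other_initiative + rng.normal(0, 5, n))

    X = sm.add_constant(np.column_stack([program, unemployment, other_initiative]))
    fit = sm.OLS(crime_rate, X).fit()
    print(fit.params)  # the program coefficient, net of the two controls

Even a regression like this adjusts only for measured confounders; unmeasured differences between target and comparison neighborhoods would still cloud the causal link, which is consistent with the modest statistical evidence the Abt study reported.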
The activities we observed ranged from community police substations or ministations to court-ordered community service for youths. Appendix III describes our site visits and illustrates the many types of activities funded at these sites.

Good internal controls are essential to achieving full accountability for the resources made available for the Weed and Seed Program. However, EOWS lacks an adequate internal control requiring that the basis and rationale for new and existing Weed and Seed site qualification-for-funding decisions always be fully documented. In addition, EOWS has not always ensured, through its grant monitoring process, that site progress reports—a grant requirement—were submitted or that grant monitors documented their site visits.

Through our survey and site visits, we identified some leveraging efforts made by Weed and Seed sites. Many of these efforts appeared to be leading toward the self-sustainment of some Weed and Seed sites. However, while the objective of sites' becoming self-sustaining is a critical program goal, EOWS had yet to establish criteria for determining when sites should be classified as self-sustaining and when to reduce or withdraw funding.

Although current performance measures address a variety of activities taking place at Weed and Seed sites, these measures generally are not adequate to judge program success. While EOWS has made some changes to the way that it measures program effectiveness, these indicators still generally track activities, not program outcomes. We recognize that it is difficult to precisely measure the results of this type of community-based program or strategy. However, better performance indicators, as well as other indicators, such as compliance with grant requirements, would help EOWS make more informed program decisions, such as whether to continue funding existing sites.

We recommend that the Attorney General of the United States direct the Director of the Executive Office for Weed and Seed to (1) develop an adequate internal control to ensure that the basis and rationale for new and existing site qualification-for-funding decisions are always fully documented; (2) improve program monitoring to ensure that sites meet the grant requirement of submitting progress reports and that EOWS site visits are documented; (3) develop criteria for determining when sites are self-sustaining and when to reduce or withdraw program funding; and (4) develop additional performance measures that track program outcomes.

DOJ generally agreed with most of the recommendations presented in the report and offered additional information to explain the status of the current situation, as well as additional actions it plans to take. DOJ also provided technical comments that we have incorporated as appropriate.

DOJ agreed with our recommendation for an adequate internal control to ensure that the basis and rationale for new and existing site qualification decisions are always fully documented. DOJ provided some additional information on the internal controls for OJP's formal grant award processes. For example, it described processes currently in place to ensure that grants are awarded in accordance with Office of Management and Budget and OJP policies. While this information provided a framework for OJP financial controls, it did not specifically relate to our recommendation. Our internal control review focused on EOWS' decisions for qualifying new and existing sites for funding.
DOJ agreed with our recommendation to improve program monitoring, noting that it has a chronic problem of grantees not submitting programmatic progress reports in a timely manner. To address this problem, EOWS is proposing to suspend funding for grantees that fail to submit progress reports in a timely manner. Because this new proposal has yet to be implemented by EOWS, we believe our recommendation to ensure that sites meet the grant requirement of submitting timely progress reports is appropriate. In addition, EOWS acknowledged the need to document all monitoring visits. After receiving our draft report, EOWS officials told us they had taken corrective action and that all monitoring reports were now up to date. However, there is no assurance that a process and procedures are in place to ensure that monitoring visits will always be documented, and we continue to believe that our recommendation is needed.

DOJ disagreed with our recommendation on self-sustainability, stating that developing criteria is problematic. DOJ also commented that the draft report was incorrect in stating that no site's funding had been reduced or withdrawn as a result of the site's efforts to become self-sustaining and that we used the terms "site" and "grantee" incorrectly. DOJ maintains that, as one neighborhood reached a point where it could sustain its Weed and Seed crime-reduction efforts, funds and resources were shifted by the grantee to other neighborhoods.

With respect to self-sustainability, there is a distinction to be drawn between DOJ's comments and the evidence we gathered from interviews with program officials and our own observations. We acknowledge that some grantee funds and resources have been shifted to other neighborhoods within the grantee's location. However, it is not clear whether this occurred because the programs became less reliant on Weed and Seed grants or for other reasons. EOWS management and local program officials told us that funding had been reduced at some sites to fund activities in other neighborhoods, but not because the sites demonstrated that they had successfully reached self-sustainability. Our limited site visits confirmed this at the locations we selected for review.

In an attempt to create criteria for achieving self-sustainability, EOWS adopted a 5-year rule under which it can discontinue qualifying sites for continued program funding unless the sites expand to an additional neighborhood site. EOWS expressed the opinion that this rule has created an expectation of self-sustainability for current sites, since some funds are to be shifted from the current neighborhood site to the expansion site. We continue to believe that EOWS needs to develop better criteria for determining when sites become self-sustaining and when to reduce or withdraw program funding. Under EOWS' current 5-year rule, even if some resources are shifted to an expansion site, there still may be substantial Weed and Seed investment at the original site, and EOWS would have no way of knowing whether the original site is self-sustaining. Withdrawing funding after 5 years of federal investment without criteria could be arbitrary. Some sites may become self-sustaining sooner than 5 years—resulting in a missed opportunity to fund other Weed and Seed sites—while other sites may need more than 5 years to achieve self-sustainability.
While it may be challenging to develop criteria for determining when a site becomes self-sustaining, we believe EOWS should work toward this goal since it is a central and fundamental tenet of the Weed and Seed Program. With respect to the distinction between sites and grantees mentioned in EOWS' comments, we have modified the report to clarify when we are referring to a grantee or a site.

DOJ officials agreed with our recommendation to develop performance measures that track program outcomes. However, they noted that EOWS already has one performance measure in place—homicides—that it uses to track program outcomes. Consequently, they believed that our recommendation should be modified to state that EOWS should develop and use additional performance measures. We recognize that EOWS has adopted this outcome-oriented performance measure and have modified our recommendation to require EOWS to develop additional measures.

DOJ also expressed concern that we did not include the results of a recently completed national evaluation of the Weed and Seed Program by Abt Associates Inc. As noted in Abt's report, this evaluation involved case studies of eight Weed and Seed sites. Among other activities, each case study included two principal sources of empirical data: (1) analysis of crime trends at each site and (2) surveys of site residents, one conducted in 1995 and the other in 1997. Overall, the report indicated mixed results across the sites: there were significant favorable effects in key outcome measures for some cities and some time periods, while the results on outcome measures in other cities were not as favorable. The report noted that the evidence is modest in terms of statistical significance.

Finally, DOJ stated that our report did not provide adequate insight into the findings of our site visits and mail surveys. However, in our results in brief section, we note the satisfaction that most local officials we spoke with had with the activities funded by Weed and Seed. These results are discussed in greater detail in the body of this report. Our survey results, in their entirety, are included as appendix II. In addition, the details of each of our five site visits are included in appendix III.

We are sending copies of this report to the Honorable Strom Thurmond, Chairman, and the Honorable Charles Schumer, Ranking Minority Member, Senate Subcommittee on Criminal Justice Oversight. We are also sending copies of this report to the Honorable Harold Rogers, Chairman, and the Honorable José E. Serrano, Ranking Minority Member, House Subcommittee on Commerce, Justice, State, the Judiciary, and Related Agencies; the Honorable Bill McCollum, Chairman, and the Honorable Robert C. Scott, Ranking Minority Member, House Subcommittee on Crime; and the Honorable Janet Reno, Attorney General. We will make copies available to others upon request. The major contributors to this report are acknowledged in appendix V. If you or your staff have any questions about this report, please call me on (202) 512-8777.

Atlanta, GA, has been a Weed and Seed site since 1992. Atlanta's target area includes two public housing developments, Thomasville Heights and Capitol Homes; their immediate surrounding areas; and a third community, Mechanicsville. In fiscal year 1998, the total population of the two public housing communities was 2,150, mainly African-American females with a median age of 23 to 28 years.
Ten percent of the total population was on felony probation, and an additional 150 adults were under parole supervision. Mechanicsville was characterized as single-family homes surrounding a public housing community.

Atlanta's Weed and Seed goals are to (1) reduce drug sales, drug trafficking activities, and drug-related violent crimes; (2) develop conflict resolution and prevention resources to reduce the incidence of violence in target communities; (3) provide creative options for young people to allow them alternatives to drinking and using drugs; (4) increase public safety awareness through antivictimization techniques; and (5) strengthen relationships with the communities to increase the number of reported crimes and assist in developing intelligence information for undercover use. This project site is initiating a multiagency program to coordinate the delivery of criminal justice and social services to eliminate violent crime, drug trafficking, and drug-related crime and to provide a safe environment for law-abiding citizens to live, work, and raise a family.

Since fiscal year 1992, the Atlanta Weed and Seed program has been awarded about $3.7 million, comprising grant and asset forfeiture funds. As of December 31, 1998, the Atlanta Weed and Seed program had used about $3 million. Grant awards ranged from a high of about $754,000 in fiscal year 1993 to a low of $175,000 in fiscal year 1998. Asset forfeiture funds were awarded in 5 years and ranged from a high of about $268,000 in fiscal year 1994 to a low of about $51,000 in fiscal year 1997. See table III.1 for the funding history of the Atlanta Weed and Seed site.

Table III.1: Atlanta Weed and Seed Site's Funding History—FY 1992-99 (1998 Constant Dollars). (The fiscal year 1999 award is pending.)

Atlanta's weed effort includes the following five-phase approach to reaching program goals: (1) community policing as an overall philosophy and as an institution; (2) intelligence collection and database preparation; (3) investigation; (4) arrests, seizures, and custody; and (5) incarceration and prosecution. The seed effort seeks to develop multiagency community participation in substance abuse prevention and intervention activities. See table III.2 for examples of the types of activities funded by the Atlanta Weed and Seed program, listed by program element.

Table III.2: Examples of the Types of Activities Funded by the Atlanta Weed and Seed Program (partners listed include the U.S. Attorney's Office and the U.S. Department of the Navy (USN))
- Weed task force operations: This joint operation targets street-level dealers, gang members, and sources of supply to disrupt and dismantle drug and violent crime gangs preying on target communities by using intelligence and criminal information from sources within and outside the community. In addition, the Weed task force is to refer cases involving weapons to the U.S. Attorney's office for consideration of federal prosecution.
- DEFY: DEFY is a mentoring program adopted by the Department of Justice (DOJ) for Weed and Seed in 1996. DEFY is to be a comprehensive program that emphasizes the positive development of the mind, body, and spirit.
- Spring Break—Together Everyone Achieves More (TEAM): The Weed task force sponsored the first annual Spring Break TEAM building camp. Students from the target site spent 2 intensive days with sports figures, HUD/OIG agents, law enforcement officers, and conflict resolution advocates.
- Ballethnic: The Ballethnic dance outreach program offers prevention through the arts to students in the elementary and middle schools.
- Viewpoint, Inc.: Viewpoint, Inc., provides family/community prevention workshops to the target areas.
A residential treatment component is offered to 20 residents, with 3 months of aftercare as an integral part of the recovery process.
- Teens, Crime, and Community: The three communities completed a 12-week curriculum of Teens, Crime, and Community that was conducted by AmeriCorps students under the guidance of the Victim Witness Assistance Program. Youths then select community projects. For example, Mechanicsville youths identified the UJAMAA Cookie Corporation as their project and have purchased the equipment necessary for their business operation.

Atlanta's Weed and Seed program officials stated that an important goal for their program is to leverage additional resources from non-Executive Office for Weed and Seed (EOWS) sources to become self-sustaining. During the course of our site visit, we identified several partnerships established by the Atlanta Weed and Seed program to leverage additional resources. These cooperative arrangements involved partners such as the United States Navy and the Georgia Bureau of Investigation. Table III.3 illustrates examples of leveraging efforts that were identified through our survey and site visit.

Table III.3: Examples of Atlanta Weed and Seed Site's Leveraging Efforts (partner types range from federal agencies to private firms)
- USN: USN is to host the DEFY Summer Camp—a youth outreach program intended to promote positive life choices in 9-12 year olds through the use of role models and education.
- HUD/OIG: HUD/OIG agents work with the Weed task force to investigate crimes occurring in and around public housing developments and assist in the prosecution of individuals involved in criminal activity. Agents assist with the execution of warrants involving residents of public housing.
- Georgia Bureau of Investigation (GBI): GBI provides Drug Abuse Resistance Education (DARE) instructor training to Atlanta police officers and other law enforcement officers who are dedicated to the Atlanta Weed and Seed Project. In addition, it provides manpower support to the Atlanta Weed Task Force and shares intelligence relating to criminal activities in or affecting the Weed and Seed neighborhoods.
- Fulton County Sheriff's Department: Deputy sheriffs provide junior deputy training in Weed and Seed neighborhoods as well as at the Safe Haven Summer Program. In addition, Fulton County deputies conduct the TEAM building camp during the public school spring break.
- Viewpoint, Inc.: Viewpoint conducts community/family education and prevention workshops for the three Weed and Seed communities. In addition, Viewpoint is to provide a maximum of 20 slots for Weed and Seed residents identified as needing residential treatment at its residential care facilities.
- Pyramid Communications Systems, Inc.: Pyramid Communications Systems (in partnership with Atlanta University's Economic Development Center) assists in the development and implementation of business plans for the cookie collaborative in Mechanicsville, the concession store for Capitol Homes, and the employment placement firm in Thomasville Heights.

SITE'S PERFORMANCE MEASURES

To date, Atlanta has not developed site-specific indicators to measure the results of its program. However, officials said that under the leadership of the Mayor's office, they have developed a detailed weeding strategy that sets forth the overall goals and roles of the community, law enforcement, and prosecution and have detailed innovative ideas for consideration. Specific measures of success to be linked to these goals are under consideration.
According to the U.S. Attorney for the Northern District of Georgia, a seeding strategy has not yet been developed.

Dyersburg, TN, a small rural community in northwest Tennessee with a population of about 23,000, was officially recognized as a Weed and Seed site in February 1996 and received its first-year grant award in September 1996 (see table III.4). When the program began, two target neighborhoods were involved; the site has since expanded into four target neighborhoods. The steering committee used the following criteria to select target neighborhoods: (1) an increase in drug trafficking and the potential for street gang activity, (2) an increase in crime statistics indicating violence, (3) juvenile crime rates, (4) a lack of adequate employment opportunities, (5) truancy and school drop-out rates, and (6) the potential for residents' involvement in and commitment to the program.

Since fiscal year 1996, the Dyersburg Weed and Seed program has been awarded about $734,000, comprising grant and asset forfeiture fund awards. As of December 31, 1998, the Dyersburg Weed and Seed program had used about $563,000. Grant awards ranged from a high of $275,000 in fiscal year 1998 to a low of about $129,000 in fiscal year 1996. The Dyersburg Weed and Seed program received one asset forfeiture fund award, of about $103,000, in fiscal year 1996. See table III.4 for the funding history of the Dyersburg Weed and Seed site.

Table III.4: Dyersburg Weed and Seed Site's Funding History—FY 1996-99 (1998 Constant Dollars). (The fiscal year 1999 award is pending.)

Dyersburg provides a variety of Weed and Seed activities for children, youth, and adults at its safe haven, which is coordinated through the Dyersburg City Community Resource Center. Table III.5 shows examples of the types of activities funded by the Dyersburg Weed and Seed program, listed by program element.

Table III.5: Examples of the Types of Activities Funded by the Dyersburg Weed and Seed Program
- A system to link the communications systems of Dyer County law enforcement, fire, and ambulance services.
- Expedited court adjudication of juvenile offenders. The site reported that, with quicker adjudication, it noted a substantial decrease in the number of juvenile cases.
- A citizens' academy to familiarize residents with the police department, its personnel, its goals, and the way it operates.
- A day camp for children, including breakfast and lunch, organized games, arts, songs, and character development.
- A safe haven with a complementary after-school program designed to assist parents, churches, and public schools in enhancing the quality of life for children.
- A community summit to design and implement an economic development strategy and prepare for new economic opportunities.

Dyersburg Weed and Seed program officials told us an important goal for their program is to leverage additional resources from non-EOWS sources to become self-sustaining. During the course of our site visit, we identified several partnerships established by the Dyersburg Weed and Seed program to leverage additional resources. These cooperative arrangements involved partners such as the Bureau of Alcohol, Tobacco and Firearms (ATF) and local Dyersburg businesses. Table III.6 illustrates examples of leveraging efforts that were identified through our survey and site visit.

Table III.6: Examples of Dyersburg Weed and Seed Site's Leveraging Efforts
- The site reported that participation with partner agencies, such as ATF, has enabled more law enforcement coverage with its small police force and resulted in prosecutions and convictions of over 25 major drug dealers in northwest Tennessee.
- Another partner also participated in the above investigations.
- A partner provides a staff member at no cost to help run the safe haven program.
- A partner provides a staff member at no cost to administer the Weed and Seed program.
- A partner doubled the size of the bike patrol—the site now has a two-person bike patrol team in all four Weed and Seed target areas.
- Residents rented a house to the City of Dyersburg for 10 years at 1 dollar per year plus property tax. The house is to be used as a mini-police precinct in the target area.
- Local businesses provide in-kind donations of food and other supplies to various Weed and Seed functions, such as picnics and barbecues.

SITE'S PERFORMANCE MEASURES

Dyersburg does not use site-specific indicators to measure the results of its program. However, in response to our survey, the site coordinator reported that the site used a variety of methods to measure program success and that evaluation was a regular and ongoing part of the program. First, the local steering committee met monthly to review and evaluate the program. Second, the police chief reviewed the program and offered regular input. Third, the site coordinator and safe haven coordinator regularly reviewed activities funded or assisted by the Weed and Seed program to ensure that they were meeting program requirements. While these methods might prove useful to local officials, they do not measure outcomes or results.

Philadelphia, PA, was officially recognized as one of the original Weed and Seed sites in 1992. The Philadelphia target area is bounded on the east by Front Street, on the west by Fifth Street, on the north by Westmoreland Street, and on the south by Berks Street. In addition, the target area encompasses the Philadelphia 25th and 26th police districts. The target area has a higher proportion of the population under 18 than any other area of Philadelphia. The most prevalent illegal drugs of choice have been cocaine and heroin, and the continued focus of the Weed and Seed initiative is on both major traffickers of illegal drugs and those engaged in street sales.

The continuing goal of this site is to revitalize the neighborhood and provide the opportunity for the residents in the community to live, work, and raise children in a safe and clean environment. Objectives for this site are to (1) control violent and drug-related crime; (2) enhance public safety and security by mobilizing neighborhood residents; (3) create a healthy and supportive environment by preventing and combating crime, drug use, unemployment, illiteracy, and disease; and (4) revitalize the neighborhood.

Since fiscal year 1992, the Philadelphia Weed and Seed program has been awarded about $4 million, comprising grant and asset forfeiture fund awards. As of December 31, 1998, the Philadelphia Weed and Seed program had used about $3.6 million. Grant awards ranged from a high of about $1.2 million in fiscal year 1992 to a low of about $177,000 in fiscal year 1997. Asset forfeiture funds were awarded in 5 years and ranged from a high of about $288,000 in fiscal year 1994 to a low of about $103,000 in fiscal year 1996. See table III.7 for the funding history of the Philadelphia Weed and Seed site.

Table III.7: Philadelphia Weed and Seed Site's Funding History—FY 1992-99 (1998 Constant Dollars). (The fiscal year 1999 award is pending.)
Philadelphia's Weed and Seed site activities are focused on strategies to assist children and youths in becoming productive and law-abiding citizens; free them from drug and alcohol abuse; establish safe haven multiservice education centers (four are currently operating) in drug- and crime-free environments; continue Community Resource Centers that provide an array of social services; and conduct or provide antidrug marches/vigils, neighborhood clean-ups, employment training, community organizing, youth programs, volunteer recruitment, and information and referral. Table III.8 shows examples of activities funded by the Philadelphia Weed and Seed site, listed by program element.

Table III.8: Examples of Activities Funded by the Philadelphia Weed and Seed Site
- Partner organizations are to conduct collaborative investigations among law enforcement agencies. In addition, community residents provide information to the police mobile units as well as provide anonymous information to officers.
- Partner organizations participate in and support antidrug marches.
- Partner groups provide training and workshops relating to drug and alcohol treatment and prevention. Residents become involved by taking part in the workshops and training provided and by accepting referrals for drug rehabilitation programs.
- Schools, Shalom, safe havens, AmeriCorps, DARE programs, and similar partners: Prevention specialists teach conflict resolution in schools. Residents become involved by participating in the programs offered in the schools for their youths and volunteering in the community and safe havens.
- The goal of this activity is to motivate parents, youths, schools, and businesses to work together toward a clean and viable community. Youths volunteer to take part in area clean-ups and community service projects to earn community service hours, and residents clean the areas in front of their homes.

Philadelphia's Weed and Seed program officials told us an important goal for their program is to leverage additional resources from non-EOWS sources to become self-sustaining. During the course of our site visit, we identified several partnerships established by the Philadelphia Weed and Seed program to leverage additional resources. These cooperative arrangements involved partners such as the Pennsylvania Army National Guard and Villanova University. Table III.9 illustrates examples of leveraging efforts that were identified through our survey and site visit.

Table III.9: Examples of Philadelphia Weed and Seed Site's Leveraging Efforts
- DOJ's HIDTA assesses the extent of and change in the demographics of drug-using offenders and is to create an integrated and collaborative intelligence center to focus on the narcotics trade in the area.
- A partner provides conflict resolution training, camping trips, and demand reduction programs and assists in coordinating the DEFY program.
- Partner organizations provide food, drinks, and snacks to safe havens and after-school programs at no cost.
- Universities provide volunteers to assist with safe haven activities and other projects, such as smoke detector installations and clean-ups.
- The police department provides officers to patrol the Weed and Seed area on bikes, conduct special investigations, train block captains, and perform similar duties.

SITE'S PERFORMANCE MEASURES

In response to our survey, the site coordinator reported that this site uses a variety of methods to measure success in achieving its Weed and Seed program goals and objectives.
Methods cited include (1) conducting pretests and posttests for various programs implemented, (2) using sign-in sheets for various activities to monitor trends in community involvement, (3) conducting youth and parent surveys, and (4) using various police statistics to measure the success of operations. In addition, Temple University completed an evaluation of the Philadelphia Weed and Seed project in the fall of 1997, reporting the program's impact on the community between 1992 and 1997. Since the completion of this evaluation, it has been shared with the Attorney General of the United States, discussed with city officials, and discussed at Weed and Seed Steering Committee meetings. According to Philadelphia Weed and Seed site officials, they have begun to take action as a result of this evaluation. For example, the Weed and Seed site hosted a 1-day "Getting Back to the Strategy" session in March 1998. The purpose of this session was to bring representatives from all Weed and Seed components together as a group to make the Weed and Seed target area a clean and safe place to live and raise children.

San Diego, CA, was officially recognized as a Weed and Seed site in 1992. The Weed and Seed target area in San Diego includes three of the six neighborhoods that comprise the central sector of the southeast San Diego area. San Diego's target area has a total population of 22,137 (8,494 youths 17 years or younger; 13,643 adults 18 years and older). The total number of households is about 5,000, and the ethnic composition is approximately 54 percent African American, 33 percent Latino, and 13 percent other. The median family income is $18,062, and about 39 percent of the total population is below the poverty level.

During our visit to the San Diego Weed and Seed site, we and the EOWS program monitor who accompanied us identified a number of problems affecting the site's successful implementation of the Weed and Seed program. One of the problems we identified was the lack of direct U.S. Attorney and resident involvement in the steering committee. EOWS requires that the U.S. Attorney be involved with the steering committee and that residents be actively involved. On the basis of our observations during our site visit and the report from the EOWS program monitor, it appeared that the residents in the target area and the city agencies in the community did not always agree on how the Weed and Seed program should be implemented in San Diego. The site coordinator told us there was a lack of communication among the U.S. Attorney's office, the Mayor's office, and community residents on how Weed and Seed funds should be allocated and what activities and services should be provided to the target area.

During the course of our review, EOWS decided not to qualify San Diego for fiscal year 1999 funding on the basis of the above observations and its own analysis of the San Diego Weed and Seed site. As a result, San Diego city officials and the U.S. Attorney's office have renewed their commitment to the San Diego Weed and Seed site. They agreed to work together to restructure the existing Executive Steering Committee and provide the site with improved direction to ensure its future success in implementing the Weed and Seed program in San Diego.

Since fiscal year 1992, the San Diego Weed and Seed program has been awarded about $3.5 million, comprising grant and asset forfeiture funds. As of December 31, 1998, the San Diego Weed and Seed program had used about $2.9 million.
Grant awards ranged from a high of about $691,000 in fiscal year 1992 to a low of about $51,000 in fiscal year 1997. Asset forfeiture funds were awarded in 3 years and ranged from a high of about $268,000 in fiscal year 1994 to a low of about $103,000 in fiscal year 1996. See table III.10 for the funding history of the San Diego Weed and Seed site.

Table III.10: San Diego Weed and Seed Site's Funding History—FY 1992-98 (1998 Constant Dollars)

San Diego provides a variety of Weed and Seed activities, such as Neighborhood Policing Teams, which conduct bike and foot patrols of the community, and a safe haven, which teaches children about computers. Table III.11 shows other examples of the types of activities funded by the San Diego Weed and Seed program, listed by program element. Partners listed in the table include the San Diego Police Department; INS; ATF; FBI; DEA; the California Department of Corrections; the San Diego District Attorney; San Diego County Probation; the San Diego City Attorney; and Children's/Youth Choir, Inc.

Table III.11: Examples of the Types of Activities Funded by the San Diego Weed and Seed Program
- The San Diego Police Department coordinates and works with the task forces to arrest and adjudicate violent criminal offenders for activities such as gang involvement, drug trafficking, and car theft in the Weed and Seed target area.
- The Neighborhood Policing Team (NPT) works with local residents to address community concerns, including drug and gang activity, public intoxication, code compliance, properties in need of boarding and securing, and other nuisance and crime-related activities. The NPT uses foot and bike patrols and substations as a means of monitoring the target area.
- A course for children in grades 6-12 designed to teach them about the different parts and functions of computers. Children learn how to assemble and operate a computer, including installing and using software.
- A course for children ages 9-13 designed to provide them with art instruction, such as basic drawing techniques, and to develop artwork to be displayed at a "Community Pride Day" in the Weed and Seed target area.
- A community pride event intended to bring target area residents together in a celebration of diversity, unity, and community pride. An example is a festival at one of the target area parks providing food, fun and games, music, and other types of entertainment.

An important stated goal for San Diego's Weed and Seed program is to leverage additional resources from non-EOWS sources to become self-sustaining. During the course of our site visit, we identified several partnerships established by the San Diego Weed and Seed program to leverage additional resources. These cooperative arrangements involved partners such as the San Diego Police Department and the San Diego public schools. Table III.12 illustrates examples of leveraging efforts that were identified through our survey and site visit.

Table III.12: Examples of San Diego Weed and Seed Site's Leveraging Efforts
- The San Diego Police Department coordinates as well as participates in task force operations not funded by the Weed and Seed Program.
- A variety of programs (computer assembly course, arts and culture class, etc.) and services (youth mentoring, job assistance) are offered through partnerships with a number of agencies at cost or below market cost to the Weed and Seed program.
- The police department deploys paid staff, volunteers, and patrol officers to the target area.
- The city provides a satellite office, for use by the police department, dedicated to the Weed and Seed target area.
- The San Diego City Parks and Recreation service offers a rent-free facility to the Weed and Seed program for use as a safe haven. In addition, the city offers other administrative services with minimal overhead costs.
- Facilities are provided rent-free for a number of Weed and Seed activities.

SITE'S PERFORMANCE MEASURES

In response to our survey, the Weed and Seed site coordinator reported that Weed and Seed efforts in the San Diego target area were evaluated through a number of different methods. Evaluations of weeding efforts included (1) performing a comparative analysis of crime statistics compiled for the target area; (2) tracking police actions established by residents, community organizations, and businesses; and (3) maintaining statistics on community contacts made and events attended by police officers. For the seeding efforts, these methods included (1) requiring monthly activity reports and conducting periodic site visits for all Weed and Seed programs in the target area; (2) checking programs' compliance with the contracted scope(s) of services, which are to be based on Weed and Seed programs' goals and objectives; (3) tracking the number of participants in the programs; (4) evaluating the quality and/or duration of services provided to participants; and (5) evaluating program participants' service outcomes and their evaluations of the programs. While these measures might be useful in better understanding the activities funded by the San Diego Weed and Seed program, they primarily measure the level of activities, not program results. Further, while the analysis of crime statistics appears to be more outcome oriented, it is difficult to determine a direct link between a reduction in crime rates and Weed and Seed activities.

Woburn, MA, has been officially recognized as a Weed and Seed site since 1996. The target area is made up of the downtown area of Woburn and was selected due to its high rates of crime and drug sales and its high concentration of public housing developments and publicly assisted housing. During the course of our review, EOWS decided not to qualify Woburn for fiscal year 1999 funding. According to EOWS, Woburn had not submitted the quarterly financial reports and semiannual progress reports required by its grant award. However, Woburn would be eligible to be qualified for grant funds in fiscal year 2000 as long as the requirements of its previous awards are met.

The Woburn Weed and Seed program was awarded about $305,000 in grant fund awards for fiscal years 1996 and 1997: about $177,000 in fiscal year 1997 and about $129,000 in fiscal year 1996. As of December 31, 1998, the Woburn Weed and Seed program had used about $213,000. The Woburn Weed and Seed site was also awarded $50,000 in asset forfeiture funds in fiscal year 1996; however, in fiscal year 1999, EOWS deobligated these funds because the site was unable to use them for a law enforcement operation. See table III.13 for the funding history of the Woburn Weed and Seed site.

Table III.13: Woburn Weed and Seed Site's Funding History—FY 1996-98 (1998 Constant Dollars)
Woburn provides a variety of Weed and Seed activities, such as a safe haven, which includes helping children with homework assignments, and a Job Links career enhancement program, which provides job readiness training for adults. Table III.14 shows other examples of the types of activities funded by the Woburn Weed and Seed program, listed by program element.
• A coordinated operation conducted by the Woburn Police Department, NEMLEC, and DEA. Funds are to be used for police overtime.
• A partnership between community police officers and residents to reduce crime and the fear of crime through enforcement and community problem solving, using problem-oriented policing and empowering residents to create a safe neighborhood for themselves. Funds are to be used for police overtime.
• An after-school educational/recreational program run in the housing developments for children ages 5-10. The focus is on developing reading and social interaction skills and on alcohol/drug/safety education.
• A program that assists youths with homework assignments, classroom difficulties, and problems associated with language barriers. Other components include drama, art, and language clubs and an English as a Second Language program for parents.
• A youth tracker who assists community professionals and community police officers in tracking high-risk youths ages 12-17. The youth tracker also tracks youth crime, truancy, and youths in need of assistance and support.
• A program that provides résumé writing, career counseling, interview skills, and job readiness training for adults.

An important stated goal for Woburn's Weed and Seed program is to leverage additional resources from non-EOWS sources to become self-sustaining. During the course of our site visit, we identified several partnerships established by the Woburn Weed and Seed program to leverage additional resources. These cooperative arrangements involved partners such as the Woburn Housing Authority and the Boys and Girls Club. Table III.15 illustrates examples of leveraging efforts identified through our survey and site visit:
• A cooperative work arrangement with the state to conduct an evaluation of Woburn's Weed and Seed site.
• Using state funds, the city hired a substance abuse counselor to act as the liaison for drug prevention efforts between the city and other entities. This position was created as a direct result of Weed and Seed efforts.
• Assistance in administering the Weed and Seed grant and space for a variety of Weed and Seed activities.
• Space to house Weed and Seed programs, and use of partner vehicles for Weed and Seed activities at no charge.
• Staff and facilities for Weed and Seed-sponsored activities.

SITE'S PERFORMANCE MEASURES

In response to our survey and our site visit, the Weed and Seed site coordinator reported that the Weed and Seed efforts in the Woburn target area were evaluated through a number of different methods. The indicators used to measure the success of law enforcement efforts included tracking (1) the number and types of crimes within the target area, (2) the number of drug arrests, and (3) the number of drug cases opened in the target area. For the community-policing element, the indicators included monitoring the flow of information between Community Oriented Police officers and narcotics officers. For the prevention, intervention, and treatment element, the indicators included tracking attendance and observing the activities at the various Weed and Seed programs.
As for the neighborhood revitalization element, the indicators used included tracking the number of jobs found by participants in the Weed and Seed program and calculating the increase in economic activity within the target area as a result of the Weed and Seed effort. While these measures might be useful in better understanding the activities funded by the Woburn Weed and Seed program, they primarily measure the level of activities, not program results. Further, while the analysis of crime statistics and the tracking of jobs found by Weed and Seed program participants appear to be more outcome oriented, it is difficult to establish a direct link between these indicators and Weed and Seed activities.

The following are GAO's comments on the Department of Justice letter dated July 1, 1999.

1. DOJ suggested that (1) our report title should be changed to reflect our mandate to review the efficiency and effectiveness of the Weed and Seed Program and (2) some of our report captions should be modified. We believe our report title and captions better convey the message of our report; therefore, we made no modifications.

2. DOJ stated that the Grant Manager's Memoranda outline the basis and rationale for funding decisions. Our review of the Grant Manager's Memoranda showed that they did not provide a basis and rationale for funding decisions but rather provided a project overview, including purpose, goals and objectives, strategy, and project management. Further, EOWS management officials told us the narrative on this form is the same for all grantees; therefore, we do not believe these memoranda communicate the basis and rationale for qualifying new and existing sites for funding.

3. DOJ stated that we are suggesting that it routinely perform impact assessments of program components. We are not suggesting that EOWS routinely perform impact assessments. Our statement is meant as an example of a possible outcome measure.

4. DOJ stated that our report did not appropriately highlight positive program results. However, in the results in brief section we note that selected sites had taken actions toward self-sustainment, and we highlight the satisfaction that most local officials expressed with the activities funded by Weed and Seed. These results are discussed in greater detail in the body of this report. In addition, our survey results, in their entirety, are included in appendix II of the report.

5. DOJ requested that the final report be revised to reflect the controls that for years have been in place to document program management and funding decisions. We did not make this change for the reasons discussed in the agency comments section of this report.
Pursuant to a legislative requirement, GAO reviewed the effectiveness of the Department of Justice's (DOJ) Weed and Seed Program, focusing on how: (1) the program is managed by DOJ's Executive Office for Weed and Seed (EOWS); (2) EOWS monitors local Weed and Seed sites to ensure that grant requirements are met; (3) EOWS determines when sites have become self-sustaining; and (4) EOWS and selected sites are measuring program results. GAO noted that: (1) EOWS has not established an adequate internal control requiring that significant program management decisions be documented; (2) without this control, EOWS management has not always fully documented EOWS decisions; (3) for example, in reviewing 12 of the 70 fiscal year (FY) 1999 new site qualification funding decisions, GAO found that for 5 of these 12 decisions, documentation was insufficient for GAO to determine how inconsistencies among external consultants' recommendations, grant monitors' recommendations, and EOWS management decisions were reconciled; (4) in FY 1999, EOWS made decisions to qualify 164 of the existing 177 sites for continued funding, although in some cases, EOWS grant monitors recommended against additional funding; (5) however, available documentation was insufficient for GAO to determine the basis and rationale for EOWS' deciding to qualify these sites for continued funding; (6) for the remaining 13 sites that EOWS decided not to qualify for continued funding, documentation was sufficient to determine the basis and rationale for these decisions; (7) EOWS also did not always ensure that local Weed and Seed sites met critical grant requirements; (8) progress reports are an important tool to help EOWS management and grant monitors determine how sites are meeting program objectives and to assist in making future grant qualification decisions; (9) EOWS has not developed criteria to determine when sites have become self-sustaining and when to reduce or withdraw Weed and Seed funds, even though the goal of sites' becoming self-sustaining is central to the program; (10) while GAO identified actions that selected sites had taken toward self-sustainment, at the time of GAO's review, no site's funding had been reduced or withdrawn as a result of its efforts to become self-sustaining during the 9 years of the program's existence; (11) EOWS' performance indicators generally did not measure program results; (12) while GAO's review was in progress, EOWS changed some of its performance indicators in an attempt to better measure how well sites were meeting program objectives; (13) however, the revised indicators still primarily tracked program activity rather than results; (14) despite the general lack of results-oriented performance indicators, most local officials with whom GAO spoke commented favorably on the activities funded by the local Weed and Seed sites; and (15) they believed that a key ingredient in the Weed and Seed Program's success was the commitment of the mayors' and U.S. Attorneys' offices and civic and business leaders.
The Human Immunodeficiency Virus/Acquired Immunodeficiency Syndrome (HIV/AIDS) epidemic continues to spread rapidly in the developing world, where more than 90 percent of the 30 million people living with HIV reside (see fig. 1.1). Moreover, the UNAIDS Secretariat recently reported that more than 90 percent of the 5.8 million new infections in 1997 (up from 3.1 million in 1996) were in developing countries (see fig. 1.2). Sub-Saharan Africa has the worst infection rate, accounting for 3.4 million new infections in 1997; in that region, 7.4 percent of people aged 15 to 49 are infected. Estimates for South and South-East Asia indicate the disease is also rapidly spreading in that region, with 6.4 million people currently living with HIV/AIDS and 1.3 million new infections in 1997. (The established market economies, a regional grouping used in the figures, include North America, Western Europe, Australia, and New Zealand.)

In many developing countries, HIV/AIDS has begun to erode decades of gains in health, child survival, life expectancy, education, and economic development. For example, U.S. Bureau of the Census projections for Zambia indicate that by 2010, AIDS may make infant mortality rates nearly 60 percent higher than would have been expected without the disease. Similarly, projections indicate that by 2010, life expectancy will decline as a result of AIDS from 70 years to less than 35 years in Zimbabwe and from 54.5 years to 35.5 years in Uganda. Since the start of the epidemic, more than 8 million children have lost either their mother or their father because of AIDS. AIDS' impact on families and public health systems is weakening economies as people in their prime working years are afflicted by the disease and governments and families divert scarce resources to care for them for extended periods of time.

The donor community is spending approximately $250 million a year to address the HIV/AIDS epidemic in the developing world. The United States is the largest single donor, contributing $117 million in 1997 through the U.S. Agency for International Development (USAID), an amount that includes specific support for UNAIDS. However, HIV/AIDS poses serious challenges to the world community because of the extent of the epidemic and the cost and difficulty of changing deeply rooted traditions and behaviors that contribute to the spread of the disease. According to a study commissioned by WHO, between $1.5 billion and $2.9 billion would be needed annually from donors and affected countries to implement behavioral and blood safety strategies to prevent HIV/AIDS in developing countries. Moreover, other epidemics have had their roots essentially in medical problems and could be addressed through biomedical remedies from public health systems; absent a vaccine or cure, however, slowing the reach of this virus must be accomplished by addressing such fundamental cultural and social traditions as the role of women, sexual practices, and inheritance laws. For example, according to USAID, tradition and laws in Kenya do not allow women to inherit property. Without skills or experience in earning money, women whose husbands die often have no recourse other than to engage in prostitution.

USAID and the United Nations first began to address the epidemic in the mid-1980s. While both USAID and the United Nations seek to reduce the spread of the epidemic, they have somewhat different yet mutually supporting roles, objectives, and coverage.
As a bilateral agency, USAID works in partnership with governments, other donors, and private organizations to support research and implement HIV/AIDS interventions in the 28 countries where it has major programs. The U.N.'s role is to advocate, mobilize, and coordinate the international response worldwide, in addition to managing HIV/AIDS activities in 152 countries.

Since it began its HIV/AIDS assistance program in 1986, USAID's goal has been to reduce the incidence of new HIV/AIDS infections. In the 1980s, very little was known about the epidemic or how to fight it. As a result, USAID focused its initial efforts on understanding the causes and extent of the epidemic and on identifying ways to prevent its spread. At the direction of Congress in 1986, USAID supported WHO's Global Program on AIDS (GPA), and it also paid for public and private research efforts and activities in the field. These field activities included operations research on interventions that prevent the spread of HIV/AIDS; surveillance and analysis of the incidence, spread, and impact of the disease; and assistance in countries' design and implementation activities. During this learning phase, USAID reported that it was the first donor to introduce HIV/AIDS prevention activities in most countries. Further, by providing short-term technical assistance to USAID missions in more than 74 countries and funding small-scale projects to prevent new infections, it educated USAID staff and host country officials about the epidemic.

By the early 1990s, USAID had become more knowledgeable about the disease, and Congress increased funding for HIV/AIDS (see fig. 1.3). USAID designed a strategy to focus on country-level projects that could have a measurable impact on the epidemic. From 1991 to 1997, USAID supported the AIDS Control and Prevention (AIDSCAP) project. By far the most ambitious international HIV/AIDS prevention effort ever undertaken, AIDSCAP was a worldwide program intended to help USAID overseas missions design and implement HIV/AIDS prevention projects. AIDSCAP directly managed comprehensive projects in some countries and supplied technical assistance to USAID missions as requested. USAID relied primarily on private voluntary organizations (PVO) and nongovernmental organizations (NGO) to implement its HIV/AIDS programs, both at its Washington, D.C., headquarters and in the field.

By 1997, USAID had incorporated the goal of reducing HIV/AIDS transmission as one of five objectives in its global health improvement portfolio and had delineated performance goals and indicators to measure its progress. Agency funding for HIV/AIDS activities had increased (to about $125 million in 1993, leveling off at about $117 million a year), and USAID shifted more resources to missions to develop their own comprehensive programs. Headquarters' efforts became focused on providing technical assistance as needed and supporting research. In fiscal year 1997, the majority of USAID's funds supported project activities at the country level: major programs in 28 countries ($81 million), followed by centrally managed technical assistance and research support ($20 million) and grants to UNAIDS ($16 million). In 1997, USAID initiated three cooperative agreements with several PVOs and has a fourth in process.
These agreements provide up to $290 million over 5 years for HIV/AIDS activities: about $40 million to conduct operations research and field testing to refine and develop best practices for prevention and care; up to $150 million for technical assistance, as requested by missions; up to $75 million to implement programs that advertise and promote the appeal, availability, and use of condoms, as requested by missions; and about $25 million to provide program design/monitoring and evaluation, lessons learned, and information dissemination services.

WHO first began collecting and publishing information on HIV/AIDS in 1981. The U.N. General Assembly directed WHO to develop and coordinate the agency's first program to respond to HIV/AIDS by creating the Special Program on AIDS in 1987, subsequently renamed GPA in 1988. GPA's mission was to strengthen the capacity of governments to respond to the epidemic and to help establish national AIDS programs. WHO provided technical and financial support, ranging from $100,000 to $400,000, to initiate national programs. WHO is credited with making major contributions to nations' efforts against the epidemic, including protecting blood supply systems, strengthening national behavior research, and improving disease surveillance.

In the early 1990s, U.N. officials and donors increasingly recognized the need for a multisectoral response to the complex challenges of the HIV/AIDS epidemic, including the social, economic, and development issues affecting the spread of the virus. They realized that WHO's medically based response was insufficient. They were concerned that countries were dependent on GPA for operational support and, as a result, were not devoting enough of their own resources to the effort. They also expressed the need for better coordination and delineation of roles and responsibilities among the various U.N. agencies. To address these concerns, on January 1, 1996, the United Nations replaced GPA with the Joint United Nations Programme on HIV/AIDS (UNAIDS). The 1996-97 biennial budget for the UNAIDS Secretariat was $120 million, of which the United States contributed $34 million, or about 28 percent.

The U.N.'s goal in creating UNAIDS was to lead a broad-based, expanded, worldwide effort to prevent the transmission of HIV/AIDS. UNAIDS is composed of a Secretariat and six U.N. agency cosponsors: the United Nations Children's Fund (UNICEF); the United Nations Development Program (UNDP); the United Nations Population Fund (UNFPA); the United Nations Educational, Scientific, and Cultural Organization (UNESCO); WHO; and the World Bank. Each cosponsor was expected to expand its financial support for HIV/AIDS efforts, to try to mobilize resources for HIV/AIDS in affected countries, and to coordinate with other cosponsor agencies at the country level.

Unlike WHO under GPA, the UNAIDS Secretariat was not expected to provide significant financial support and technical advisers to countries. Instead, it was established primarily as a coordinating body and was expected to advocate increased political and financial support for HIV/AIDS activities, to devise a framework for performance measures to be used in managing HIV/AIDS activities, to provide technical support and best practice information to help develop and carry out national HIV/AIDS strategies, and to organize entities at the country level, called "theme groups," as the forum for coordinating U.N. efforts. Theme groups were to be composed of field representatives of U.N. cosponsor agencies.
The groups were expected to work together to assist national governments in developing and implementing HIV/AIDS programs. As of May 1998, 127 HIV/AIDS theme groups were operating in 152 countries.

At the request of the Chairman of the House International Relations Committee and Representative Jim McDermott, we reviewed the contributions made by USAID and the United Nations in designing and implementing programs to slow the spread of HIV/AIDS. Specifically, we examined (1) the contributions USAID has made to the global effort to prevent HIV/AIDS, and the methods USAID uses to provide financial oversight for its HIV/AIDS prevention activities; and (2) the extent to which UNAIDS has met its goal of leading an expanded and broad-based, worldwide response to the HIV/AIDS pandemic. We did not evaluate the program's impact on the HIV/AIDS epidemic or whether U.S. support for the program should continue.

To examine USAID's contributions to the global effort to prevent HIV/AIDS, we reviewed expert studies on the disease and interventions, and we reviewed internal and external USAID project evaluations from 1995 to 1998. We compared the reported data with evidence we gathered in the field. To observe USAID efforts in the field, we chose countries in different parts of the world with both emerging and advanced epidemics. In countries with emerging epidemics, HIV/AIDS is primarily concentrated in high-risk groups; in countries with advanced epidemics, it has spread to the general population. In Latin America and the Caribbean, we visited the Dominican Republic and Honduras, both of which have emerging epidemics. USAID considers Honduras the epicenter of the epidemic in Central America because it has the highest concentration of HIV-positive people in its high-risk groups. In Asia, we visited India, which has more HIV-positive people than any other country in the world, although infection is still largely concentrated in high-risk groups, and the Philippines, which has an emerging epidemic. In Africa, we visited Zambia, which has an advanced epidemic, with about 20 percent of the general population infected with HIV.

In the countries we visited, we reviewed internal USAID mission project papers and 1997 mission progress reports and observed USAID projects. To gather evidence of the effectiveness of USAID's country-level projects, we reviewed behavior surveys and available surveillance data. We also met with mission directors; population, health, and nutrition officers; HIV/AIDS project officers; staff from PVOs and NGOs implementing projects; host government officials; project participants and recipients of services, including commercial sex workers, men who have sex with men, and youth; volunteers; a condom social marketing organization; private sector representatives involved in HIV/AIDS activities; and people living with HIV/AIDS. We visited project sites to see how interventions were implemented and to discuss the views of the recipients of USAID activities.

To examine the level of financial oversight USAID exercised over program activities, we reviewed Office of Management and Budget (OMB) and USAID guidance relating to the use of cooperative agreements and contracts. We reviewed several relevant contracts, cooperative agreements, and associated procurement records relating to active HIV/AIDS projects to determine whether they provided for appropriate oversight as required by federal procurement regulations and guidance from OMB and USAID.
We discussed financial oversight responsibilities with USAID project managers, procurement staff, and financial management officers in headquarters and in the five countries we visited. We also reviewed the financial record-keeping and reporting requirements that USAID placed on recipients of USAID funds. In addition, we reviewed quarterly expenditure reports from PVOs from 1994 through 1997 and discussed financial reporting and selected management and accounting policies with PVO staff to determine their compliance with OMB and agency provisions. We reviewed USAID's administrative approval and payment procedures and studied recent USAID assessments of its financial and operational oversight responsibilities with PVOs. We reviewed pre-award evaluations for four headquarters-led projects and two mission-led projects and reviewed audit reports related to the centrally managed projects in the five countries we visited. We also met with Office of the USAID Inspector General (OIG) staff to discuss their reviews of these reports and independent audit assessments.

As an agency of the U.S. government, we have no direct authority to review the operations of multilateral organizations such as the United Nations. However, throughout this review we obtained broad access to agency staff members and official information at the headquarters, regional, and country levels. To determine whether UNAIDS has achieved its goal of leading an expanded and broad-based, worldwide response to the HIV/AIDS epidemic, we measured progress against criteria set forth in the U.N. Economic and Social Council resolution endorsing the creation of the Joint United Nations Programme on HIV/AIDS, the memorandum of understanding signed by the six cosponsoring agencies, and the strategic plans of the UNAIDS Secretariat and the cosponsoring agencies.

We conducted audit work at the UNAIDS Secretariat in Geneva, Switzerland, and at the headquarters of each of the six cosponsor agencies, including the Washington headquarters of the Pan-American Health Organization. At the UNAIDS Secretariat, we interviewed officials from the Office of the Executive Director and the Departments of External Relations; Policy, Strategy and Research; and Country Support. We obtained and analyzed staffing and budget documents of the Secretariat and analyzed the scope of work for each department. We also reviewed several of the "best practices" documents produced by the Department of Policy, Strategy and Research and discussed these outputs with knowledgeable officials from USAID. We interviewed officials from the cosponsor agencies charged with directing their agencies' HIV/AIDS activities, as well as officials from other cosponsor offices and departments relevant to addressing HIV/AIDS, such as WHO's Global Tuberculosis Program.

To determine U.N. spending on HIV/AIDS, we obtained expenditure data for 1992 to 1997 directly from the UNAIDS Secretariat and from the headquarters offices of the six cosponsor agencies. We also obtained agency expenditure data reported by the UNAIDS Program Coordinating Board. We did not verify the data reported by, or provided directly to us by, the agencies and the UNAIDS Secretariat. In attempting to determine the level of spending by the major donors and developing nations, we reviewed preliminary data from a study on global HIV/AIDS expenditures conducted by the UNAIDS Secretariat and Harvard University's School of Public Health.
We also met with government officials to discuss the level and type of financial support for HIV/AIDS activities and the barriers to increasing resources to fight the disease. To determine the level of activity by the private sector in support of HIV/AIDS, we interviewed host government, U.N., USAID, and NGO officials in our case study countries and analyzed reports prepared by the UNAIDS Secretariat. To gain an understanding of UNAIDS' progress in addressing the HIV/AIDS pandemic over time and of issues surrounding the transition from WHO's Global Program on AIDS to the current Joint Program on HIV/AIDS, we interviewed a U.N. diplomat instrumental in the negotiations establishing UNAIDS and knowledgeable officials from U.N. agencies, USAID, the Department of State, and the U.S. Centers for Disease Control.

To determine how well cosponsor agencies work together and the types of interventions provided, we reviewed surveys of theme group participants provided by the UNAIDS Secretariat and conducted case studies of U.N. programs in the five countries we visited. While in these countries, we interviewed officials of, and obtained strategic planning documents from, most of the U.N. cosponsor agencies active in the country. We also interviewed host government officials, including officials from the national AIDS programs and the Ministries of Health; USAID officials; officials of other bilateral donor programs; international and local PVOs and NGOs; and local activists and people living with HIV/AIDS. In addition, we observed firsthand the intervention activities of the U.N. agencies. We conducted our work from July 1997 through June 1998 in accordance with generally accepted government auditing standards.

USAID has elevated HIV/AIDS to an agency priority and developed a targeted strategy to achieve its objective of reducing the incidence of HIV/AIDS. USAID's main contributions have been (1) support for research that helped to identify interventions ultimately proven in clinical trials to prevent HIV transmission and (2) implementation of projects at the country level that increased awareness of the disease, reduced risky behaviors, and increased access to treatment of sexually transmitted diseases (STD) and to condoms, which have helped slow the spread of the disease in target groups. USAID relies primarily on cooperative agreements with PVOs to implement its programs, both at headquarters and in the field. Under the terms of these agreements, the primary responsibility for financial oversight rests with recipients. USAID's oversight consists of pre-award evaluations, quarterly expenditure reports, and annual external audits. OIG officials said that there were no indications from audits conducted that systemic problems existed.

USAID has funded public and private research efforts to identify interventions that became the principal tools used in the global response to HIV/AIDS. When USAID began its program in the mid-1980s, medical experts recognized that the key to slowing HIV transmission was behavior change and that traditional medical responses were not sufficient. However, research was only beginning to identify effective interventions. USAID capitalized on expertise developed in its health and child survival programs and built upon the research conducted by WHO to test and implement interventions targeted at HIV/AIDS prevention.
With support from USAID and other donors, experts identified interventions that, when implemented in a culturally appropriate manner and combined in a coordinated effort, have been proven through clinical trials and longitudinal studies to have an impact on the spread of AIDS. They are
• information, education, and counseling to raise awareness of the threat of HIV/AIDS in an effort to promote positive behavior changes, such as abstinence, a reduction in the number of sexual partners, and safer sex practices;
• treatment of STDs, which, if left untreated, can facilitate transmission of the HIV infection; and
• promotion of increased condom use through condom "social marketing" to prevent transmission of the virus.

The first intervention, attempting to change risky behavior through increased awareness, has posed a particular challenge to HIV/AIDS experts. The behaviors that result in transmission of the virus are often deeply rooted in social and cultural traditions, and people often find them difficult to discuss. For example, in some African countries, polygamous unions may force "junior wives" into prostitution to earn money. In addition, research on ways to promote change in sexual behaviors is not advanced. Even when effective approaches have been identified, they may not always be transferable from one cultural environment to another. For example, USAID's largest HIV/AIDS program, AIDSCAP, noted the difficulty of encouraging Rwandan refugees to take individual action to change their risky behavior when they had no control over the rest of their lives.

USAID supported a number of efforts to identify approaches to achieving behavioral change through clinical trials of HIV prevention counseling and testing in Africa, Asia, Latin America, and the Caribbean. For example, AIDSCAP worked with the United Nations and research institutions from Kenya, Tanzania, Trinidad, and the United States to assess the efficacy of efforts intended to promote voluntary HIV counseling and testing. In 1997, USAID signed a cooperative agreement to support a 5-year, $40-million program for operations research and field testing of interventions to further refine and develop best practices for prevention and care activities.

In the five countries we visited, USAID projects used creative approaches to increase HIV/AIDS awareness and promote behavior change. For example, in Honduras, USAID, in conjunction with UNICEF, supported youth theater groups that develop plays with HIV/AIDS-related themes. To reach out-of-school youth, USAID supported pregame mock soccer matches, in which HIV Virus and Death teams battled Abstinence and Condom teams. Also, in the Indian state of Tamil Nadu, USAID targeted education efforts at truck drivers, who had been identified as key transmitters of the virus. On a field trip, we observed roadside meetings at which counselors discussed the risks of HIV transmission with truckers and demonstrated how to use condoms correctly.

USAID was among the pioneers in funding research to determine whether having an STD increases the risk of transmitting HIV. This research concluded that STDs, especially those that cause lesions, provide a pathway for HIV to enter the body and that STDs were highly prevalent in many of the populations most affected by HIV/AIDS. As early as 1991, USAID reported that the risk of HIV transmission increases significantly when other STDs are present, and it worked with WHO to develop standardized treatments.
The link between STDs and HIV transmission was eventually confirmed by the results of a 3-year trial in Tanzania, which concluded that improved STD treatment reduced HIV incidence by about 40 percent. Improving STD treatment capacity was a component of USAID's AIDS prevention strategy in every country we visited. In Honduras, USAID supported the expansion of health clinic services to include treatment of STDs. Further, USAID's AIDSCAP program supported STD research in the Philippines, trained health care providers in STD treatment in India, and developed national guidelines for improved STD care in 18 other countries.

Another intervention developed and tested with USAID's support is condom social marketing, which relies on increasing the availability of, attractiveness of, and demand for condoms among target populations through advertising and public promotions. USAID projects encourage production and marketing of condoms by the private sector to ensure the availability of affordable, quality condoms when and where people need them. The development of this marketing strategy was based on USAID-sponsored research and experience showing that people are more likely to use condoms if they are affordable, high quality, and available when and where needed. World Bank data demonstrated that condom sales increased dramatically in many developing countries after condom social marketing programs were introduced. For example, condom sales in Brazil rose from 406,000 in 1991 to nearly 27 million in 1996 after condom social marketing programs began.

USAID, as well as UNAIDS, the World Bank, and private research institutions, has noted the difficulty of determining the direct impact of interventions on the incidence of AIDS. The interventions used by USAID have been proven to affect HIV/AIDS incidence because they result in behavior changes that reduce the risk of disease transmission. However, it is difficult to determine the link between a particular activity or program and reductions in the incidence of HIV/AIDS because of the disease's long incubation period: a person can be infected as a result of an activity that occurred 7 to 10 years earlier. USAID measures the impact of its HIV/AIDS activities in its target groups by conducting blood tests for HIV incidence but also uses proxy indicators such as behavioral change and condom sales. Public health experts agree that these proxies are reasonable indicators of changes in HIV incidence.

Despite the limitations in evaluating impact, USAID can demonstrate that it has contributed to the fight against HIV/AIDS through its interventions in the countries where it had programs. For the global project, AIDSCAP, and for each mission, USAID established goals and identified target groups based on country needs. To assess progress toward achieving these goals, USAID conducted internal and external evaluations and behavioral surveys and tested people in the target groups for HIV. Data show that USAID projects increased knowledge about HIV/AIDS and how to prevent it, changed risky behaviors, and increased access to STD treatment and condoms, thus helping to slow the spread of AIDS in target groups.

From 1991 to 1997, the goal of USAID's $200-million global project, AIDSCAP, was to support research, to help missions develop and implement HIV/AIDS programs, and to provide technical assistance for mission-led programs.
AIDSCAP devised and carried out AIDS prevention programs in 18 countries and supplied technical assistance to 25 other USAID programs. Using a variety of evaluation instruments, such as behavioral surveys and blood testing for HIV, USAID evaluated AIDSCAP's projects and concluded that AIDSCAP's activities increased knowledge about HIV and effected a change in attitude toward those affected by the virus. In target groups in many of the countries, data indicate that AIDSCAP activities resulted in altered perceptions of individual risk and less risky sexual behaviors. For example, in the Ivory Coast, a USAID survey of 1,000 15- to 25-year-olds in 30 targeted villages indicated that 47 percent had reduced their number of sexual partners in response to AIDSCAP activities. USAID also reported that more than 275 million condoms were distributed with USAID support in 1996, or approximately 27 percent of all socially marketed condoms in developing countries. AIDSCAP implemented HIV/AIDS programs in the Dominican Republic and Honduras; our observations on these two efforts follow.

The goal of USAID's AIDSCAP project in the Dominican Republic was to improve knowledge of, and access to, AIDS prevention practices and services in target groups. Our review of behavioral and HIV surveillance data and our interviews with participants indicate that USAID had an impact in both areas. USAID reported that the percentage of young people who knew of at least two preventive measures increased from 45 percent to 100 percent between 1993 and 1996 after they received AIDSCAP-developed information on the disease. In addition, the use of condoms by commercial sex workers rose from 65 percent in 1992 to 98 percent in 1996; commercial sex workers with whom we met said they always tried to convince their clients to use condoms. Moreover, USAID helped a multinational pharmaceutical company develop a low-cost condom, which significantly increased the availability of condoms, and obtained free air time on radio stations to broadcast prevention messages. Data from one clinic targeted by AIDSCAP projects indicated that the percentage of HIV-positive commercial sex workers coming to the clinic declined from 5.8 percent in 1995 to 3.3 percent in 1996. Moreover, surveys undertaken upon completion of the project showed significant declines in risky behavior in targeted groups. For example, the percentage of youth who said they were sexually active declined from 73 percent in 1992 to 30 percent in 1996.

In Honduras, AIDSCAP designed and implemented a program to support the government's HIV/AIDS control program and to increase the use of STD/AIDS prevention practices among high-risk groups, including increasing access to STD treatment. The goal of the program was to reduce the incidence of HIV/AIDS in specific regions of the country. However, because of difficulties in getting started, the project operated for only 2 years. According to USAID officials in Honduras, they began negotiating with AIDSCAP in 1993 to develop a program, but AIDSCAP's proposals did not adequately emphasize participation by the government or involvement by local NGOs. USAID did not reach agreement with AIDSCAP until 1995, 2 years before the project was scheduled to end.
USAID evaluations and discussions with NGO personnel indicated that the project had successes but should have done more to prepare its local country office to assume the financial and managerial responsibilities for the projects in an effort to ensure sustainability. In 1997, after the AIDSCAP office was converted to a locally registered NGO, the mission awarded the new NGO a USAID grant to continue prevention efforts. However, because of the NGO's lack of financial and managerial capacity, it was required to take corrective actions before the new project could begin.

Data are not yet available to determine the impact of AIDSCAP on the incidence of HIV/AIDS in Honduras. Early in the AIDSCAP project, USAID conducted a behavioral survey to gather baseline data on risky behaviors. However, because the project operated for only 2 years, USAID will not follow up with a survey to measure behavioral change resulting from its activities until 1999. The mission used other indicators to measure the success of the project. It reported that it had exceeded its goal for increasing the number of condoms distributed and that it had expanded access to STD treatment. USAID upgraded a number of Ministry of Health-run health clinics to increase access to STD prevention and treatment. Government officials informed us that the number of women seeking STD treatment had risen since completion of a USAID-funded STD clinic in a poor area of the capital city, Tegucigalpa. Recipients of USAID-supported activities also told us that risky behavior had declined. For instance, the leader of a gay men's group said that the amount of information and the number of condoms requested by the gay Honduran community had increased significantly since an AIDSCAP-supported NGO began aggressive education activities. Furthermore, mission officials stated that the AIDSCAP project had helped publicize HIV/AIDS, had encouraged the host government to begin to address the epidemic, and had established a network of NGOs with the capacity to promote HIV/AIDS prevention activities. We met with a number of NGOs that, according to USAID officials, are competent and provide the key to sustaining activities after USAID funding ends.

We also reviewed mission-level projects in three countries: India, the Philippines, and Zambia. In these countries, USAID missions designed their own projects and hired PVOs and other organizations to manage activities; AIDSCAP provided limited technical support to these missions. We found that most programs were successful, with the exception of Zambia, where problems significantly affected USAID's ability to have an impact on the spread of the disease.

Our review of HIV surveillance and behavioral survey data, visits to projects, and interviews with recipients of assistance indicate that USAID has made progress toward meeting its goal of reducing HIV transmission among target groups in the southern Indian state of Tamil Nadu (see fig. 2.1). The mission measured increased awareness about the disease and behavioral change as indicators of change in HIV transmission and reported progress in its target groups. USAID is accomplishing its objective by establishing and building a network of technically capable NGOs working to alter behaviors and increase STD treatment and condom distribution.
At the time of our fieldwork, USAID had worked in only 1 of India's 27 states because available funding did not permit USAID to develop comprehensive programs nationwide, though USAID officials said they planned to expand to one other state, Maharashtra. However, other donors were active elsewhere in India. States, rather than the national government, manage health care delivery, and USAID chose Tamil Nadu and Maharashtra because they have a high percentage of HIV-positive people and because the state governments are politically and financially supportive of AIDS prevention efforts. Behavioral surveys in target groups showed, for example, a reported reduction in contact with nonregular sex partners from 38 percent to 27 percent and an increase in condom use from 55 percent to 66 percent. Among factory workers, condom use increased from 28 percent to 41 percent. USAID has trained 800 volunteers, peer educators, and NGO leaders to implement community-based interventions and has trained 60 health care providers in the diagnosis and management of STDs since 1992.

In the Philippines, the USAID mission's goals were to increase knowledge and to change attitudes and behaviors to prevent STD/AIDS infection among high-risk groups and to collect comprehensive baseline data on HIV incidence and behavior at 10 sites. Our review of an independent evaluation and discussions with target groups in the Philippines indicated that USAID interventions had been effective in increasing awareness and changing behavior. In addition, USAID's surveillance activities provided data on HIV incidence and risky behavior among target group populations. An independent evaluation conducted in 1997 concluded that USAID's activities helped avert an increase in HIV/AIDS, as the percentage of people who are HIV-positive remained below 1 percent in targeted groups. Behavioral surveys demonstrated that USAID activities to expand knowledge about the disease led commercial sex workers to increase their use of condoms. Data also indicated that male clients exposed to USAID interventions used condoms much more frequently than those with no contact with the project (75 percent compared to 41 percent).

Our reviews of evaluations and interviews with NGO staff also indicated that USAID increased the capacity of NGO staff to implement AIDS prevention activities. USAID project activities are carried out by staff working for 20 local organizations that have been trained as a result of USAID activities. We met with a number of NGOs that were successfully implementing prevention strategies under the guidance of USAID. For example, we accompanied a local NGO to a site frequented by gay men, where the NGO distributes pamphlets, discusses HIV/AIDS risks, and promotes condom use.

Our review of USAID activities in Zambia indicated that the mission has had a difficult time developing an HIV/AIDS prevention program, although it did have some successes despite its problems in designing an effective program. Since 1992, the mission has redesigned its program three times, with different goals and implementing organizations. Initially, the USAID mission in Zambia established a goal of reducing HIV transmission. It subsequently determined that this goal was unrealistic and refocused its objective on changing behavior in high-risk groups. USAID's difficulty in developing a program stemmed, in part, from the national government's transition to a decentralized approach to HIV/AIDS and health care delivery. However, according to USAID mission officials and an independent evaluation, problems occurred primarily because the U.S.
educational institution managing USAID's program did not have the necessary expertise to implement large-scale HIV/AIDS activities overseas. An evaluation of the project found a number of weaknesses, including a lack of project monitoring and a reliance on U.S.-based institutions to implement activities rather than building the capability of local NGOs. In addition, host government officials informed us that the implementing agency designed and implemented activities without host country involvement. The evaluation also found that the project had not increased the number of patients treated for STDs, an important component of USAID's HIV/AIDS strategy.

Despite USAID's management problems, we saw some successes in Zambia (see fig. 2.2). Our discussions with youth groups indicated an increased awareness of HIV/AIDS. USAID reported that condom sales exceeded expectations, increasing by 22 percent in 2 years, and that the number of casual sex partners in the target groups decreased. Additionally, USAID mission officials said that they had been instrumental in convincing the Zambian government to integrate HIV/AIDS activities into the national health plan and that they had had some success in addressing one of the social and cultural factors that contribute to the spread of the disease. Specifically, USAID worked with traditional healers and the legal community to discourage a custom whereby recently widowed women engage in sexual relations to "cleanse" their bodies of the spirit of the deceased.

USAID conducts financial oversight for its HIV/AIDS activities primarily through pre-award evaluations, quarterly financial reports, and annual financial audits of its private sector partners. Largely in response to congressional direction, USAID officials decided to rely on U.S.-based PVOs and indigenous NGOs to implement the agency's HIV/AIDS program. USAID officials in headquarters and the field told us that, to manage these private partners, the agency has chosen almost without exception to use a funding arrangement called a "cooperative agreement." Cooperative agreements are similar to grant agreements but are used when agencies expect to be substantially involved in the activity to be carried out. These agreements allow USAID and recipients to easily adapt the scope of work and shift budgeted resources to changing needs; as a result, activities can be adjusted to meet agency goals without a formal process for review and approval. Recipient organizations have the primary responsibility for financial management. OIG officials said that the audits conducted gave no indication of systemic problems.

OMB guidance outlines the responsibilities of awarding agencies and funding recipients under cooperative agreements. The guidance states that agencies should require organizations to have requisite financial and management systems in place; to agree to comply with various requirements, such as guidelines for allowable costs; and to provide procedures for periodic financial and progress reporting. With respect to monitoring, OMB's general guidance is that while the agency has the responsibility to ensure that public funds are managed prudently, day-to-day financial management is the responsibility of the recipient. USAID project managers use several methods to ensure financial oversight: pre-award evaluations, quarterly expenditure reports, and annual audits.
Pre-award evaluations are conducted as necessary before an award is granted, to assess whether prospective recipients have adequate financial and management control systems to properly manage, report, and account for USAID funds. If a recipient has recently received a federal award and is known to have the technical and financial capacity to perform the job, USAID conducts an informal review of its systems and controls. Otherwise, a team will go on-site to conduct a formal evaluation. We examined pre-award surveys for four headquarters projects and two mission bilateral projects. USAID conducted pre-award evaluations for all of them, and with the exception of the award to a local NGO in Honduras, they were informal reviews because the recipients were known to USAID. In Honduras, USAID conducted a formal evaluation because the NGO selected to manage the mission’s HIV/AIDS project after AIDSCAP ended did not have previous experience managing a USAID project. USAID found problems with the NGO’s accounting system, procurement and contracting procedures, and personnel management system. Before the award was made, the NGO was required to undertake corrective actions. Recipients of cooperative agreements are also required to provide quarterly expenditure reports to the USAID project manager. These are summaries of expenditures listed in categories such as salaries and travel. For the 6 years of the AIDSCAP project, we found that USAID reviewers approved all expenditure reports without disapproving any costs. OMB guidance stipulates that agencies must determine whether costs incurred are in accordance with terms of agreements and are reasonable and allowable. However, the guidance does not define the roles and responsibilities of an awarding agency for monitoring the recipient’s compliance with these standards. Project managers told us that they reviewed expenditure reports primarily to compare the level of funds expended with the progress toward completion of project activities. According to USAID officials, USAID uses annual financial audits required by the Single Audit Act as its principal tool for financial oversight. These audits are intended, among other things, to promote sound financial management, including effective internal controls, with respect to federal awards administered by nonfederal entities such as PVOs. As such, they provide information to federal oversight officials and program managers on whether an entity’s financial statements are fairly presented and reasonable assurance on whether federal assistance programs are carried out in accordance with applicable laws and regulations. The single audit reports from 1992 to 1996 of the PVO that implemented the AIDSCAP project did not indicate any financial management or reporting problems. OIG reviews of these audits found that they were performed in accordance with the Single Audit Act’s requirements. In 1994, the OIG conducted an audit primarily focused on salaries, fringe benefits, and travel, based on specific allegations regarding these matters. As a result of this review, the OIG questioned 11 percent of the $14.6 million of expenditures examined. Following negotiations, the PVO repaid $540,000 to USAID. OIG officials said that there were no indications, from either this review or the single audits, that systemic problems existed. 
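A minimal sketch of the funds-versus-progress comparison that project managers described follows; the data fields and the 15-percent tolerance are hypothetical illustrations of one way such a check could work, not USAID's actual review procedure.

    # Flag quarterly expenditure reports in which cumulative spending
    # runs well ahead of reported progress on project activities.
    # Field names and the tolerance threshold are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class QuarterlyReport:
        recipient: str
        funds_expended: float   # cumulative dollars spent to date
        total_award: float      # total value of the cooperative agreement
        pct_complete: float     # estimated share of activities completed (0-1)

    def flag_for_review(report, tolerance=0.15):
        """True when the share of funds spent exceeds reported progress
        by more than the tolerance."""
        spend_share = report.funds_expended / report.total_award
        return spend_share - report.pct_complete > tolerance

    report = QuarterlyReport("example PVO", funds_expended=600_000,
                             total_award=1_000_000, pct_complete=0.40)
    print(flag_for_review(report))  # True: spending is 20 points ahead of progress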
USAID has made important contributions in the fight against HIV/AIDS by helping to support the development and implementation of interventions that have been proven effective in the global fight against the disease. These interventions include information, education, and counseling; treatment of sexually transmitted diseases; and promotion of increased condom use through condom social marketing. At the country level, USAID implemented projects that increased awareness of the disease, reduced risky behaviors, and increased access to STD treatment and condoms. These actions have helped slow the spread of the disease in target groups. Evaluations of USAID's largest HIV/AIDS project, AIDSCAP, determined that its activities had successes in the countries where it operated. Our fieldwork and evaluations conducted for a number of other mission-led projects also showed important impacts.

USAID implements its programs at headquarters and in the field primarily through PVOs and NGOs. To manage these private partners, USAID has chosen almost without exception to use a funding arrangement called a cooperative agreement. Because they are similar to grant agreements, cooperative agreements allow USAID flexibility in adjusting their scope, and recipient organizations have the primary responsibility for financial management. USAID managers rely primarily on pre-award evaluations, reviews of quarterly expenditure reports, and annual audits for financial oversight of funding recipients. OIG officials said that there were no indications from audits conducted that systemic problems existed. USAID stated that it was pleased with the overall presentation and objectivity of the report.

UNAIDS has made limited progress toward achieving its goal of leading a broad-based, expanded worldwide response to the HIV/AIDS epidemic. Reasons for the limited progress include a lack of clarity in the mission and roles of cosponsor agencies in the field and a lack of staff accountability for theme group success. Cosponsor agency estimates of overall U.N. spending on HIV/AIDS show that resources have not increased with the creation of UNAIDS. In addition, while the UNAIDS Secretariat has made significant efforts at the international level to mobilize private sector support, Secretariat officials acknowledge that U.N. efforts at the local level have been limited. Data are not available to measure accurately UNAIDS' success in mobilizing an expanded response among donors or affected countries. In some countries, cosponsor agencies are just beginning to work together in theme groups. Finally, the UNAIDS Secretariat has not been very successful in providing technical assistance and other support to facilitate theme group activities and has only started to establish a framework to measure performance.

The U.N. Economic and Social Council, which created UNAIDS, stated that the success of the program was dependent on the provision of increased resources for HIV/AIDS activities by the cosponsor agencies. U.N. agency spending began to decrease under WHO's GPA, declining by 20.3 percent during the last 2 years of the program (1994-95). During the first 2 years after the creation of UNAIDS in 1996, cosponsor agencies estimate, the decline leveled off, with spending at about $332 million, a slight decline from the $337 million spent during the last 2 years of GPA. Funding for HIV/AIDS-related activities remained stable even though overall cosponsor agency spending increased by 6.5 percent during the same period.
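The spending comparisons above are simple period-over-period percentage changes. The short sketch below works through the arithmetic with the rounded biennial figures cited in the text; it is illustrative only.

    # Percentage change between spending periods, using the rounded
    # estimates cited in the text (illustrative only).
    def pct_change(old, new):
        return (new - old) / old * 100.0

    gpa_last_biennium = 337e6      # 1994-95, last 2 years of GPA
    unaids_first_biennium = 332e6  # 1996-97, first 2 years of UNAIDS

    print(f"{pct_change(gpa_last_biennium, unaids_first_biennium):+.1f}%")
    # About -1.5 percent: the "slight decline" noted above, compared with
    # the 20.3 percent drop during GPA's final 2 years and the 6.5 percent
    # rise in overall cosponsor agency spending over the same period.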
Data in figure 3.1 demonstrate differences among cosponsor agencies that underlie the overall U.N. expenditure estimates for HIV/AIDS. Two agencies, UNDP and UNFPA, increased spending on HIV/AIDS by $10.8 million and $5.4 million, respectively, and UNESCO began programming money for HIV/AIDS after the creation of UNAIDS. However, the World Bank and UNICEF decreased funding by $10.5 million and $3.5 million, respectively. Spending on HIV/AIDS also declined as a percentage of these agencies’ budgets. Finally, WHO—the agency that spearheaded U.N. efforts to fight the HIV/AIDS epidemic in the early 1990s, when about $140 million was added to its core budget every 2 years for HIV/AIDS activities—first began programming core funds for HIV/AIDS following the creation of UNAIDS, spending $16 million in 1996-97. U.N. agency officials gave several reasons for the lack of increased spending on HIV/AIDS programs. A WHO official said that because WHO no longer had additional funding for its HIV/AIDS efforts after GPA ended, 200 professionals who had been working on the program left or changed jobs, and the agency had to reorganize its staff and budget to undertake HIV/AIDS activities. According to cosponsor officials and the Secretariat, other agencies did not increase support for HIV/AIDS because of difficulties incorporating HIV/AIDS activities into programs in the midst of their 5-year planning cycles, lack of commitment to HIV/AIDS by affected governments, and lack of commitment to HIV/AIDS as a priority on the part of field representatives. Building worldwide support for HIV/AIDS was a key objective of UNAIDS. The U.N. Secretary General noted that in order to achieve an expanded response, governments of countries most affected by the epidemic would have to increase resources for HIV/AIDS. Officials from the UNAIDS Secretariat also noted the importance of increasing the financial support of donor countries. However, the Secretariat is not yet in a position to measure progress because it does not yet have baseline data on spending for HIV/AIDS at the country level; it has only recently developed baseline data for donors. The UNAIDS Secretariat is in the process of analyzing survey data to develop estimates of spending on HIV/AIDS by affected countries. Secretariat officials said that the data would be available in the fall of 1998. While half of the theme groups surveyed by UNAIDS reported that in 1997 they had mobilized resources at the country level, they noted that the large majority of these resources came from U.N. agencies. U.N. officials told us that the lack of data on the impact of HIV/AIDS, measured in the number of deaths and illnesses, made it difficult to persuade developing countries to divert limited national resources from other important health problems. In many developing countries, the numbers of deaths and the costs of caring for HIV/AIDS patients are not identifiable because records indicate only secondary causes of illness or death, such as pneumonia, rather than HIV/AIDS infection. Preliminary data from the Secretariat’s most recent survey indicate that contributions by major donors remained relatively stable between 1993 and 1996, at approximately $250 million a year. However, data are not available for 1997. Thus, it is not possible to determine whether UNAIDS’ first year’s efforts have led to increased spending by donors. 
A USAID official told us that Secretariat officials made regular visits to executive and parliamentary branches of governments in donor countries, including the United States, in an attempt to keep the spotlight on HIV/AIDS issues and avert “donor fatigue.” The UNAIDS Secretariat and cosponsor agencies were expected to mobilize the private sector as part of the comprehensive global response to the HIV/AIDS epidemic. Despite this objective, efforts have been limited at the country level, and overall results are not clear. UNAIDS officials reported that they have made efforts to encourage support for HIV/AIDS activities in the international community. However, at the country level, cosponsor agencies had solicited private involvement in only one country we visited. Moreover, UNAIDS lacks data to determine whether the level of resources devoted to HIV/AIDS by the private sector has increased or decreased. Secretariat officials told us that they believe the level of private sector resources dedicated to HIV/AIDS activities has remained limited. In the international community, the UNAIDS Secretariat has encouraged private sector support through advocacy efforts with leading corporate organizations, such as The Conference Board and Rotary International, and with individual companies. For example, the Secretariat organized a 1997 World Economic Forum plenary session in which South African President Nelson Mandela gave the keynote address to the world’s business leaders, calling for a public/private partnership to fight HIV/AIDS. The Secretariat also organized a Public/Private Sector Partnership Strategy Meeting on International HIV/AIDS in London, England, in November 1996 and is working to establish a Global Business Council to organize businesses to serve as advocates in their industries and regions. As a result of its efforts, the UNAIDS Secretariat has had some successes, particularly in advocating research on and distribution of medical interventions appropriate for the developing world. According to a senior USAID official, the UNAIDS Secretariat and WHO should be credited with encouraging pharmaceutical companies to continue and increase their efforts to develop affordable HIV/AIDS vaccines. Glaxo Wellcome, a major pharmaceutical company, recently announced that it would provide zidovudine (AZT), a viral inhibitor, to pregnant, HIV-positive women in developing countries at a substantially reduced price. In addition, for more than 2 years, the UNAIDS Secretariat has been coordinating international research on mother-to-child transmission and addressing ways to implement clinical trials with the private sector, international agencies, and donor countries. USAID also credits the Secretariat with working with the private sector to increase the availability and affordability of the female condom. However, according to a report produced by the UNAIDS Secretariat and the Prince of Wales’ Business Leaders’ Forum, the corporate response to HIV/AIDS has generally been limited and largely defensive. With few exceptions, the business community around the world has not sought a leadership role in confronting the epidemic. Among the reasons for this lack of involvement are
• inadequate information on the disease and understanding of how it affects their companies,
• unease about association with a controversial issue,
• lack of encouragement by the public sector, and
• competition for resources for HIV/AIDS with other good causes. 
Unlike the Secretariat’s efforts with the international business community, in-country efforts by the cosponsor agencies to encourage private sector involvement in HIV/AIDS activities have been very limited. We saw examples of private, in-country activities indicating that companies could play an important role in the U.N.’s efforts to reduce the spread of HIV/AIDS. For example, the theme group in India solicited free air time from an Indian television network and worked jointly with it to develop a media campaign involving national artists in on-air promotions and public events. We saw other privately sponsored activities, such as companies in Honduras allowing government or NGO-sponsored HIV/AIDS prevention and control activities to take place within their places of business. Another example was in the Philippines, where a manufacturing company provided direct financial support for prevention activities. None of these was initiated by U.N. agencies. Several U.N. agency officials said that the reason for the lack of focus on private involvement in HIV/AIDS activities was that U.N. agencies did not generally work with the private sector. Their contacts in the field are almost exclusively with government ministries. Officials added that because the United Nations is not accustomed to working with private partners, guidance on best practices in this area would be useful. The UNAIDS Secretariat was expected to organize theme groups as the coordinating entity for U.N. activities in the field, and U.N. cosponsor agencies agreed to work together to ensure a unified response to HIV/AIDS. Their ultimate objective was to support host countries’ national HIV/AIDS programs. To operate effectively, agency representatives were expected to meet regularly to discuss opportunities for joint programming and assistance to the host country. We found such an example in the Dominican Republic, where agencies met regularly and even conducted joint programming. However, Secretariat officials acknowledged that as of 1997 most theme groups were not working effectively and that they had underestimated the difficulty of getting U.N. agencies to coordinate and conduct joint programming. For example, in two of the five countries we visited—Honduras and India—we found poorly functioning theme groups that rarely met. Preliminary results from a 1997 survey of theme groups conducted by the UNAIDS Secretariat, based on data received as of April 30, 1998, indicated that theme groups had made some progress in cosponsor coordination since the Secretariat’s 1996 survey, particularly in the areas of advocacy and resource mobilization. However, of the theme groups that responded to the 1997 survey, fewer than 50 percent were judged effective in those areas. In addition, while respondents said that the level of U.N. coordination at the country level had improved over the last year, only 28 percent rated it strong or better. Overall, fewer than half of the theme groups had undertaken efforts in 7 out of 10 of the key outputs measured. Several factors have hindered theme group operations, including the following:
• Cosponsor agencies and the UNAIDS Secretariat did not provide guidance to staff in the field regarding how theme groups should operate and the scope of their mission.
• Cosponsor agencies did not hold their staff accountable for theme group success, and UNAIDS Secretariat staff lacked authority to require participation. 
• Concerns about the concept of a joint program and theme group operations led to a lack of commitment to working together on the part of some agency representatives.
According to cosponsor agency officials, neither the Secretariat nor the cosponsors issued timely guidance to theme group participants about how to operate or about their roles and responsibilities within the theme groups. In a 1996 survey of theme group operations conducted by the UNAIDS Secretariat, U.N. officials in the field cited the lack of understanding about the roles of each agency at the country level and the lack of support from cosponsor agencies and the Secretariat as major obstacles to progress. Acknowledging these problems, the Secretariat provided operational guidelines to theme groups early in 1998. Job expectations for U.N. cosponsor representatives in the field did not include participation in the theme groups. Field staff with whom we met said that their annual personnel assessments did not mention participation in UNAIDS activities. The career, promotion, and reward paths for U.N. officials run through their parent organizations, and their work on UNAIDS activities was considered an adjunct to their regular duties. Typical of the responses we heard was that of a U.N. cosponsor agency official in Honduras who described UNAIDS work as “an add-on, an additional function outside of regular work responsibilities.” Secretariat representatives who were responsible for organizing theme groups and encouraging joint participation did not have the authority to require participation. Despite agreements by cosponsor agencies to support and work collaboratively in the theme groups, according to senior U.N. officials, concerns held by some senior agency officials about the concept of a joint program contributed to their lack of commitment to working together. Such concerns were reflected in a 1997 USAID survey of 31 of its overseas missions that addressed problems faced by U.N. agencies in planning and implementing their HIV/AIDS activities. Respondents cited uneven U.N. agency commitment to HIV/AIDS-related endeavors and the lack of coordination among U.N. agencies. In particular, some officials from the World Bank and WHO said that they questioned the role of UNAIDS as the organizing vehicle for the U.N. response. One WHO representative in the field said that because he works directly with the host government, he views UNAIDS as irrelevant. In addition, a World Bank official said he did not see the usefulness or relevance of coordinating or integrating the Bank’s activities with other cosponsor agencies, noting that U.N. agencies were already doing all they could to address HIV/AIDS. The World Bank’s lack of commitment to the theme groups and UNAIDS was evident in a number of our case study countries where the World Bank had programs. Though the country representative of each U.N. cosponsor agency is automatically a member of the theme group and is expected to participate in its activities, in three of the five countries we visited, the World Bank representative never attended a theme group meeting, according to other cosponsor agency officials. In two of those countries, however, a lower-level staff member attended a few theme group meetings. UNAIDS Secretariat officials said they recognized these problems and met with cosponsor agencies in March 1998 to address interagency cooperation and develop strategies to improve theme group coordination. 
One key role for the UNAIDS Secretariat was to provide technical assistance to theme groups to facilitate cosponsor agency efforts. The two Secretariat departments responsible for providing technical support and disseminating best practices accounted for 80 percent of the Secretariat’s budget. However, during our site visits, we found few U.N. agencies utilizing the Secretariat’s technical support, and some agency officials were unaware of the services or technical assistance that were available. For example, cosponsor agencies in the Philippines stated that best practice information is useful for introducing an idea to the government but not particularly helpful in defining how to implement it. The UNDP representative said it would be useful to obtain information on how to incorporate HIV/AIDS prevention into its good governance projects. A UNAIDS official acknowledged that the Secretariat had poorly marketed available support and noted that its fixed menu of technical support was not always relevant or flexible enough to meet a country’s specific needs. In addition, because of the limited number of experts on the UNAIDS staff, he noted that the Secretariat should have made more of an effort to mobilize regional resources to provide technical assistance. Secretariat officials indicated that substantial investments in this area will be needed in the future. Another key role of the UNAIDS Secretariat was to identify, develop, and function as a major source of information on best practices; that is, to identify and disseminate information about HIV/AIDS prevention policies and strategies and to promote research to develop new tools to address HIV/AIDS. According to cosponsor agency officials we interviewed, best practices information from the Secretariat was disseminated and read, but it was too general to be of practical use and lacked “how-to” guidance. For example, according to a USAID official familiar with material on best practices produced by the Secretariat, the information provided a good summary and starting point for discussion of a particular issue, such as how to deal with AIDS in prisons. However, he noted that practitioners in the field, who are generally well informed, needed practical guidance on how to carry out specific projects. According to a Secretariat official, the focus was on producing the most up-to-date, comprehensive document on a particular issue rather than on tailoring best practices to meet the needs of officials in the field. He added that the department responsible for best practices needed to begin by improving its knowledge of customers’ needs so that it could make itself more relevant. According to Secretariat officials, steps are under way to address these deficiencies. For example, the Secretariat has reorganized the support departments and instituted management changes. Additionally, USAID stated that, along with other bilateral donors, it is helping to establish a network of technical resources that Secretariat and cosponsor in-country staff can use to enhance the design and implementation of national HIV/AIDS programs. However, it is too early to evaluate the impact of these efforts. The Secretariat was directed by its governing board to coordinate the development of performance-based programming and measurable objectives. As an international organization, the United Nations is not required to comply with the U.S. Results Act. 
However, the act sets forth the characteristics of a performance-based system, requiring (1) the statement of a clearly defined mission; (2) the establishment of long-term strategic goals, as well as annual goals linked to them; (3) the measurement of performance against the goals; and (4) the public reporting of how well the agency is doing. Developing performance indicators would help make the Secretariat and the cosponsor agencies accountable for their performance, gauge progress toward meeting objectives, promote UNAIDS activities with host governments, and generate the information decisionmakers need in considering ways to improve performance. However, the Secretariat has been slow to create and implement an evaluation framework that employs performance indicators. Despite being instructed to start efforts immediately, it did not begin staffing an evaluation unit until September 1997. According to the Secretariat’s evaluation officer, the goal is to field-test a performance-based evaluation system in 20 to 30 countries by the end of 1998. Secretariat officials attribute the slow start in developing performance indicators to the rush to get UNAIDS up and running programmatically and country-level activities under way. Results from the theme group survey covering 1997 activities showed that, of the theme groups that had developed an integrated U.N. work plan, only 22 percent had developed indicators to measure progress, and only 13 percent had assessed their performance using the indicators. USAID officials noted that the lack of a credible monitoring and evaluation plan by the UNAIDS Secretariat is a significant weakness. Officials added that at the May 1998 meeting of UNAIDS’ governing board, a Monitoring and Evaluation Technical Review Group was created. This group is expected to develop a plan for approval by the board at its next meeting, scheduled for December 1998. Although we did not conduct an evaluation of individual cosponsor HIV/AIDS activities, we observed innovative cosponsor activities in each of our case study countries. U.N. agencies relied on proven control and prevention activities such as condom education and promotion, information and behavioral change communication, and treatment of STDs. In addition, the activities were targeted to high-risk groups (such as commercial sex workers and truckers), individuals who engage in high-risk activity (clients of commercial sex workers, men who have sex with men, and intravenous drug users), and those considered particularly vulnerable (women and youths). Moreover, the activities we observed were generally inexpensive, ranging from $200 to several thousand dollars. In addition, in an effort to increase sustainability, the activities were often managed by host country officials and implemented by locally recruited activists. While many developing countries remain dependent on external donor support to finance HIV/AIDS activities, a cadre of trained and experienced HIV/AIDS activists existed in all the countries. Particularly noteworthy was the use of peer educators—such as commercial sex workers and intravenous drug users—who are able to reach and communicate effectively with at-risk populations who normally fall outside the reach of government-sponsored public health programs. 
Examples of intervention activities we observed in our case study countries include the following:
• In the Dominican Republic, an adolescent peer educator training session and a prison AIDS awareness workshop were funded by joint contributions from all the theme group members.
• In Honduras, a street theater organization conducted HIV/AIDS awareness skits at schools and festivals and during half-time at professional and amateur soccer matches (see fig. 3.2).
• In the Philippines, commercial sex worker and men-who-have-sex-with-men peer educators provided counseling, information packets, and condoms in brothels and locales frequented by individuals who engage in high-risk behavior.
• In India, the first HIV testing center in New Delhi was developed, providing free voluntary testing; counseling services; dissemination of information about HIV/AIDS, STDs, and condom use; support and care services for HIV-positive clients; and advocacy and sensitization about the rights and needs of HIV-positive individuals.
• In Zambia, a pilot project for home-based care mobilized community groups to deal with the consequences of the HIV/AIDS epidemic, including (1) educating the community about HIV/AIDS; (2) caring for orphans, the chronically ill, and the dying; and (3) developing income-generating projects for women, orphans, and people living with AIDS.
UNAIDS has made limited progress toward achieving its goal of leading a broad-based, expanded, worldwide response to the HIV/AIDS epidemic. Cosponsor agency estimates of overall U.N. spending on HIV/AIDS show that resources have not increased with the creation of UNAIDS, as was expected. Agency spending on HIV/AIDS began declining before the creation of UNAIDS in 1996 and since then has leveled off, despite an increase in overall cosponsor agency spending of 6.5 percent. Building worldwide support for HIV/AIDS was a key objective of UNAIDS. However, the UNAIDS Secretariat is not able to measure progress in meeting this goal because it does not yet have baseline data on spending on HIV/AIDS at the country level and has only recently developed baseline data for contributions by donor countries. Secretariat officials said that spending estimates for affected countries should be available in the fall of 1998. In addition, the Secretariat lacks data to determine whether the level of private sector resources directed to HIV/AIDS has increased or decreased. While the UNAIDS Secretariat has made significant efforts at the international level to mobilize private sector support, we found that U.N. efforts at the local level were very limited in the countries we visited. Secretariat officials acknowledged that as of 1997, most theme groups were not working effectively and that they had underestimated the difficulty of getting U.N. agencies to coordinate and conduct joint programming. For example, in two of the five countries we visited, we found poorly functioning theme groups that rarely met. Factors that hindered theme group operations included insufficient guidance to staff in the field regarding how theme groups should operate, a failure to hold staff accountable for theme group success, and U.N. agency staff’s lack of commitment to working together. In addition, the UNAIDS Secretariat has not been very successful in providing technical assistance and other support to facilitate theme group activities and has only started to establish performance measures. Despite UNAIDS’ difficulties, we observed innovative and low-cost U.N. 
agency intervention projects in each of our case study countries. Comments from USAID, the Department of State, and the UNAIDS Secretariat generally focused on concerns about our review of UNAIDS. USAID stated that it shares our concerns about areas in which UNAIDS has not made sufficient progress. However, USAID expressed its strong endorsement and support for the program and the unique role UNAIDS plays in the global response to HIV/AIDS. USAID also pointed to the difficulty of UNAIDS’ mandate and UNAIDS’ relatively short existence (2 years at the time of our review). USAID stated that progress had been made in some areas since our review. For example, USAID noted that at a recent meeting, a Monitoring and Evaluation Technical Review Group was created to develop a monitoring and evaluation plan targeted for December 1998. UNAIDS Secretariat officials agreed with our conclusion that U.N. expenditures for HIV/AIDS did not substantially increase after the creation of UNAIDS. However, the Secretariat questioned the quality of the financial data reported by the cosponsor agencies because agencies have difficulty estimating expenditures and use different methods of reporting. The Secretariat stated that relying on financial expenditures alone masks the increased commitment of human resources to HIV/AIDS by cosponsor agencies in many countries. The Secretariat also stated that its progress toward mobilizing the private sector, coordinating efforts at the country level, providing support to theme groups, and developing a framework for measuring the progress of the U.N. effort on HIV/AIDS was reasonable given the challenges it faced and the short time since the creation of UNAIDS. The Secretariat provided updated information on activities undertaken after we completed our fieldwork. Our conclusion about the decline in U.N. spending on HIV/AIDS is based on data reported by the respective cosponsor agencies. We recognize that agencies use different methods to report expenditures and that it is difficult to estimate expenditures, particularly when HIV/AIDS expenditures are integrated into spending for other activities. However, because each agency has reported the data in a consistent manner over time, we believe that the data are useful for identifying trends. We also agree with the Secretariat that adding other measures of the U.N. effort, such as human resources, would be useful. However, the Secretariat does not currently have an evaluation and monitoring system to measure nonfinancial contributions to HIV/AIDS. We did not make a judgment about whether cosponsor agencies should have made more progress toward mobilizing the private sector. The concern we raised in the report was less about the level of private involvement than about the fact that cosponsor agencies in all but one of the countries we visited were not making efforts to involve the private sector. We acknowledge in the report that theme groups have made some progress since the Secretariat’s 1996 survey; that survey was conducted the same year that most theme groups were established, so some progress would be expected. However, the 1997 survey indicated that half or fewer of the theme groups had undertaken efforts in 7 out of 10 of the key outputs measured. We also note that, despite being instructed by its governing board to immediately begin developing an evaluation and monitoring plan, the Secretariat did not hire staff to develop the plan until a year and a half after UNAIDS was established. 
USAID, State, and the UNAIDS Secretariat also noted that UNAIDS has only been in existence for 2-1/2 years and were concerned that it may have been too early to assess the program. State also said it was disappointed at the very negative tone in the report concerning UNAIDS’ activities and believed that the report did not give any credit to UNAIDS for what it had achieved. Furthermore, State said that we implied that U.N. agencies and the U.S. government should stop supporting UNAIDS. While we recognize that UNAIDS has been in existence for only 2-1/2 years, we did not evaluate the program’s impact on the HIV/AIDS epidemic. In our report, we present the facts as we found them to be, including areas needing improvement and areas that have worked well. In fact, the report specifically identifies UNAIDS’ accomplishments, including information on innovative grassroots interventions. Also, we did not evaluate whether support for UNAIDS should be continued. Our objective, as stated in the report, was to examine the program’s progress, since its inception, in meeting established objectives such as increasing resources devoted to HIV/AIDS and working together in theme groups at the country level.
Pursuant to a congressional request, GAO reviewed the human immunodeficiency virus/acquired immunodeficiency syndrome (HIV/AIDS) prevention activities of the Agency for International Development (AID) and the United Nations' (U.N.) Joint Program on HIV/AIDS (UNAIDS), focusing on the: (1) contributions AID has made to the global effort to prevent AIDS and the methods AID uses to provide financial oversight over its AIDS prevention activities; and (2) extent to which the United Nations has met its goal of leading an expanded and broad-based, worldwide response to the HIV/AIDS epidemic. GAO noted that: (1) AID has made important contributions to the fight against HIV/AIDS; (2) AID-supported research helped to identify interventions proven to curb the spread of HIV/AIDS that have become the basic tools for the international response to the epidemic; (3) applying these interventions, AID projects have increased awareness of the disease; changed risky behaviors; and increased access to treatment of sexually transmitted diseases and to condoms, which have helped slow the spread of the disease in the target groups; (4) under the terms of cooperative agreements with private implementing organizations, AID managers are expected to closely monitor projects, but the major responsibility for internal financial management and control rests with recipient organizations; (5) AID's financial oversight primarily consists of conducting preaward evaluations of prospective funding recipients, reviewing quarterly expenditure reports, and requiring audits; (6) officials from AID's Office of Inspector General said that there were no indications of systemic problems from audits conducted; (7) in its first 2 years of operation, the U.N. has made limited progress in achieving its goal of leading a broad-based, expanded global effort against HIV/AIDS; (8) while data indicate that spending by the cosponsors has not increased, data are not yet available to measure the U.N.'s progress in increasing spending by donor countries, the private sector, or affected countries; (9) moreover, theme groups, the forum for coordinating U.N. efforts in the field, have had a difficult start and, in some countries, cosponsor agencies are just beginning to work together; (10) the UNAIDS Secretariat has not been successful in providing technical assistance and other support to facilitate theme group activities and performance measures for the U.N.'s HIV/AIDS programs; and (11) despite the U.N.'s limited progress in meeting its objectives, GAO observed innovative and low-cost activities that were implemented by cosponsor agencies.
According to FPS officials, the agency has required its guards to receive training on how to respond to an active-shooter scenario since 2010. However, as our 2013 report shows, FPS faces challenges providing active-shooter response training to all of its guards. We were unable to determine the extent to which FPS’s guards have received active-shooter response training, in part, because FPS lacks a comprehensive and reliable system for guard oversight (as discussed below). When we asked officials from 16 of the 31 contract guard companies we contacted whether their guards had received training on how to respond during active-shooter incidents, responses varied. For example, of the 16 contract guard companies we interviewed about this topic: officials from eight guard companies stated that their guards had received active-shooter scenario training during FPS orientation; officials from five guard companies stated that FPS had not provided active-shooter scenario training to their guards during the FPS-provided orientation training; and officials from three guard companies stated that FPS had not provided active-shooter scenario training to their guards during the FPS-provided orientation training, but that the topic was covered at some other time. Without ensuring that all guards receive training on how to respond to active-shooter incidents, FPS has limited assurance that its guards are prepared for this threat. According to FPS officials, the agency provides guards with information on how they should respond during an active-shooter incident as part of the 8-hour FPS-provided orientation training. FPS officials were not able to specify how much time is devoted to this topic but said that it is a small portion of the 2-hour special situations training. According to FPS’s training documents, this training includes instructions on how to notify law enforcement personnel, secure the guard’s area of responsibility, and direct building occupants according to emergency plans, as well as the appropriate use of force. As part of their 120 hours of FPS-required training, guards must receive 8 hours of screener training from FPS on how to use x-ray and magnetometer equipment. However, in our September 2013 report, we found that FPS has not provided required screener training to all guards. Screener training is important because many guards control access points at federal facilities and thus must be able to properly operate x-ray and magnetometer machines and understand their results. In 2009 and 2010, we reported that FPS had not provided screener training to 1,500 contract guards in one FPS region. In response to those reports, FPS stated that it planned to implement a program to train its inspectors to provide screener training to all its contract guards by September 2015. Information from guard companies we contacted indicates that guards who have never received this screener training continue to be deployed to federal facilities. An official at one contract guard company stated that 133 of its approximately 350 guards (about 38 percent) on three separate FPS contracts (awarded in 2009) have never received their initial x-ray and magnetometer training from FPS. The official stated that some of these guards are working at screening posts. Officials at another contract guard company in a different FPS region stated that, according to their records, 78 of 295 guards (about 26 percent) deployed under their contract have never received FPS’s x-ray and magnetometer training. 
These officials stated that FPS’s regional officials were informed of the problem but allowed guards to continue to work under this contract, despite their not having completed required training. Because FPS is responsible for this training, according to guard company officials, no action was taken against the company. Consequently, some guards deployed to federal facilities may be using x-ray and magnetometer equipment that they are not qualified to use, raising questions about the ability of some guards to execute a primary responsibility: properly screening access control points at federal facilities. In our September 2013 report, we found that FPS continues to lack effective management controls to ensure that guards have met training and certification requirements. For example, although FPS agreed with our 2012 recommendations to develop a comprehensive and reliable system to oversee contract guards, it still has not established such a system. Without a comprehensive guard management system, FPS has no independent means of ensuring that its contract guard companies have met contract requirements, such as providing qualified guards to federal facilities. Instead, FPS requires its guard companies to maintain files containing guard-training and certification information. The companies are then required to provide FPS with this information each month. In our September 2013 report, we found that 23 percent of the 276 guard files we reviewed (maintained by 11 of the 31 guard companies we interviewed) lacked required training and certification documentation. As shown in table 1, some guard files lacked documentation of basic training, semi-annual firearms qualifications, screener training, the 40-hour refresher training (required every 3 years), and CPR certification. FPS has also identified guard files that did not contain required documentation. FPS’s primary tool for ensuring that guard companies comply with contractual requirements for guards’ training, certifications, and qualifications is to review guard companies’ guard files each month. From March 2012 through March 2013, FPS reviewed more than 23,000 guard files. It found that a majority of the guard files had the required documentation but that more than 800 (about 3 percent) did not. FPS’s file reviews for that period showed files missing, for example, documentation for screener training, initial weapons training, CPR certification, and firearms qualifications. As our September 2013 report explains, however, FPS’s process for conducting monthly file reviews does not include requirements for reviewing and verifying the results, and we identified instances in which FPS’s monthly review results did not accurately reflect the contents of guard files. For instance, FPS’s review indicated that required documentation was present in some guard files, but in some of those files we were unable to find documentation of training and certification, such as initial weapons training, DHS orientation, and pre-employment drug screenings. As a result of this lack of management controls, FPS is not able to provide reasonable assurance that guards have met training and certification requirements. We found in 2012 that FPS did not assess risks at the 9,600 facilities under the control and custody of GSA in a manner consistent with federal standards, although federal agencies paid FPS millions of dollars to assess risk at their facilities. 
Our March 2014 report examining risk assessments at federal facilities found that this is still a challenge for FPS and several other federal agencies. Federal standards such as the National Infrastructure Protection Plan’s (NIPP) risk management framework and ISC’s RMP call for a risk assessment to include threat, vulnerability, and consequence assessments. Risk assessments help decision-makers identify and evaluate security risks and implement protective measures to mitigate them. Moreover, risk assessments play a critical role in helping agencies tailor protective measures to reflect their facilities’ unique circumstances and enable them to allocate security resources effectively. Instead of conducting risk assessments, FPS uses an interim vulnerability assessment tool, referred to as the Modified Infrastructure Survey Tool (MIST), with which it assesses federal facilities until it develops a longer-term solution. According to FPS, MIST allows it to resume assessing federal facilities’ vulnerabilities and recommending countermeasures—something FPS has not done consistently for several years. MIST has some limitations, however. Most notably, it does not assess consequence (the level, duration, and nature of potential loss resulting from an undesirable event). Three of the four risk assessment experts we spoke with generally agreed that a tool that does not estimate consequences does not allow an agency to fully assess risks. FPS officials stated that the agency intends to eventually incorporate consequence into its risk assessment methodology and is exploring ways to do so. MIST was also not designed to compare risks across federal facilities. Consequently, FPS does not have the ability to comprehensively manage risk across its portfolio of 9,600 facilities and recommend countermeasures to federal tenant agencies. As of April 2014, according to an FPS official, FPS had used MIST to complete vulnerability assessments of approximately 1,200 federal facilities in fiscal year 2014 and had presented approximately 985 of them to the facility security committees. The remaining 215 assessments were under review by FPS. FPS has begun several initiatives that, once fully implemented, should enhance its ability to protect the more than 1 million federal employees and members of the public who visit federal facilities each year. Since fiscal year 2010, we have made 31 recommendations to help FPS address its challenges with risk management, oversight of its contract guard workforce, and its fee-based funding structure. DHS and FPS have generally agreed with these recommendations. As of May 2014, as shown in table 2, FPS had implemented 6 recommendations and was in the process of addressing 10 others, although none of the 10 have been fully implemented. The remaining 15 have not been implemented. According to FPS officials, the agency has faced difficulty in implementing many of our recommendations because of changes in its leadership, organization, funding, and staffing levels. For further information on this testimony, please contact Mark Goldstein at (202) 512-2834 or by email at GoldsteinM@gao.gov. Individuals making key contributions to this testimony include Tammy Conquest, Assistant Director; Geoff Hamilton; Jennifer DuBord; and SaraAnn Moessbauer.
Federal Facility Security: Additional Actions Needed to Help Agencies Comply with Risk Assessment Methodology Standards. GAO-14-86. Washington, D.C.: March 5, 2014. 
Homeland Security: Federal Protective Service Continues to Face Challenges with Contract Guards and Risk Assessments at Federal Facilities. GAO-14-235T. Washington, D.C.: December 17, 2013.
Homeland Security: Challenges Associated with Federal Protective Service’s Contract Guards and Risk Assessments at Federal Facilities. GAO-14-128T. Washington, D.C.: October 30, 2013.
Federal Protective Service: Challenges with Oversight of Contract Guard Program Still Exist, and Additional Management Controls Are Needed. GAO-13-694. Washington, D.C.: September 17, 2013.
Facility Security: Greater Outreach by DHS on Standards and Management Practices Could Benefit Federal Agencies. GAO-13-222. Washington, D.C.: January 24, 2013.
Federal Protective Service: Actions Needed to Assess Risk and Better Manage Contract Guards at Federal Facilities. GAO-12-739. Washington, D.C.: August 10, 2012.
Federal Protective Service: Actions Needed to Resolve Delays and Inadequate Oversight Issues with FPS’s Risk Assessment and Management Program. GAO-11-705R. Washington, D.C.: July 15, 2011.
Federal Protective Service: Progress Made but Improved Schedule and Cost Estimate Needed to Complete Transition. GAO-11-554. Washington, D.C.: July 15, 2011.
Homeland Security: Protecting Federal Facilities Remains a Challenge for the Department of Homeland Security’s Federal Protective Service. GAO-11-813T. Washington, D.C.: July 13, 2011.
Federal Facility Security: Staffing Approaches Used by Selected Agencies. GAO-11-601. Washington, D.C.: June 30, 2011.
Budget Issues: Better Fee Design Would Improve Federal Protective Service’s and Federal Agencies’ Planning and Budgeting for Security. GAO-11-492. Washington, D.C.: May 20, 2011.
Homeland Security: Addressing Weaknesses with Facility Security Committees Would Enhance Protection of Federal Facilities. GAO-10-901. Washington, D.C.: August 5, 2010.
Homeland Security: Preliminary Observations on the Federal Protective Service’s Workforce Analysis and Planning Efforts. GAO-10-802R. Washington, D.C.: June 14, 2010.
Homeland Security: Federal Protective Service’s Use of Contract Guards Requires Reassessment and More Oversight. GAO-10-614T. Washington, D.C.: April 14, 2010.
Homeland Security: Federal Protective Service’s Contract Guard Program Requires More Oversight and Reassessment of Use of Contract Guards. GAO-10-341. Washington, D.C.: April 13, 2010.
Homeland Security: Ongoing Challenges Impact the Federal Protective Service’s Ability to Protect Federal Facilities. GAO-10-506T. Washington, D.C.: March 16, 2010.
Homeland Security: Greater Attention to Key Practices Would Improve the Federal Protective Service’s Approach to Facility Protection. GAO-10-142. Washington, D.C.: October 23, 2009.
Homeland Security: Preliminary Results Show Federal Protective Service’s Ability to Protect Federal Facilities Is Hampered by Weaknesses in Its Contract Security Guard Program. GAO-09-859T. Washington, D.C.: July 8, 2009.
This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Recent incidents at federal facilities demonstrate their continued vulnerability to attacks or other acts of violence. As part of the Department of Homeland Security (DHS), FPS is responsible for protecting federal employees and visitors in approximately 9,600 federal facilities under the control and custody of the General Services Administration (GSA). To help accomplish its mission, FPS conducts facility security assessments and has approximately 13,500 contract security guards deployed to federal facilities. FPS charges fees for its security services to federal tenant agencies. This testimony discusses challenges FPS faces in (1) ensuring contract security guards deployed to federal facilities are properly trained and certified and (2) conducting risk assessments at federal facilities. It is based on GAO reports issued from 2009 through 2014 on FPS's contract guard and risk assessment programs. To perform this work, GAO reviewed FPS and guard company data and interviewed officials about oversight of guards. GAO compared FPS's and eight federal agencies' risk assessment methodologies to ISC standards that federal agencies must use. GAO selected these agencies based on their missions and types of facilities. GAO also interviewed agency officials and four risk management experts about risk assessments. The Federal Protective Service continues to face challenges ensuring that contract guards have been properly trained and certified before being deployed to federal facilities around the country. In September 2013, for example, GAO reported that providing training for active-shooter scenarios and screening access to federal facilities poses a challenge for FPS. According to officials at five guard companies, their contract guards have not received training on how to respond during incidents involving an active shooter. Without ensuring that all guards receive training on how to respond to active-shooter incidents at federal facilities, FPS has limited assurance that its guards are prepared for this threat. Similarly, an official from one of FPS's contract guard companies stated that 133 (about 38 percent) of its approximately 350 guards have never received screener training. As a result, guards deployed to federal facilities may be using x-ray and magnetometer equipment that they are not qualified to use, raising questions about their ability to fulfill a primary responsibility of screening access control points at federal facilities. GAO was unable to determine the extent to which FPS's guards have received active-shooter response and screener training, in part, because FPS lacks a comprehensive and reliable system for guard oversight. GAO also found that FPS continues to lack effective management controls to ensure its guards have met its training and certification requirements. For instance, although FPS agreed with GAO's 2012 recommendations that it develop a comprehensive and reliable system for managing information on guards' training, certifications, and qualifications, it still does not have such a system. Additionally, 23 percent of the 276 contract guard files GAO reviewed did not have required training and certification documentation. For example, some files were missing items such as documentation of screener training, CPR certifications, and firearms qualifications. Assessing risk at federal facilities remains a challenge for FPS. 
GAO found in 2012 that federal agencies pay FPS millions of dollars to assess risk at their facilities, but FPS is not assessing risks in a manner consistent with federal standards. In March 2014, GAO found that this is still a challenge for FPS and several other agencies. The Interagency Security Committee's (ISC) Risk Management Process for Federal Facilities standard requires federal agencies to develop risk assessment methodologies that, among other things, assess threat, vulnerability, and the consequences of undesirable events. Risk assessments help decision-makers identify and evaluate security risks and implement protective measures. Instead of conducting risk assessments, FPS uses an interim vulnerability assessment tool, referred to as the Modified Infrastructure Survey Tool (MIST), to assess federal facilities until it develops a longer-term solution. However, MIST does not assess consequence (the level, duration, and nature of potential loss resulting from an undesirable event). Three of the four risk assessment experts GAO spoke with generally agreed that a tool that does not estimate consequences does not allow an agency to fully assess risks. Thus, FPS has limited knowledge of the risks facing the approximately 9,600 federal facilities around the country. FPS officials stated that consequence information was not part of MIST's original design, but they are exploring ways to incorporate it. Since fiscal year 2010, GAO has made 31 recommendations to improve FPS's contract guard and risk assessment processes, of which 6 have been implemented, 10 are in process, and 15 have not been implemented.
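The role consequence plays in this kind of assessment can be illustrated with a small calculation. The sketch below, in Python, assumes the multiplicative threat-vulnerability-consequence formulation commonly associated with frameworks such as the NIPP; the scores are illustrative placeholders rather than FPS or ISC data, and the function name is ours.

    # Minimal sketch of a threat x vulnerability x consequence risk score.
    # All values are illustrative placeholders, not FPS or ISC data.
    def risk_score(threat, vulnerability, consequence=None):
        """Composite risk score; None when consequence was never assessed."""
        if consequence is None:
            # A tool that omits consequence, as MIST does, cannot complete
            # the calculation for a facility.
            return None
        return threat * vulnerability * consequence

    print(risk_score(0.6, 0.4))        # None: risk cannot be fully assessed
    print(risk_score(0.6, 0.4, 0.8))   # 0.192 once consequence is estimated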
E/M office visits are frequently performed services during which a physician or other provider assesses a patient’s health and begins managing his or her care. These services are predominantly performed in two settings—physician offices and HOPDs. Medicare FFS paid for approximately 250 million E/M office visits in 2013. Under Medicare’s payment policy, Medicare’s total payment rate is higher when an E/M office visit is provided in an HOPD rather than in a physician office. When the service is provided in a physician office, Medicare makes a single payment to the physician at Medicare’s physician fee schedule non-facility rate. When the service is provided in an HOPD, Medicare makes two payments—one payment at the physician fee schedule facility rate and another payment to the hospital, typically at the hospital outpatient prospective payment system (OPPS) rate. The total of these two payment rates is higher than Medicare’s total payment rate when the service is provided in a physician office. For example, in 2013, the total Medicare payment rate for a mid-level E/M office visit for an established patient—billed under Healthcare Common Procedure Coding System (HCPCS) code 99213—was $51 higher when the service was performed in an HOPD instead of a physician office (see table 1). While CMS modified the manner in which Medicare pays for E/M office visits after 2013, large differences in total payment rates continue to exist for E/M office visits. Beginning in 2014, CMS made the OPPS payment rate the same for all the HCPCS codes for E/M office visits. However, the new uniform OPPS payment rate combined with the physician fee schedule facility payment rate for E/M office visits provided in HOPDs continues to exceed the payment rate for the same services performed in physician offices. For example, in 2015, Medicare’s total payment rate for E/M office visits ranged from $58 to $86 higher when performed in an HOPD compared to a physician office, depending on the specific HCPCS code billed. Many other services, such as imaging and surgical services, are also reimbursed at a higher rate by Medicare when performed in HOPDs versus other settings. For example, Medicare’s total payment rate for magnetic resonance imaging of the lumbar spine without dye (HCPCS code 72148) was about $29 higher when performed in an HOPD compared to a physician office in 2013. Furthermore, Medicare’s total payment rate for cataract surgery (HCPCS code 66984) was about $760 higher when performed in an HOPD compared to an ambulatory surgical center in 2013. Some industry groups argue that higher payment rates for services performed in HOPDs are justified because hospitals treat sicker patients, incur higher costs due to the need to furnish emergency services, and provide services that are unavailable elsewhere in the community for vulnerable populations, such as those dually eligible for Medicare and Medicaid. However, in separate reports, MedPAC and the Department of Health and Human Services (HHS) Office of Inspector General have recommended or suggested that Congress eliminate or reduce differences in Medicare total payment rates across settings for various services, including E/M office visits, imaging services, and surgical services. To date, legislation fully addressing these recommendations has not been enacted. Recent research suggests that hospitals and physicians are increasingly vertically consolidated, which allows services to shift from physician offices to HOPDs. 
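The two-payment structure described earlier in this section can be reduced to simple arithmetic. In the Python sketch below, the dollar amounts are hypothetical placeholders chosen so that the gap matches the $51 difference cited for HCPCS code 99213 in 2013; they are not the actual physician fee schedule or OPPS rates.

    # Illustrative comparison of Medicare's total payment for one E/M office
    # visit by setting. Rates are hypothetical placeholders, not actual 2013
    # physician fee schedule or OPPS amounts.
    def total_payment(setting, pfs_nonfacility, pfs_facility, opps):
        if setting == "physician_office":
            # Single payment to the physician at the non-facility rate.
            return pfs_nonfacility
        if setting == "hopd":
            # Two payments: the physician fee schedule facility rate plus
            # the hospital outpatient prospective payment system rate.
            return pfs_facility + opps
        raise ValueError(f"unknown setting: {setting}")

    rates = dict(pfs_nonfacility=70.0, pfs_facility=50.0, opps=71.0)
    gap = (total_payment("hopd", **rates)
           - total_payment("physician_office", **rates))
    print(f"HOPD total exceeds the office rate by ${gap:.2f}")  # $51.00 here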
When hospitals and physicians vertically consolidate, the hospital-owned practice must meet certain criteria to gain what is known as provider-based status, which allows the hospital to bill the HOPD rate, thereby increasing Medicare’s total payment rate for the same service. For example, the physician practice and hospital must be financially and clinically integrated. Further, although exceptions exist, physician practices are generally required to be within 35 miles of the hospital to gain provider-based status. If a practice meets these conditions, Medicare’s total payment rate for the same service can be substantially higher despite the fact that the practice’s location, the physicians who practice there, and the beneficiaries served could be the same as before consolidation occurred. Our analysis of AHA survey data shows that from 2007 through 2013, the number of vertically consolidated hospitals increased by 21 percent. Specifically, out of the approximately 4,700 surveyed hospitals included in our study, 1,408 or 30 percent of the hospitals reported having a vertical consolidation arrangement with physicians in 2007. This number increased to 1,707 or 36 percent in 2013—an average annual increase of 3.3 percent (see fig. 1). In addition, AHA survey data also show that the number of vertically consolidated physicians nearly doubled between 2007 and 2013, with faster growth toward the end of this time period. Specifically, the number of these physicians increased from over 95,000 in 2007 to almost 182,000 in 2013—an average annual increase of 11.3 percent (see fig. 1). From 2010 to 2013, the number of vertically consolidated physicians grew at an average annual rate of 13.9 percent, compared to a rate of 8.8 percent from 2007 to 2010. Although the increase in the number of vertically consolidated physicians occurred across a broad range of hospitals from 2007 through 2013, relatively few hospitals accounted for a large number of these physicians. AHA’s survey data show that the number of vertically consolidated physicians increased across all regions of the country; in both urban and rural areas; and among hospitals of different sizes. However, relatively few hospitals accounted for a large number of vertically consolidated physicians. For example, the 372 out of 1,707 vertically consolidated hospitals that had more than 100 vertically consolidated physicians accounted for 84 percent of all vertically consolidated physicians but only 22 percent of vertically consolidated hospitals in 2013 (see fig. 2). Researchers and industry representatives whom we interviewed offered numerous potential explanations for the recent increases in vertical consolidation. Some stated that the trend could partially be explained by higher Medicare payment rates for services performed in HOPDs compared to other settings, the desire among some hospitals to gain market share, and changes in health care payment and delivery systems. For example, accountable care organizations, bundled payment models, and Medicare’s Hospital Readmissions Reduction Program—which penalizes hospitals for high rates of readmissions—provide incentives to vertically consolidate in order to improve care for beneficiaries, maximize payments, and minimize penalties. Researchers and industry representatives whom we interviewed also mentioned that the increasing challenges associated with managing a private physician practice, including financial and regulatory burdens, could also explain some of the increase in vertical consolidation. 
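As an aside on the arithmetic, the average annual growth rates cited above can be reproduced from the 2007 and 2013 endpoint counts. A minimal Python sketch, using the rounded counts reported in the text (the report's 11.3 percent figure presumably reflects the exact survey counts):

    # Reproduce the compound average annual growth rates from the endpoint
    # counts reported above (2007 to 2013 spans six years of growth).
    def avg_annual_growth(start, end, years):
        return (end / start) ** (1 / years) - 1

    # Vertically consolidated hospitals: 1,408 (2007) -> 1,707 (2013)
    print(f"{avg_annual_growth(1408, 1707, 6):.1%}")       # ~3.3%
    # Vertically consolidated physicians: ~95,000 -> ~182,000
    print(f"{avg_annual_growth(95_000, 182_000, 6):.1%}")  # ~11.4% on rounded counts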
Some of the researchers and industry representatives we interviewed added that hospitals and physicians may be vertically consolidating to enhance care coordination and improve efficiency. The percentage of E/M office visits—as well as the number of E/M office visits per beneficiary—performed in HOPDs, rather than physician offices, was generally higher in counties with higher levels of vertical consolidation from 2007 through 2013. Beneficiaries from counties with relatively high levels of vertical consolidation were not sicker, on average, than beneficiaries in counties with lower levels of consolidation. Our analysis of AHA and Medicare claims data shows that the percentage of E/M office visits performed in HOPDs was generally higher in counties with higher levels of vertical consolidation in 2013. Specifically, after dividing counties into five equal groups based on their 2013 level of consolidation, we found that the median percentage of E/M office visits performed in HOPDs in the group of counties with the lowest levels of vertical consolidation was 4.1 percent. In contrast, this rate was 14.1 percent for the counties with the highest levels of consolidation (see fig. 3). For years 2007 to 2012, the percentage of E/M office visits performed in HOPDs was also generally higher in counties with higher levels of vertical consolidation, though the association was weaker compared to 2013. For example, the median percentage of E/M office visits performed in HOPDs in the group of counties with the lowest level of vertical consolidation was 3.9 percent in 2007, compared to a median of 7.3 percent in the counties with the highest levels of consolidation. As part of our analysis, we also calculated the number of E/M office visits in each county on a per beneficiary basis. We found that the number of E/M office visits performed in HOPDs per 100 Medicare beneficiaries was also generally higher in counties with higher levels of vertical consolidation each year from 2007 through 2013. For example, in 2013 the number of E/M office visits performed in HOPDs per 100 beneficiaries was 26 for the counties with low levels of vertical consolidation, whereas the number was substantially higher—82 services per 100 beneficiaries—in counties with the highest level of vertical consolidation. We found similar correlations from 2007 through 2012. (See app. III for additional analyses of the number of E/M office visits performed in HOPDs in counties with different levels of vertical consolidation from 2007 through 2013.) The association we found between higher levels of vertical consolidation and higher utilization of E/M office visits in HOPDs remained even after controlling for differences in county-level characteristics and other market factors that could affect the setting in which E/M office visits are performed. Specifically, we developed a regression model that controlled for county characteristics that do not change over relatively short periods of time, such as whether a county is urban or rural, and county characteristics that could change over time, such as the level of competition among hospitals and physicians within counties.
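As a concrete illustration of the quintile comparison described above, the following is a minimal sketch using synthetic stand-in data and hypothetical column names; it is not the analysis code itself. The regression results that control for other factors are discussed next.

import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 3121  # number of counties in the analysis

# Synthetic stand-in data: one row per county, with a 2013 vertical
# consolidation measure and the share of E/M office visits in HOPDs.
counties = pd.DataFrame({
    "consolidation_2013": rng.uniform(0, 1, n),
    "pct_em_in_hopd": rng.uniform(0, 30, n),
})

# Rank counties into five equal groups (quintiles) by 2013 consolidation.
counties["quintile"] = pd.qcut(
    counties["consolidation_2013"], 5,
    labels=["low", "medium-low", "medium", "medium-high", "high"],
)

# Median percentage of E/M office visits performed in HOPDs per quintile.
print(counties.groupby("quintile", observed=True)["pct_em_in_hopd"].median())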
Our regression model's results were similar to our initial results: the level of vertical consolidation in a county was significantly and positively associated with a higher number and percentage of E/M office visits performed in HOPDs—that is, as vertical consolidation increased in a given county, the number and percentage of E/M office visits performed in HOPDs in that county also tended to be higher. (See app. I and app. II for more information on our regression model and results.) Beneficiaries from counties with higher levels of vertical consolidation were not sicker, on average, than beneficiaries from counties with lower levels of consolidation. Specifically, beneficiaries from counties with higher levels of vertical consolidation tended to have either similar or slightly lower median risk scores, death rates, rates of end-stage renal disease, and rates of disability compared to those from counties with lower levels of consolidation (see table 2). Further, counties with higher levels of consolidation had a lower percentage of beneficiaries dually eligible for Medicaid, who tend to be sicker and have higher Medicare spending than Medicare beneficiaries who are not dually eligible for Medicaid. This suggests that areas with higher E/M office visit utilization in HOPDs are not composed of sicker-than-average beneficiaries. As we previously stated, the extent of vertical consolidation grew from 2007 through 2013. Coinciding with that growth, we found that E/M office visits were performed more frequently in the higher-paid HOPD setting in counties with higher levels of vertical consolidation, meaning that Medicare paid more for the same services in those counties. Such excess payments are inconsistent with Medicare's role as an efficient purchaser of health care services. According to CMS, the agency does not have the statutory authority to equalize total payment rates between HOPDs and physician offices. Further, CMS lacks the authority to return the associated savings to the Medicare program. Therefore, absent legislative intervention, the Medicare program will likely pay more than necessary for E/M office visits. From 2007 through 2013, the number of vertically consolidated physicians nearly doubled, with faster growth in more recent years. Regardless of what has driven hospitals and physicians to vertically consolidate, paying substantially more for the same service when it is performed in an HOPD rather than a physician office provides an incentive to shift services that were once performed in physician offices to HOPDs after consolidation has occurred. Our findings suggest that providers responded to this financial incentive: E/M office visits were more frequently performed in HOPDs in counties with higher levels of vertical consolidation. We found this association both in our analysis of E/M office visit utilization in counties with varying levels of vertical consolidation and in our regression analyses. Further, our analysis of 2013 health status data suggests that beneficiaries from counties with higher levels of vertical consolidation, where we found more E/M office visits performed in HOPDs, were not sicker, on average, than beneficiaries who lived in counties with lower levels of consolidation, where we found fewer E/M office visits performed in HOPDs. While vertical consolidation has potential benefits, we found that its rise exacerbates a financial vulnerability in Medicare's payment policy: Medicare pays different rates for the same service, depending on where the service is performed.
Although Medicare aims to be an efficient purchaser of health care services, CMS has stated that the agency currently lacks the authority to equalize payment rates between settings. Further, CMS lacks the authority to return the associated savings to the Medicare program. Until the disparity in payment rates for E/M office visits is addressed, Medicare could be expending more resources than necessary. In order to prevent the shift of services from physician offices to HOPDs from increasing costs for the Medicare program and beneficiaries, Congress should consider directing the Secretary of HHS to equalize payment rates between settings for E/M office visits—and other services that the Secretary deems appropriate—and to return the associated savings to the Medicare program. HHS provided technical comments on a draft of this report, which we incorporated where appropriate. In addition, we provided two organizations—the American Medical Association and AHA—the opportunity to review our draft because these organizations represent the types of providers and care settings that were the main focus of our report. The American Medical Association had no comments. AHA did not comment on the main finding of our report—that higher levels of vertical consolidation were associated with more E/M office visits being performed in HOPDs instead of physician offices. AHA did, however, note several reasons why, in its opinion, a service performed in an HOPD should receive a higher Medicare reimbursement than the same service performed in other settings, and it commented on two specific aspects of our report—our characterization of beneficiary health status and the reasons why vertical consolidation occurs. A summary of these comments and our responses is below. AHA gave several reasons why a service performed in an HOPD should receive a higher Medicare reimbursement compared to when the same service is performed in other settings, such as physician offices. For example, AHA commented that HOPD payment rates are based on audited cost reports and should not be based on physician payment rates. We acknowledge that it might be inappropriate to equalize the total Medicare payment rate for all services. However, Medicare aims to be a prudent purchaser of health care services, and that goal is not achieved if Medicare's total payment rate for certain services—such as E/M office visits—is substantially higher simply because hospitals have acquired physician practices. Other entities, such as MedPAC, have also suggested that Medicare base its payments for services on the lowest-cost, clinically appropriate setting. AHA stated that it disagreed with what it interpreted our report to show—that, overall, patients treated at HOPDs are not sicker than those treated at physician offices. Our report does not make such an assertion, but it does include our finding that beneficiaries residing in counties with higher levels of vertical consolidation were not sicker, on average, than beneficiaries residing in counties with lower levels of consolidation. Given that counties with higher levels of vertical consolidation had more E/M office visits performed in HOPDs, our evidence suggests that areas with higher E/M office visit utilization in HOPDs were not composed of sicker-than-average beneficiaries.
AHA commented that vertical integration—what our report terms vertical consolidation—is an essential ingredient for successful implementation of the Patient Protection and Affordable Care Act and that we failed to adequately account for reasons other than payment differentials that drive vertical consolidation. Our report notes multiple reasons, identified by the researchers and industry experts we interviewed, why hospitals and physicians might vertically consolidate. These potential reasons include certain payment and delivery changes associated with the Patient Protection and Affordable Care Act. While we identified multiple factors that may be contributing to increases in vertical consolidation, a full analysis of the causes or the appropriateness of vertical consolidation between hospitals and physicians was outside the scope of our work. We are sending copies of this report to the appropriate congressional committees, the Secretary of HHS, and the CMS administrator. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7114 or cosgrovej@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. This appendix describes the scope and methodology used to examine our two objectives: (1) trends in vertical consolidation between physicians and hospitals from 2007 through 2013 and (2) the extent to which higher levels of vertical consolidation were associated with more evaluation & management (E/M) office visits being performed in hospital outpatient departments (HOPD) instead of physician offices from 2007 through 2013. To examine trends in vertical consolidation between hospitals and physicians, we used 2007 through 2013 data from two sources: the American Hospital Association (AHA) Annual Survey Database™, in which hospitals report the types of relationships they have with physicians and the number of physicians in those relationships, and Medicare Provider Analysis and Review (MedPAR) files, which contain information on Medicare inpatient discharges for short-term acute care hospitals. First, we used MedPAR data to identify hospitals that served at least one Medicare beneficiary from 2007 through 2013. We then took that list of hospitals—which are identified using their Centers for Medicare & Medicaid Services Certification Numbers—and, using the AHA Annual Survey Database™, determined whether each hospital was vertically consolidated with physicians in each year from 2007 through 2013. Similar to previous research on vertical consolidation, we considered a hospital to be vertically consolidated if it had one of three types of relationships with physicians—an integrated salary, foundation, or equity model. (See table 3 for a description of these three arrangements.) To identify the number of vertically consolidated hospitals, we counted the number of hospitals with any one of these three types of relationships. To identify the number of vertically consolidated physicians, we implemented edits to modify reported counts of vertically consolidated physicians that we believed were likely duplicative and then summed the number of physicians.
We identified duplicative survey responses as those where hospitals reported more than 10 vertically consolidated physicians and also reported the same number of vertically consolidated physicians as another hospital in the same hospital system. In such instances, we assumed that the total number of vertically consolidated physicians associated with a hospital system was reported multiple times by more than one hospital. Additionally, based on a review of pertinent literature, we identified and interviewed industry representatives and academic researchers. To better understand hospitals' perspectives on vertical consolidation, we interviewed officials from AHA. Similarly, for physicians' perspectives, we interviewed officials from the American Medical Association and the Medical Group Management Association. We also interviewed numerous academic researchers to better understand issues such as the various types of hospital-physician relationships, possible data sources to track vertical consolidation, and health care system policies that could be driving consolidation. To attribute E/M office visits to a given county, we used the beneficiary county of residence that was listed on the Carrier and Outpatient file claims. To determine the total number of E/M office visits that were performed in a given county, we combined the number of E/M office visits from the Carrier file and the number of E/M office visits associated with professional claims in the Medicare Outpatient file. To determine the number of E/M office visits performed in HOPDs in a given county, we summed the number of services billed in the Medicare Outpatient file, including services provided by critical access hospitals. The number of E/M office visits performed in physician offices was calculated by subtracting the number of HOPD services from the total number of services. To calculate the number of services per Medicare beneficiary in a given county, we used the Medicare Denominator file to identify fee-for-service (FFS) beneficiaries. To calculate the level of vertical consolidation in each county, we used the AHA Annual Survey Database™ and MedPAR claims. First, we calculated the share of MedPAR services that were delivered by vertically consolidated hospitals in each zip code in which a beneficiary received at least one service. We then created a weighted average hospital-level vertical consolidation measure using all the zip codes a hospital served in a year. Finally, we created a weighted average county-level vertical consolidation measure based on the hospitals that served each county. To calculate control variables for our regression analyses, we used a similar process. Specifically, we calculated variables for profit status, public vs. private ownership, hospital size, teaching status, whether a hospital belonged to a system, and Herfindahl-Hirschman Indexes (HHI) for hospital and physician market concentration. To determine how the level of vertical consolidation in a county was associated with the setting in which E/M office visits were provided before controlling for other factors, we conducted a bivariate analysis for every year from 2007 through 2013. Specifically, we ranked counties into quintiles based on the level of consolidation in each county in 2013. In the bottom quintile were the 20 percent of counties with the lowest levels of vertical consolidation; such counties were considered to have low levels of vertical consolidation.
In order, the next four quintiles were considered to have medium-low, medium, medium-high, and high levels of vertical consolidation. For 2007 through 2012, we used the same thresholds to sort counties into the five levels of consolidation. Within each of the five county groups for each year, we then calculated (1) the median and mean percentage of E/M office visits that were performed in HOPDs and physician offices and (2) the median and mean number of E/M office visits per beneficiary performed in HOPDs, in physician offices, and in total. To determine whether counties with higher levels of vertical consolidation had sicker or healthier beneficiaries, we calculated descriptive statistics for beneficiaries who lived in a given county in 2013 using the Medicare Denominator file. Specifically, for each county, we calculated the mean and median risk score, age, and the percentage of beneficiaries who died, had end-stage renal disease, were disabled, and were dually eligible for Medicare and Medicaid. Similar to the bivariate analysis described above, we then ranked counties into quintiles based on the level of vertical consolidation in 2013. Within the quintiles, we calculated the median and mean values for each of the variables. We developed an econometric model to analyze the effect of vertical consolidation on the setting where beneficiaries received E/M office visits from 2007 through 2013. Specifically, we analyzed how the level of vertical consolidation affected (1) the percentage of E/M office visits performed in HOPDs, (2) the number of E/M office visits performed in HOPDs per beneficiary, and (3) the total number of E/M office visits per beneficiary. Our analysis used data for 3,121 U.S. counties from 2007 through 2013. For the model analyzing the percentage of E/M office visits performed in HOPDs, the dependent variable was the logit transformation

$$Y_{it} = \log\left(\frac{r_{it}}{1 - r_{it}}\right)$$

where $r_{it}$ represents the proportion of E/M office visits that were provided in an HOPD, and the $i$ and $t$ subscripts represent the county and year, respectively. This formulation has the advantage of allowing the dependent variable to range over all real values for any value of $r$ between zero and one. For our models analyzing the number of E/M office visits performed in HOPDs per beneficiary and the total number of E/M office visits per beneficiary, our dependent variables were the logarithm of the number of services per beneficiary. Our key explanatory variable was the level of vertical consolidation. Our hypothesis was that higher levels of vertical consolidation would be associated with a higher percentage and number of E/M office visits being performed in HOPDs. Our model controlled for horizontal physician and horizontal hospital concentration, using HHIs. We hypothesized that greater concentration of market power among physicians would lead to E/M office visits being provided in physician offices rather than HOPDs, all else being equal. In contrast, we hypothesized that greater concentration of market power among hospitals would lead to E/M office visits being provided in HOPDs rather than physician offices, all else being equal. Our model included hospital characteristic variables to account for possible differences in hospital size and institutional arrangements. Specifically, our model included variables for the following hospital characteristics: profit status, public vs. private ownership, hospital size, teaching status, and whether a hospital belonged to a system. Our model included time fixed effects (a dummy variable for each year in the analysis), as illustrated in the estimation sketch below.
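To make the specification concrete, the following is a minimal estimation sketch, not the report's code: it generates synthetic county-year data, uses hypothetical variable names, omits the hospital characteristic controls and the instrumental-variable step for physician HHI, and estimates the two-way fixed-effects model with Python's linearmodels package rather than the Stata routine the report used (noted below).

import numpy as np
import pandas as pd
from linearmodels.panel import PanelOLS

rng = np.random.default_rng(0)
idx = pd.MultiIndex.from_product(
    [range(3121), range(2007, 2014)], names=["county", "year"]
)

# Synthetic county-year panel with hypothetical variable names.
df = pd.DataFrame({
    "vertical_consolidation": rng.uniform(0, 1, len(idx)),
    "hospital_hhi": rng.uniform(0, 1, len(idx)),
    "physician_hhi": rng.uniform(0, 1, len(idx)),
    "share_hopd": rng.uniform(0.01, 0.5, len(idx)),  # share of E/M visits in HOPDs
}, index=idx)

# Dependent variable: the logit transformation Y = log(r / (1 - r)).
df["y"] = np.log(df["share_hopd"] / (1 - df["share_hopd"]))

# Two-way fixed effects: entity_effects absorbs the county dummies and
# time_effects absorbs the year dummies described in the surrounding text.
mod = PanelOLS(
    df["y"],
    df[["vertical_consolidation", "hospital_hhi", "physician_hhi"]],
    entity_effects=True,
    time_effects=True,
)
res = mod.fit(cov_type="clustered", cluster_entity=True)  # county-clustered SEs
print(res.params)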
In addition to time fixed effects, we included county fixed effects (a dummy variable for each of the 3,121 counties in the analysis). These county fixed effects assist in controlling for unobserved heterogeneity. The regression analysis used a panel data model for 3,121 U.S. counties for the years 2007 through 2013, as follows:

$$Y_{it} = c_i + f_t + X_{it}\alpha + \varepsilon_{it}$$

where $Y_{it}$ is the dependent variable for county $i$ in year $t$. For the model analyzing the percentage of E/M office visits performed in HOPDs, the dependent variable is the logit transformation of the percentage of services in an HOPD setting—that is, $Y_{it} = \log(r_{it}/(1 - r_{it}))$, where $r_{it}$ is the percentage of E/M office visits in an HOPD. For our models analyzing the number of E/M office visits performed in HOPDs per beneficiary and the total number of E/M office visits per beneficiary, $Y_{it} = \log(s_{it})$, where $s_{it}$ is the number of services per Medicare beneficiary. $c_i$ is a fixed effect, or dummy variable, for county $i$; $f_t$ is a fixed effect, or dummy variable, for year $t$; $X_{it}$ are the hospital-characteristic variables and market structure variables, such as horizontal physician HHI, horizontal hospital HHI, and vertical consolidation, associated with county $i$ at time $t$, and $\alpha$ are the parameters associated with each of these variables; and $\varepsilon_{it}$ are the error terms. We used xtivreg2 in Stata to estimate our models. Our parameter estimates are consistent given the assumptions of our model. Our standard errors are robust to heteroskedasticity and clustering at the county level. The hospital characteristics, the horizontal hospital HHI, and the vertical consolidation measures were calculated using MedPAR data, while the dependent variable was calculated using Outpatient and Carrier file data. This separation reduced the likelihood that these market characteristics were correlated with unobserved determinants of the setting where beneficiaries received E/M office visits. However, the physician HHI measure was calculated using Carrier file data, so we tested this variable for endogeneity. Our study has some limitations. First, while the response rate for the AHA Annual Survey Database™ was high for each year—about 76 percent—the data on vertical consolidation were self-reported by hospitals. In the process of examining the AHA Annual Survey Database™, we identified responses that we believed were likely duplicative. However, our ability to identify and fix duplicative responses was limited because, under our data licensing agreement, we were not able to directly contact survey respondents. Second, because the AHA Annual Survey Database™ does not contain identifying information for vertically consolidated physicians, we used hospital inpatient markets as a proxy for vertically consolidated physician markets. Although this is a limitation, we conducted a sensitivity analysis with HOPD markets, and our results held. Further, we believe there are several reasons why vertically consolidated physician markets should substantially overlap with hospital inpatient markets. For example, physician practices generally must be located within 35 miles of their parent hospitals to bill as HOPDs, and many payment reforms—such as accountable care organizations, bundled payments, and Medicare's Hospital Readmissions Reduction Program—reward hospitals for managing their patients across inpatient and outpatient settings. Third, vertically consolidated hospitals varied widely in terms of the number of vertically consolidated physicians associated with them.
Our bivariate and regression analyses consider a hospital vertically consolidated only if it has more than 10 vertically consolidated physicians; however, because of data limitations, we were unable to make our measure of vertical consolidation reflect the intensity of vertical consolidation relationships—that is, the number of vertically consolidated physicians per hospital. Finally, time lags may occur between vertical consolidation and our measures of how often E/M office visits are performed in an HOPD. A hospital can purchase physician practices and not convert them to HOPDs immediately, or ever. Consequently, these lags may be long and variable, and we have no systematic data to measure the timing of these possible effects. We took several steps to ensure that the data used to produce this report were sufficiently reliable. Specifically, we assessed the reliability of the Centers for Medicare & Medicaid Services data and the AHA Annual Survey Database™ we used by interviewing officials responsible for overseeing these data sources. We also reviewed relevant documentation and examined the data for obvious errors, such as missing values and values outside of expected ranges. We determined that the data were sufficiently reliable for the purposes of this report. We conducted this performance audit from February 2014 through December 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. This appendix provides more detailed results for the models we used to analyze the effect of vertical consolidation on the setting where beneficiaries received E/M office visits from 2007 through 2013. Counties with higher levels of vertical consolidation were significantly more likely to have a higher proportion of their E/M office visits performed in HOPDs. These counties also had a significantly higher rate of utilization of E/M office visits in HOPDs. However, those same counties also had a significantly lower rate of overall utilization of E/M office visits, although the size of this negative association was smaller. Specifically, all else being equal, our models predict that a county going from no vertical consolidation to complete consolidation would experience: an increase in the percentage of E/M office visits performed in HOPDs of 2.7 percentage points, on average; an increase in the number of E/M office visits per beneficiary performed in HOPDs of approximately 30 percent, on average; and a decrease in the total number of E/M office visits per beneficiary of less than 2 percent, on average. We used a set of medical service supply variables from the Area Health Resource Files as instruments: the number of federal and non-federal active MDs as a percentage of the total population, total hospital beds per capita, and whether the area was designated as a health care professional shortage area for primary care physicians. In our models of the percentage of E/M office visits performed in HOPDs and the total number of E/M office visits per beneficiary, the C-test accepted the null hypothesis of exogeneity of the physician horizontal Herfindahl-Hirschman Index (HHI) variable, and the Hansen J-statistic accepted the null hypothesis that our instruments were valid.
The Sanderson-Windmeijer test also supported our use of these instruments by rejecting the null hypothesis of weak instruments. In our model of the number of E/M office visits performed in HOPDs, the Hansen J-statistic accepted the null hypothesis that our instruments were valid, and the Sanderson-Windmeijer test rejected the null hypothesis of weak instruments. However, the C-test rejected the null hypothesis of exogeneity of the physician horizontal HHI variable, so we report our instrumental variable estimates for our model of the log of utilization of E/M office visits performed in HOPDs. A full set of results is provided in table 4. The percentage of E/M office visits—as well as the number of E/M office visits per 100 beneficiaries—performed in HOPDs was generally higher in counties with higher levels of vertical consolidation from 2007 through 2013 (see tables 5-11). To examine whether vertical consolidation affected total utilization, we examined the association between vertical consolidation in a county and the total number of evaluation & management (E/M) office visits per beneficiary and found mixed results. Specifically, while counties with the lowest level of vertical consolidation had higher total utilization of E/M office visits compared to counties with the highest levels of vertical consolidation, total utilization of E/M office visits neither increased nor decreased consistently as the level of vertical consolidation increased in a county in our bivariate analysis. For example, in 2013, the median number of total E/M office visits per 100 beneficiaries decreased from 658 among the counties with the lowest levels of vertical consolidation to 580 among counties with a medium level of vertical consolidation; however, among counties with high levels of vertical consolidation, the number increased to 601. Furthermore, unlike our results examining the setting in which E/M office visits were performed, our results changed when we tested an alternative measure of vertical consolidation. For example, using the alternative specification, the median number of total E/M office visits per 100 beneficiaries in counties with the highest level of vertical consolidation was at least 10 services per 100 beneficiaries higher than in counties with the lowest level of consolidation in 4 out of 7 years from 2007 through 2013. In addition to the contact above, Jessica Farb, Assistant Director; Todd Anderson; Krister Friday; Michael Kendix; Richard Lipinski; Brian O'Donnell; Dan Ries; Said Sariolghalam; Eric Wedum; and Jennifer Whitworth made key contributions to this report.
Medicare expenditures for hospital outpatient department (HOPD) services have grown rapidly in recent years. Some policymakers have raised questions about whether this growth may be attributed to services that were typically performed in physician offices shifting to HOPDs. GAO was asked to examine trends in vertical consolidation and its effects on Medicare. This report examines, for years 2007 through 2013, (1) trends in vertical consolidation between hospitals and physicians and (2) the extent to which higher levels of vertical consolidation were associated with more evaluation & management (E/M) office visits being performed in HOPDs. Using various methods, including regression analyses, GAO analyzed the most recent available claims data from CMS and survey data from the American Hospital Association, in which hospitals report the types of financial arrangements they have with physicians. Vertical consolidation is a financial arrangement that occurs when a hospital acquires a physician practice and/or hires physicians to work as salaried employees. The number of vertically consolidated hospitals and physicians increased from 2007 through 2013. Specifically, the number of vertically consolidated hospitals increased from about 1,400 to 1,700, while the number of vertically consolidated physicians nearly doubled from about 96,000 to 182,000. This growth occurred across all regions and hospital sizes, but was more rapid in recent years. After hospitals and physicians vertically consolidate, services performed in physician offices, such as E/M office visits, can be classified as being performed in HOPDs. Medicare often pays providers at a higher rate when the same service is performed in an HOPD rather than in a physician office. For example, in 2013, the total Medicare payment rate for a mid-level E/M office visit for an established patient was $51 higher when the service was performed in an HOPD instead of a physician office. The percentage of E/M office visits—as well as the number of E/M office visits per beneficiary—performed in HOPDs, rather than in physician offices, was generally higher in counties with higher levels of vertical consolidation from 2007 through 2013. For example, the median percentage of E/M office visits performed in HOPDs in counties with the lowest levels of vertical consolidation was 4.1 percent in 2013. In contrast, this rate was 14.1 percent for counties with the highest levels of consolidation. GAO's findings suggest that Medicare will likely pay more than necessary for E/M office visits. Such excess payments are inconsistent with Medicare's role as an efficient purchaser of health care services. However, the Centers for Medicare & Medicaid Services (CMS)—the agency that is responsible for the Medicare program—lacks the statutory authority to equalize total payment rates between HOPDs and physician offices and achieve Medicare savings. In order to prevent the shift of services from lower-paid settings to the higher-paid HOPD setting from increasing costs for the Medicare program and beneficiaries, Congress should consider directing the Secretary of the Department of Health and Human Services (HHS) to equalize payment rates between settings for E/M office visits—and other services that the Secretary deems appropriate—and to return the associated savings to the Medicare program. HHS provided technical comments on a draft of this report, which GAO incorporated as appropriate.
Our review of medical records for a sample of newly enrolled veterans at six VA medical centers found several problems in medical centers' processing of veterans' requests that VA contact them to schedule appointments, and thus not all newly enrolled veterans were able to access primary care. For the 60 newly enrolled veterans in our review who requested care but had not been seen by primary care providers, we found that 29 did not receive appointments due to the following problems in the appointment scheduling process:

Veterans did not appear on VHA's New Enrollee Appointment Request (NEAR) list. We found that although 17 newly enrolled veterans in our review requested that VA contact them to schedule appointments, medical center officials said that schedulers did not contact the veterans because they had not appeared on the NEAR list. According to VHA policy, as outlined in its July 2014 interim scheduling guidance, VA medical center staff should contact newly enrolled veterans to schedule appointments within 7 days from the date they were placed on the NEAR list. Medical center officials were not aware that this problem was occurring, and could not definitively tell us why these veterans never appeared on the NEAR list.

VA medical center staff did not follow VHA scheduling policy. We found that VA medical centers did not follow VHA policies for contacting newly enrolled veterans for 12 veterans in our review. VHA policy states that medical centers should document three attempts to contact each newly enrolled veteran by phone, and if unsuccessful, send the veteran a letter. However, for 5 of the 12 newly enrolled veterans, our review of their medical records revealed no attempts to contact them, and medical center officials could not tell us whether the veterans had ever been contacted to schedule appointments. Medical center staff attempted to contact the other 7 veterans at least once each, but failed to reach out to them with the frequency required by VHA policy.

For the remaining 31 of the 60 newly enrolled veterans included in our review who did not have a primary care appointment:

24 were unable to be contacted to schedule appointments or, upon contact, declined care, according to VA medical center officials. These officials said that in some cases they were unable to contact veterans due to incorrect or incomplete contact information in veterans' enrollment applications; in other cases, they said veterans were seeking a VA identification card, for example, and did not want to be seen by a provider at the time they were contacted.

7 had appointments scheduled but had not been seen by primary care providers at the time of our review. Four of those veterans had initial appointments that needed to be rescheduled, which had not yet been done at the time of our review. Appointments for the remaining 3 veterans were scheduled after VHA provided us with a list of veterans who had requested care.

For the 120 newly enrolled veterans across the six VA medical centers in our review who requested care and were seen by primary care providers, we found the average number of days between newly enrolled veterans' initial requests that VA contact them to schedule appointments and the dates the veterans were seen by primary care providers ranged from 22 days to 71 days.
Slightly more than half of the 120 veterans in our sample were seen by providers in less than 30 days; however, veterans' experiences varied widely, even within the same medical center, and 12 of the 120 veterans in our review waited more than 90 days to be seen by a provider. We found that two factors generally affected how long it took newly enrolled veterans to be seen by primary care providers:

1. Appointments were not always available when veterans wanted to be seen, which contributed to delays in receiving care. For example, one veteran was contacted within 7 days of being placed on the NEAR list, but no appointment was available until 73 days after the veteran's preferred appointment date, and a total of 94 days elapsed before the veteran was seen by a provider. In another example, a veteran wanted to be seen as soon as possible, but no appointment was available for 63 days. Officials at each of the six medical centers in our review told us that they have difficulty keeping up with the demand for primary care appointments for new patients because of shortages in the number of providers or lack of space due to rapid growth in the demand for these services.

2. Weaknesses in VA medical center scheduling practices may have affected the amount of time it took for veterans to see primary care providers and contributed to unnecessary delays. Staff at the medical centers in our review did not always contact veterans to schedule appointments in accordance with VHA policy, which states that attempts to contact newly enrolled veterans to schedule appointments must be made within 7 days of their addition to the NEAR list. Among the 120 veterans included in our review who were seen by primary care providers, 37 (31 percent) were not contacted within 7 days to schedule an appointment; compliance varied across medical centers.

As a result of these findings, we recommended that VHA review its processes for identifying and documenting newly enrolled veterans requesting appointments and revise them as appropriate, to ensure that all veterans requesting appointments are contacted in a timely manner to schedule them. VHA concurred with this recommendation and indicated that by December 31, 2016, it plans to review and revise the process from enrollment to scheduling to ensure that newly enrolled veterans requesting appointments are contacted in a timely manner. VHA also indicated that it will implement internal controls to ensure its medical centers are appropriately implementing the process. VHA's oversight of veterans' access to primary care is hindered, in part, by data weaknesses and the lack of a comprehensive scheduling policy, both of which are inconsistent with federal internal control standards. These standards call for agencies to have reliable data and effective policies to achieve their objectives, and for information to be recorded and communicated to an entity's management and others who need it to carry out their responsibilities. A key component of VHA's oversight of veterans' access to primary care, particularly for newly enrolled veterans, is monitoring appointment wait times. However, VHA monitors only a portion of the overall time it takes newly enrolled veterans to access primary care. For newly enrolled veterans, VHA calculates primary care appointment wait times starting from veterans' preferred dates, rather than the dates veterans initially requested that VA contact them to schedule appointments.
(A preferred date is the date that is established when a scheduler contacts the veteran to determine when he or she wants to be seen.) Therefore, these data do not capture the time veterans wait prior to being contacted by schedulers, making it difficult for officials to identify and remedy scheduling problems that may arise before contact is made with veterans. (See fig. 1.) Our review of medical records for 120 newly enrolled veterans found that, on average, the total amount of time it took to be seen by primary care providers was much longer when measured from the dates veterans initially requested VA contact them to schedule appointments than it was when using appointment wait times calculated using veterans' preferred dates as the starting point. For example, we found that one veteran applied for VHA health care benefits in December 2014, which included a request to be contacted for an initial appointment. The VA medical center contacted the veteran to schedule a primary care appointment 43 days later. When making the appointment, the medical center recorded the veteran's preferred date as March 1, 2015, and the veteran saw a provider on March 3, 2015. Although the medical center's data showed the veteran waited 2 days to see a provider, the total amount of time that elapsed from the veteran's request until the veteran was seen was actually 76 days. Further, ongoing scheduling errors, such as incorrectly revising preferred dates when rescheduling appointments, understated the amount of time veterans waited to see providers. During our review of appointment scheduling for the 120 newly enrolled veterans, we found that schedulers in three of the six VA medical centers included in our review had made errors in recording veterans' preferred dates when making appointments. For example, in some cases primary care clinics cancelled appointments, and when those appointments were rescheduled, schedulers did not always maintain the original preferred dates in the system, but updated them to reflect new preferred dates recorded when the appointments were rescheduled. We found 15 appointments for which schedulers had incorrectly revised the preferred dates. In these cases, we recalculated the appointment wait time based on what should have been the correct preferred dates, according to VHA policy, and found that the wait-time data contained in the scheduling system were understated. Officials attributed these errors to confusion among schedulers resulting from the lack of an updated standardized scheduling directive, which VHA rescinded and replaced with an interim directive in July 2014. As in our previous work, we continue to find scheduling errors that affect the reliability of wait-time data used for oversight, which make it difficult to effectively oversee newly enrolled veterans' access to primary care. As a result of these findings, we recommended that VHA monitor the full amount of time newly enrolled veterans wait to receive primary care and issue an updated scheduling directive. VHA concurred with both of these recommendations and indicated that by December 31, 2016, it plans to begin monitoring the full amount of time newly enrolled veterans wait to be seen by primary care providers. It also indicated that it plans to submit a revised scheduling directive for VHA-wide internal review by May 1, 2016.
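The two wait-time measures discussed above can be reproduced from the dates in the example. In the sketch below, the December 17, 2014, request date is inferred from the reported 76-day total, since the report states only that the veteran applied in December 2014; the other dates are as reported.

from datetime import date

# Dates from the example above; the request date is inferred from the
# reported 76-day total.
requested_contact = date(2014, 12, 17)  # veteran asked VA to contact him or her
preferred = date(2015, 3, 1)            # preferred date recorded when scheduling
seen = date(2015, 3, 3)                 # date seen by a primary care provider

# VHA's reported wait time starts at the preferred date...
reported_wait = (seen - preferred).days        # 2 days
# ...but the full wait starts when the veteran first requested contact.
total_wait = (seen - requested_contact).days   # 76 days

print(f"Reported wait: {reported_wait} days; total wait: {total_wait} days")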
This most recent work on veterans' access to primary care further expands the litany of VA health care deficiencies and weaknesses that we have identified over the years, particularly since 2010. As of April 1, 2016, there were about 90 GAO recommendations regarding veterans' health care awaiting action by VHA. These include more than a dozen recommendations to address weaknesses in the provision and oversight of veterans' access to timely primary and specialty care, including mental health care. Until VHA can make meaningful progress in addressing these and other recommendations, which underscore a system in need of major transformation, the quality and safety of health care for our nation's veterans are at risk. Chairman Miller, Ranking Member Brown, and Members of the Committee, this concludes my prepared statement. I would be pleased to answer any questions that you may have at this time. If you or your staff members have any questions concerning this testimony, please contact Debra A. Draper at (202) 512-7114 or draper@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Other individuals who made key contributions are Janina Austin, Assistant Director; Jennie F. Apter; Emily Binek; David Lichtenfeld; Vikki L. Porter; Brienne Tierney; and Emily Wilson. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
This testimony summarizes the information contained in GAO's March 2016 report, entitled VA Health Care: Actions Needed to Improve Newly Enrolled Veterans' Access to Primary Care, GAO-16-328. GAO found that not all newly enrolled veterans were able to access primary care from the Department of Veterans Affairs' (VA) Veterans Health Administration (VHA), and others experienced wide variation in the amount of time they waited for care. Sixty of the 180 newly enrolled veterans in GAO's review had not been seen by providers at the time of the review; nearly half of these 60 were unable to access primary care because VA medical center staff did not schedule appointments for them in accordance with VHA policy. The 120 newly enrolled veterans in GAO's review who were seen by providers waited an average of 22 days to 71 days, depending on the medical center, from their requests that VA contact them to schedule appointments to when they were seen, according to GAO's analysis. These time frames were affected by limited appointment availability and weaknesses in medical center scheduling practices, which contributed to unnecessary delays. VHA's oversight of veterans' access to primary care is hindered, in part, by data weaknesses and the lack of a comprehensive scheduling policy. This is inconsistent with federal internal control standards, which call for agencies to have reliable data and effective policies to achieve their objectives. For newly enrolled veterans, VHA calculates primary care appointment wait times starting from the veterans' preferred dates (the dates veterans want to be seen), rather than the dates veterans initially requested that VA contact them to schedule appointments. Therefore, these data do not capture the time these veterans wait prior to being contacted by schedulers, making it difficult for officials to identify and remedy scheduling problems that arise prior to making contact with veterans. Further, ongoing scheduling errors, such as incorrectly revising preferred dates when rescheduling appointments, understated the amount of time veterans waited to see providers. Officials attributed these errors to confusion among schedulers, resulting from the lack of an updated standardized scheduling policy. These errors continue to affect the reliability of wait-time data used for oversight, which makes it more difficult to effectively oversee newly enrolled veterans' access to primary care.
The tens of thousands of individuals who responded to the September 11, 2001, attack on the WTC experienced the emotional trauma of the disaster and were exposed to a noxious mixture of dust, debris, smoke, and potentially toxic contaminants, such as pulverized concrete, fibrous glass, particulate matter, and asbestos. A wide variety of health effects have been experienced by responders to the WTC attack, including injuries and respiratory conditions such as sinusitis, asthma, and a new syndrome called WTC cough, which consists of persistent coughing accompanied by severe respiratory symptoms. Commonly reported mental health effects among responders and other affected individuals included symptoms associated with post-traumatic stress disorder, depression, and anxiety. Behavioral health effects such as alcohol and tobacco use have also been reported. There are six key programs that currently receive federal funding to provide voluntary health screening, monitoring, or treatment at no cost to responders. The six WTC health programs, shown in table 1, are (1) the FDNY WTC Medical Monitoring and Treatment Program; (2) the New York/New Jersey (NY/NJ) WTC Consortium, which comprises five clinical centers in the NY/NJ area; (3) the WTC Federal Responder Screening Program; (4) the WTC Health Registry; (5) Project COPE; and (6) the Police Organization Providing Peer Assistance (POPPA) program. The programs vary in aspects such as the HHS agency or component responsible for administering the funding; the implementing agency, component, or organization responsible for providing program services; eligibility requirements; and services. The WTC health programs that are providing screening and monitoring are tracking thousands of individuals who were affected by the WTC disaster. As of June 2007, the FDNY WTC program had screened about 14,500 responders and had conducted follow-up examinations for about 13,500 of these responders, while the NY/NJ WTC Consortium had screened about 20,000 responders and had conducted follow-up examinations for about 8,000 of these responders. These responders include some nonfederal responders residing outside the NYC metropolitan area. As of June 2007, the WTC Federal Responder Screening Program had screened 1,305 federal responders and referred 281 responders for employee assistance program services or specialty diagnostic services. In addition, the WTC Health Registry, a monitoring program that consists of periodic surveys of self-reported health status and related studies but does not provide in-person screening or monitoring, collected baseline health data from over 71,000 people who enrolled in the registry. In the winter of 2006, the registry began its first adult follow-up survey, and as of June 2007 over 36,000 individuals had completed the follow-up survey. In addition to providing medical examinations, FDNY's WTC program and the NY/NJ WTC Consortium have collected information for use in scientific research to better understand the health effects of the WTC attack and other disasters. The WTC Health Registry is also collecting information to assess the long-term public health consequences of the disaster. In February 2006, the Secretary of HHS designated the Director of the National Institute for Occupational Safety and Health (NIOSH) to take the lead in ensuring that the WTC health programs are well coordinated, and in September 2006 the Secretary established the WTC Task Force to advise him on federal policies and funding issues related to responders' health conditions.
The chair of the task force is HHS's Assistant Secretary for Health, and the vice chair is the Director of NIOSH. NIOSH has not ensured the availability of screening and monitoring services for nonfederal responders residing outside the NYC metropolitan area, although it has taken steps toward expanding the availability of these services. Initially, NIOSH made two efforts to provide screening and monitoring services for these responders, the exact number of whom is unknown. The first effort began in late 2002 when NIOSH awarded a contract for about $306,000 to the Mount Sinai School of Medicine to provide screening services for nonfederal responders residing outside the NYC metropolitan area and directed it to establish a subcontract with the Association of Occupational and Environmental Clinics (AOEC). AOEC then subcontracted with 32 of its member clinics across the country to provide screening services. From February 2003 to July 2004, the 32 AOEC member clinics screened 588 nonfederal responders nationwide. AOEC experienced challenges in providing these screening services. For example, many nonfederal responders did not enroll in the program because they did not live near an AOEC clinic, and the administration of the program required substantial coordination among AOEC, AOEC member clinics, and Mount Sinai. Mount Sinai's subcontract with AOEC ended in July 2004, and from August 2004 until June 2005 NIOSH did not fund any organization to provide services to nonfederal responders outside the NYC metropolitan area. During this period, NIOSH focused on providing screening and monitoring services for nonfederal responders in the NYC metropolitan area. In June 2005, NIOSH began its second effort by awarding $776,000 to the Mount Sinai School of Medicine Data and Coordination Center (DCC) to provide both screening and monitoring services for nonfederal responders residing outside the NYC metropolitan area. In June 2006, NIOSH awarded an additional $788,000 to DCC to provide screening and monitoring services for these responders. NIOSH officials told us that they assigned DCC the task of providing screening and monitoring services to nonfederal responders outside the NYC metropolitan area because the task was consistent with DCC's responsibilities for the NY/NJ WTC Consortium, which include data monitoring and coordination. DCC, however, had difficulty establishing a network of providers that could serve nonfederal responders residing throughout the country—ultimately contracting with only 10 clinics in seven states to provide screening and monitoring services. DCC officials said that as of June 2007 the 10 clinics were monitoring 180 responders. In early 2006, NIOSH began exploring how to establish a national program that would expand the network of providers to provide screening and monitoring services, as well as treatment services, for nonfederal responders residing outside the NYC metropolitan area. According to NIOSH, expanding a network of providers to screen and monitor nonfederal responders nationwide has involved several challenges. These include establishing contracts with clinics that have the occupational health expertise to provide services nationwide, establishing patient data transfer systems that comply with applicable privacy laws, navigating the institutional review board process for a large provider network, and establishing payment systems with clinics participating in a national network of providers.
On March 15, 2007, NIOSH issued a formal request for information from organizations that have an interest in and the capability of developing a national program for responders residing outside the NYC metropolitan area. In this request, NIOSH described the scope of a national program as offering screening, monitoring, and treatment services to about 3,000 nonfederal responders through a national network of occupational health facilities. NIOSH also specified that the program’s facilities should be located within reasonable driving distance to responders and that participating facilities must provide copies of examination records to DCC. In May 2007, NIOSH approved a request from DCC to redirect about $125,000 from the June 2006 award to establish a contract with a company to provide screening and monitoring services for nonfederal responders residing outside the NYC metropolitan area. Subsequently, DCC contracted with QTC Management, Inc., one of the four organizations that had responded to NIOSH’s request for information. DCC’s contract with QTC does not include treatment services, and NIOSH officials are still exploring how to provide and pay for treatment services for nonfederal responders residing outside the NYC metropolitan area. QTC has a network of providers in all 50 states and the District of Columbia and can use internal medicine and occupational medicine doctors in its network to provide these services. In addition, DCC and QTC have agreed that QTC will identify and subcontract with providers outside of its network to screen and monitor nonfederal responders who do not reside within 25 miles of a QTC provider. In June 2007, NIOSH awarded $800,600 to DCC for coordinating the provision of screening and monitoring examinations, and QTC was to receive a portion of this award from DCC to provide about 1,000 screening and monitoring examinations through May 2008. According to a NIOSH official, QTC’s providers began conducting screening examinations in summer 2007. Screening and monitoring the health of the people who responded to the September 11, 2001, attack on the World Trade Center are critical for identifying health effects already experienced by responders or those that may emerge in the future. In addition, collecting and analyzing information produced by screening and monitoring responders can give health care providers information that could help them better diagnose and treat responders and others who experience similar health effects. While many responders have been able to obtain screening and follow-up physical and mental health examinations through the federally funded WTC health programs, other responders may not always find these services available. Specifically, many responders who reside outside the NYC metropolitan area have not been able to obtain screening and monitoring services because available services are too distant. Moreover, HHS has repeatedly interrupted its efforts to provide services outside the NYC area, resulting in periods when no such services were available. HHS continues to fund and coordinate the WTC health programs and has key federal responsibility for ensuring the availability of services to responders. HHS and its agencies have taken steps to move toward providing screening and monitoring services to nonfederal responders living outside of the NYC area. 
However, these efforts are not complete, and the stop-and-start history of the department's efforts to serve these responders does not provide assurance that the latest efforts to extend screening and monitoring services to them will be successful and will be sustained over time. Therefore we recommended in July 2007 that the Secretary of HHS take expeditious action to ensure that health screening and monitoring services are available to all people who responded to the attack on the WTC, regardless of where they reside. As of January 2008, the department has not responded to this recommendation. Mr. Chairman, this completes my prepared remarks. I would be happy to respond to any questions you or other members of the subcommittee may have at this time. For further information about this testimony, please contact Cynthia A. Bascetta at (202) 512-7114 or bascettac@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Helene F. Toiv, Assistant Director; Hernan Bozzolo; Frederick Caison; Anne Dievler; Anne Hopewell; and Roseanne Price made key contributions to this statement.

September 11: Improvements Needed in Availability of Health Screening and Monitoring Services for Responders. GAO-07-1229T. Washington, D.C.: September 10, 2007.
September 11: HHS Needs to Ensure the Availability of Health Screening and Monitoring for All Responders. GAO-07-892. Washington, D.C.: July 23, 2007.
September 11: HHS Has Screened Additional Federal Responders for World Trade Center Health Effects, but Plans for Awarding Funds for Treatment Are Incomplete. GAO-06-1092T. Washington, D.C.: September 8, 2006.
September 11: Monitoring of World Trade Center Health Effects Has Progressed, but Program for Federal Responders Lags Behind. GAO-06-481T. Washington, D.C.: February 28, 2006.
September 11: Monitoring of World Trade Center Health Effects Has Progressed, but Not for Federal Responders. GAO-05-1020T. Washington, D.C.: September 10, 2005.
September 11: Health Effects in the Aftermath of the World Trade Center Attack. GAO-04-1068T. Washington, D.C.: September 8, 2004.
Six years after the attack on the World Trade Center (WTC), concerns persist about health effects experienced by WTC responders and the availability of health care services for those affected. Several federally funded programs provide screening, monitoring, or treatment services to responders. GAO has previously reported on the progress made and implementation problems faced by these WTC health programs. This testimony is based primarily on GAO's testimony, September 11: Improvements Needed in Availability of Health Screening and Monitoring Services for Responders (GAO-07-1229T, Sept. 10, 2007), which updated GAO's report, September 11: HHS Needs to Ensure the Availability of Health Screening and Monitoring for All Responders (GAO-07-892, July 23, 2007). In this testimony, GAO discusses efforts by the Centers for Disease Control and Prevention's National Institute for Occupational Safety and Health (NIOSH) to provide services for nonfederal responders residing outside the New York City (NYC) area. For the July 2007 report, GAO reviewed program documents and interviewed Department of Health and Human Services (HHS) officials, grantees, and others. GAO updated selected information in August and September 2007 and conducted work for this statement in January 2008. In July 2007, following a reexamination of the status of the WTC health programs, GAO recommended that the Secretary of HHS take expeditious action to ensure that health screening and monitoring services are available to all people who responded to the WTC attack, regardless of where they reside. As of January 2008, the department has not responded to this recommendation. As GAO testified in September 2007, NIOSH has not ensured the availability of screening and monitoring services for nonfederal responders residing outside the NYC area, although it has taken steps toward expanding the availability of these services. In late 2002, NIOSH arranged for a network of occupational health clinics to provide screening services. This effort ended in July 2004, and until June 2005 NIOSH did not fund screening or monitoring services for nonfederal responders outside the NYC area. In June 2005, NIOSH funded the Mount Sinai School of Medicine Data and Coordination Center (DCC) to provide screening and monitoring services; however, DCC had difficulty establishing a nationwide network of providers and contracted with only 10 clinics in seven states. In 2006, NIOSH began to explore other options for providing these services, and in 2007 it took steps toward expanding the provider network.
A special education dispute may involve a variety of issues. According to an Education study published in 2011, the most common topics of disputes were (1) whether schools were providing an appropriate educational environment for certain students; (2) whether schools carried out the education programs as set forth in the IEP; (3) the types of special education and related services, if any, specific children needed; and (4) children's eligibility for IDEA services and whether eligibility determinations were properly made. Methods for resolving special education disputes range from formal hearings and state complaint procedures to less formal, alternative methods. IDEA and its implementing regulations have long required states to provide two formal methods—due process hearings and state complaint resolutions—for resolving disputes between parents and school districts. Although both methods provide avenues for resolving such disputes, these processes differ with respect to who can file each type of complaint, subject matter, timing, procedures, and appeal processes. IDEA provides that parents and school districts have the right to file a due process complaint notice to request a due process hearing on any matter relating to the identification, evaluation, or educational placement of a child, or the provision of a free appropriate public education to a child with a disability. For example, a parent might file a due process complaint over whether a school district is using the appropriate instructional methods for a child. After a complaint is filed but prior to a hearing, IDEA requires the parties to a dispute to attend a resolution meeting where parents discuss their complaint and the facts that form the basis of the complaint and the LEA is given the opportunity to resolve the complaint, unless the parent and the LEA agree in writing to waive the meeting or use IDEA's mediation process. The purpose of the resolution meeting is to achieve a prompt and early resolution of a parent's due process complaint to avoid a more costly and adversarial due process hearing and the potential for civil litigation. If the parties reach an agreement at the resolution meeting, then a due process hearing is not necessary. If the parties do not reach an agreement or choose to waive this meeting, a due process hearing is held. A due process hearing is an administrative proceeding in which an impartial hearing officer receives evidence, provides for the examination and cross-examination of witnesses by each party, and then issues a report of findings of fact and a decision. Either party can appeal a hearing officer's decision in any state court of competent jurisdiction or in federal court without regard to the amount in controversy. Education's regulations pertaining to state complaint procedures permit parents, organizations, and individuals, including those from another state, to file a complaint with the SEA alleging that a public agency has violated a requirement of IDEA, Part B. This differs from a due process complaint in part because, while only parents and public agencies can file due process complaints, any organization or individual, including one from another state, may file a written state complaint.
Once the complaint has been filed, the SEA must carry out an independent on-site investigation if the SEA determines that an investigation is necessary. The SEA must then make a determination and issue a written decision that may include specific procedures for implementation of its decision. In contrast to due process procedures, parties cannot file an appeal in state or federal court. See figure 1 for a comparison of the steps involved in the due process and state complaint processes under IDEA. Education established specific timelines for issuing decisions resulting from due process hearings and state complaint resolutions and set terms by which these timelines can be extended (see table 1). A variety of alternative dispute resolution methods exist that provide opportunities for parties to resolve disputes prior to due process hearings or state complaint resolutions. These include two methods states are required to provide under IDEA—mediation and resolution meetings—as well as others that states have voluntarily implemented. Either a parent or a school district can initiate the mediation process, which must be voluntary for each party. Mediations are conducted by a qualified and impartial individual who is trained in effective mediation techniques and knowledgeable in laws and regulations about special education and related services. If the parties reach an agreement through the mediation process, they must execute a signed, written agreement. According to Education, the agreement is enforceable in any state or federal district court or by the SEA if the state has other procedures that permit parties to seek enforcement of mediation or resolution agreements. Resolution meetings allow parents and districts an opportunity to resolve a dispute without due process hearings by providing an opportunity for them to discuss the due process complaint and the facts that form the basis of that complaint without necessarily having attorneys present. Similar to mediation, if the parties reach agreement in a resolution meeting, they must execute a signed, written agreement that is enforceable in state or federal court. Alternative methods that states have voluntarily developed and implemented are generally meant to help facilitate early resolution of disputes before they proceed to a due process hearing and to preserve relationships between families and educators. Examples of early resolution practices include educator training in conflict resolution, which is designed to equip individuals with skills to better communicate and negotiate their positions and interests, and facilitated IEP meetings in which a facilitator helps keep members of the IEP team focused on the development of the IEP while addressing conflicts and disagreements that may arise during the meeting. To ensure states comply with the requirements of their IDEA grants, Education's Office of Special Education Programs (OSEP) conducts a variety of activities to oversee and assist them, including monitoring states' performance on a variety of indicators. We have previously reported that agencies need to have performance measures that demonstrate results, are limited to a vital few, cover multiple priorities, and provide useful information for decision making in order to track how their programs and activities can contribute to attaining the agency's goals and mission.
Further, past GAO work has shown that agencies successful in measuring performance had performance measures reflecting a range of attributes, such as clarity in how measures are stated and defined. Education uses four performance measures for dispute resolution as part of a system of performance measures to guide SEAs in their implementation of IDEA and in how they report their progress and performance to the department (see table 2). Education established a new IDEA data center to help states, school districts, and other entities build capacity for collecting high-quality IDEA performance data, including dispute resolution data, and makes these data available to the public on the center's website. Education uses a variety of tools, including analyzing states' performance data and conducting desk audits and on-site visits, to monitor states' compliance with IDEA's dispute resolution requirements and target technical assistance. Education has also recognized that involving parents in the education of their children with disabilities is important to preventing or mitigating disputes with school districts. In addition to data on dispute resolution, Education also requires states to provide data on the extent to which parents report that schools facilitated parent involvement to improve services and results for children with disabilities. Education provides several forms of technical assistance to help states implement informal early resolution methods to facilitate the timely resolution of disputes. For example, Education funds the National Center on Dispute Resolution in Special Education to provide states with assistance in implementing a range of dispute resolution options, including those that provide opportunities for early, less costly, and less adversarial dispute resolution. Education also funds a national network of Parent Training and Information Centers that provide parents in each state with information about their rights under IDEA and the options available to them for resolving special education disputes. Lastly, Education provides written guidance on dispute resolution procedures under IDEA. Since 2004, the nationwide rates of due process hearings—a key indicator of serious disputes between parents and school districts and a formal method for resolving disputes—have decreased substantially (see fig. 2). As shown in figure 2, this trend was largely driven by steep rate declines in New York, the District of Columbia, and Puerto Rico—three locations that have relatively high rates of due process hearings. SEA representatives in these locations cited the use of mediation or resolution meetings as key among the reasons for the declines. Additionally, a New York official told us that the use of settlement agreements prior to due process hearing decisions may have also contributed to declines in hearings, while a District of Columbia official pointed to improvements in identifying students with special education needs earlier and delivering services more efficiently. Lastly, a representative for Puerto Rico told us that improvements in how the SEA handles due process complaints and the use of technology have resulted in a decline in hearings. Despite such substantial declines, due process hearings in these locations still accounted for over 80 percent of due process hearings nationally in 2011-2012. For trends in the numbers of due process hearings in these locations and all other states, see figure 3.
Outside of these three locations, the rate of due process hearings has remained consistently low, ranging from 1.5 hearings per 10,000 special education students in 2004-2005 to 0.7 hearings in 2011-2012. These overall low rates of due process hearings are slightly lower than observations we made over a decade ago, when we found that due process hearings occurred at a low rate of about 5 per 10,000 special education students in 2000. Education officials told us that reducing the occurrence of due process hearings was generally positive—considering that hearings can be protracted, adversarial, and costly. However, they suggested that a low number of due process hearings may not necessarily indicate a lack of problems associated with delivering special education services. They suggested that dispute resolution trends should be understood in combination with other information on individual states, such as parents' awareness of the procedural safeguards under IDEA. According to state education officials, certain types of complaints have been associated with the substantially higher rates of due process hearings in New York, the District of Columbia, and Puerto Rico. For example, state education officials in these locations told us that many due process hearings were held because parents and officials from their children's school districts disagreed on whether to place the students in private schools. In addition, a state education official in Puerto Rico told us many due process hearings were held because parents and officials disagreed about the need to provide services related to special education, such as physical therapy or special classroom accommodations. Further, Education officials told us that higher rates of due process hearings in the District of Columbia and Puerto Rico have been driven, in part, by consent decrees, which are agreements entered into by parties to a lawsuit under the supervision of a court. For example, lawsuits were initiated against District of Columbia public schools in 1997, alleging that the District of Columbia failed to provide timely due process hearings and implementation of hearing officer determinations and settlement agreements. The latest consent decree was approved under this litigation in 2006 by the U.S. District Court for the District of Columbia, with one of its goals being for the District of Columbia to achieve and maintain timely due process hearings. Regarding the two alternative methods states are required by IDEA to make available, the rate of mediations held decreased slightly from 2004 to 2012, and the rate of resolution meetings held more than doubled from 2005-06—when states were first required to implement them—to 2006-07 and declined slightly from 2006-07 to 2011-12 (see fig. 4). The slight overall decline in mediations may have resulted, in part, from the decrease in due process complaints filed. According to Education officials, the low rate of resolution meetings in 2005-2006 (6.9 per 10,000 students) can be explained primarily by the lack of awareness about this new requirement among school districts at that time. We found that while mediations occurred less frequently than resolution meetings in 2011-2012, mediations were more likely than resolution meetings to result in agreements. That is, over two-thirds (69 percent) of mediations resulted in agreements, while less than a quarter (22 percent) of all resolution meetings resulted in agreements.
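The rate and agreement-rate figures above reduce to simple arithmetic. The following is a minimal Python sketch of both calculations; the counts are hypothetical placeholders chosen to mirror the magnitudes reported, not GAO's underlying data.

```python
# Illustrative sketch of the two calculations discussed above.
# All counts are hypothetical placeholders, not actual GAO data.

def rate_per_10000(events: int, special_ed_students: int) -> float:
    """Events (e.g., due process hearings) per 10,000 special education students."""
    return events / special_ed_students * 10_000

def agreement_rate(agreements: int, sessions_held: int) -> float:
    """Share of sessions (mediations or resolution meetings) ending in a signed agreement."""
    return agreements / sessions_held

# Hypothetical national figures: 450 hearings among 6.5 million students.
print(f"{rate_per_10000(450, 6_500_000):.1f} hearings per 10,000 students")  # 0.7

# Hypothetical session counts matching the 69 vs. 22 percent comparison.
print(f"Mediation agreements:          {agreement_rate(690, 1_000):.0%}")  # 69%
print(f"Resolution meeting agreements: {agreement_rate(220, 1_000):.0%}")  # 22%
```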
These differences in agreement rates may be due to the fact that resolution meetings are required prior to a due process hearing, unless waived by both parties, while mediations are voluntary for the parties, who may therefore be more open to agreement. In addition to mediation and resolution meetings, states and territories we surveyed reported voluntarily offering a variety of other alternative dispute resolution methods, with two-thirds (33 out of 51) of them reporting offering three or more such methods. Among the most common of these were (1) dispute resolution helplines, (2) facilitated IEP meetings, (3) facilitated resolution meetings, (4) parent-to-parent assistance, and (5) conflict resolution skills training (see fig. 5). (We surveyed a total of 60 states and territories and received a 100 percent overall response rate; however, not all 60 answered every question. Fifty-nine of the 60 states and territories completing the survey responded to the questions about the individual dispute resolution methods.) These methods are briefly described as follows: Dispute resolution helplines. Dedicated staff in the SEA or through an SEA-contracted service provider available to respond to calls or e-mails from the public about dispute resolution options and procedures. For example, California reported maintaining a toll-free number to allow both parents and school staff to contact them for advice. The service is provided in English and Spanish, and helpline personnel may refer parents to support services such as parent centers, family empowerment centers, or technical assistance units. New York reported operating six regional offices staffed by state education personnel who provide parents and other parties with information regarding dispute resolution options and technical assistance. Facilitated IEP meetings. Facilitators who are not part of the IEP team are used when an adversarial climate exists or when an IEP meeting is expected to be particularly complex or controversial. Texas reported it promotes facilitated IEP team meetings by developing a statewide facilitated IEP meetings project to be implemented in the 2014-15 school year. Facilitated resolution meetings. Facilitators are used to help parties resolve a dispute during a resolution meeting. Michigan reported that resolution meetings are facilitated by special education attorneys and help encourage parties to resolve a dispute before it goes to a due process hearing. Parent-to-parent assistance. An SEA-supported service in which parents assist other parents and school district personnel, especially in addressing emerging or active complaints. Maryland reported it maintains family support specialists who work informally with families and school systems to resolve special education disputes. Conflict resolution skills training. Training to enhance the capacity of parents and school, district, and state personnel to communicate, negotiate, and prevent conflict from evolving and becoming problematic. For example, in Iowa, the SEA offers conflict resolution skills training for state administrators, LEA representatives, and parents. On our survey, a large majority of state officials reported mediation and resolution meetings—methods that IDEA requires states make available—as extremely, very, or moderately important to resolving disputes early. Many states also reported methods they have voluntarily implemented as extremely, very, or moderately important.
Some stakeholders cited the potential of these methods to improve communication and trust between parents and schools. Fifty-five states and territories reported that mediation was extremely, very, or moderately important to resolving disputes. Officials commented on our survey or in follow-up discussions that mediation provides parties with an opportunity to reduce tension and to preserve or enhance relationships, and that having a third party facilitate the discussion is beneficial. For example, an Iowa official explained that mediation can allow for more expedient dispute resolution and help to preserve or enhance relationships between parents and schools. Several officials expressed positive views about mediation and noted a high likelihood that mediation resulted in agreements between parents and schools in their states. For example, officials from Rhode Island and Connecticut commented that a majority of mediations resulted in agreements in 2012-13 in their states, and one noted that most of them were reached on the day of the mediation between parents and school districts. Some state officials described on our survey and in follow-up discussions difficulties they encountered in expanding the use of mediation in their state. An Oklahoma official commented that many schools are resistant to the idea of mediation before the filing of a due process complaint because of legal concerns about mediation agreements. New York and D.C. officials told us in follow-up discussions that mediation is underutilized despite its availability, in part because not all parents know that mediation is available for dispute resolution but also because parents may question the independence of mediators in their state. Some parents in one state told us they were not satisfied with the competency or independence of mediators. A national advocacy organization for people with disabilities told us it recommends that families pursue mediation rather than filing a due process complaint because a trained mediator can have a positive impact in bringing parents and schools together. Education's guidance on dispute resolution similarly recognizes that the success of mediation is closely related to the mediator's ability to obtain the trust of both parties and commitment to the process. Forty-five states and territories reported that resolution meetings are an extremely, very, or moderately important method to resolve disputes. Some officials also commented that meetings such as these give parents and the school district a chance to discuss the basis of the dispute and work together to avoid a potentially adversarial due process hearing, which can also lead to improved relationships between parents and their school districts. A few state officials cited a high number of agreements as the result of resolution meetings. For example, a West Virginia official noted that during 2012, all of their requests for due process hearings were resolved at resolution meetings, and a Rhode Island official wrote that over half of its resolution meetings resulted in written agreements in the same year. However, several state officials commented on our survey or in follow-up discussions that some parties prefer to waive the resolution meeting or that by the time the resolution meeting occurs the parties are already entrenched, which limits their ability to reach an agreement before a due process hearing.
For example, a Pennsylvania official told us that often parents who file due process complaints have legal representation and that attorneys in her state generally have little incentive to resolve a dispute prior to a due process hearing. Similarly, attorney members of a national organization representing school boards told us in an interview that, when parents are not represented by an attorney, most disputes are resolved before they proceed to due process; however, if parents are represented by attorneys, disputes are rarely resolved before a due process hearing. Officials from an organization representing children with disabilities commented that resolution meetings are not as effective as facilitated methods where an independent third party assists parents and schools in finding a solution. IDEA does not require that resolution meetings be facilitated; however, several state officials commented on our survey and in follow-up discussions that third-party facilitation of resolution meetings is helpful in bringing about a resolution without resorting to a hearing. For example, an Oklahoma official commented that state officials had found facilitated resolution meetings useful to resolve disputes earlier and noted that without facilitation, parties often found it difficult to reach an agreement. When we asked survey respondents to comment on the alternative dispute resolution methods they voluntarily implemented—that is, those not required by IDEA—more than half of state officials reported that their states offer dispute resolution helplines, while about half offer facilitated IEP meetings, conflict resolution skills training, and parent-to-parent assistance (see table 3). A majority of states and territories reported that the guidance and assistance provided by CADRE—which serves as Education's technical advisor and resource on special education dispute resolution—was extremely, very, or moderately useful to their efforts to successfully implement and expand their early dispute resolution methods. For example, a Pennsylvania official told us in a follow-up discussion that CADRE is the first resource they turn to for information and to obtain a national perspective on alternative dispute resolution issues, and an official from Florida commented that the technical assistance they received from CADRE was excellent. An Illinois official also reported that their state frequently uses CADRE's services, which were instrumental to the development and implementation of facilitated IEP meetings in their state. Over half of the states surveyed reported no challenge or only a slight challenge in implementing or expanding dispute resolution methods due to, for example, lack of expertise or parent or school district resistance to using such methods. In follow-up discussions, some state officials cited the lack of public awareness as a challenge to implementing or expanding the use of alternative dispute resolution methods they have voluntarily implemented, and said that they are addressing this challenge with various strategies. For example, a Pennsylvania state official told us about the difficulty of reaching out to and educating parents in rural and highly urban areas about alternative dispute resolution methods the state has voluntarily implemented because those parents have less access to online information, and said the state partners with parent education networks and a statewide stakeholder council of parent advocates to raise awareness.
Connecticut officials added that the state communicates with various parent groups throughout the year, publicizes alternate dispute resolution methods in its special education bulletins, and disseminates informational materials among parent groups and other state agencies. A Texas official said the state offers workshops at conferences and parent meetings to raise awareness of the state's methods for resolving disputes. Education assesses states' performance on dispute resolution using several different measures (see table 2) but lacks key information about the timeliness of due process hearing decisions, which reduces its ability to monitor dispute resolution effectively. Under its regulations, Education requires states and school districts, where applicable, to ensure that decisions are reached in due process hearings within 45 days after the expiration of the 30-day resolution period or adjusted resolution periods. These regulations also permit a hearing officer to grant specific extensions of this 45-day timeline, at the request of either party to the hearing. According to Education's guidance on performance measures, all states are required to report the number of due process hearing requests that were adjudicated within 45 days, or a timeline that includes any approved extensions. However, this guidance does not direct states to report the amount of time that extensions add to due process hearing decisions. Leading performance measurement practices identified in our past work state that successful performance measures should, among other things, be clearly stated and provide unambiguous information. As shown in figure 6, nearly half of all due process hearing decision timelines were extended in school year 2011-12; in California, New York, and Pennsylvania, the large majority of hearing decisions were made under extended timelines. The decisions in these three states accounted for more than 65 percent of all hearing decisions nationally. Despite the more frequent use of extensions in California and New York, in 2011 they achieved about 99 percent and 86 percent, respectively, of the 100 percent performance target that Education established for hearing decision timeliness. Education's current performance measure creates the appearance that most hearing decisions in California and New York were timely even though extended hearings took an unknown amount of time, and no information is available about whether these extended timeframes affected the provision of services to children with disabilities. Education officials told us that, while they were aware of the use of extensions in states, they did not know how much time extensions add to hearing decisions because they do not collect this information. They stated they were not concerned specifically about the effect of extensions on the timeliness of dispute resolution because they believe extensions are generally used for appropriate reasons, such as providing additional opportunities for resolving disputes prior to a hearing, accommodating parties' schedules, and affording parents sufficient time to resolve disputes.
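A small, hypothetical calculation illustrates how this measure can obscure actual elapsed time. The case records below are invented for illustration; only the 45-day standard and the extension mechanism come from the regulations described above.

```python
# Hypothetical illustration of how the current timeliness measure can
# mask long extensions. Case records are invented; only the 45-day
# standard reflects Education's regulations as described above.
from statistics import median

STANDARD_DAYS = 45  # days allowed after the resolution period

# (days elapsed to decision, days of hearing-officer-approved extension)
hearings = [(40, 0), (44, 0), (130, 90), (200, 160), (320, 280)]

# Current measure: a decision counts as timely if it was made within
# 45 days *or* within a timeline that includes any approved extension.
timely = sum(1 for days, ext in hearings if days <= STANDARD_DAYS + ext)
print(f"Reported as timely: {timely / len(hearings):.0%}")            # 100%

# What the measure does not reveal: how long decisions actually took.
print(f"Median days to decision: {median(d for d, _ in hearings)}")   # 130
```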
A range of special education stakeholders, including state education officials and officials from national organizations that represent parents, students, and school systems, agreed that extensions to hearing decisions are requested by parties for a variety of reasons, including when (1) weather and school vacations result in school closures; (2) some or all parties (parents, district personnel, attorneys, and expert witnesses) are not available; (3) attorneys or parents require additional time to prepare cases; and (4) the parties involved want to allow additional time to schedule a mediation. Though Education does not gather data on how much time extensions add on average to the dispute resolution process, some stakeholders we spoke with provided examples of extensions that typically ranged from a few weeks to several months. Several, including disability advocates and a state education official, stated that some decisions are extended by up to a year or more. Another noted that timelines for hearing decisions in one state typically get extended four or five times. Stakeholders differed in their views on the extent to which the time added to due process decision timelines by extensions affected the education of children with disabilities. Some stakeholders stated there is likely to be little or no negative effect on children’s education because of IDEA’s “stay put” provision, which generally ensures that children will stay in their current educational placement until a dispute resolution proceeding is completed. However, other stakeholders pointed out that extensions could cause some children not to receive appropriate educational services in a timely manner. For example, one stakeholder commented that for children currently placed in a program under the “stay put” provision, an extended hearing decision could mean the child would continue to receive educational services that may be inappropriate. Another stakeholder commented that extended decisions could also adversely affect children for whom “stay put” does not apply, such as those waiting to be identified for educational services. Because Education’s current measures do not provide clear and complete information on the total amount of time that due process hearing decisions take or the reasons for any time extensions, Education and other stakeholders, such as Congress, lack information about when and whether extended decisions could adversely affect the education of children with disabilities. Further, Education lacks information that could be used to identify trends and patterns within a state or across states that could help Education better target its oversight or monitoring. Lastly, as currently reported, states’ results on this measure may provide Congress with a misleading picture of the amount of time that hearing decisions take, particularly in states with high rates of extensions. Education collects data from states on parental involvement in the education of children with disabilities, but these data are not comparable across states, and as a result Education cannot use these data to target its oversight of states’ dispute resolution activities. 
One of Congress’ findings in passing IDEA was that decades of research had demonstrated that the education of children with disabilities can be made more effective by strengthening the role and responsibility of parents, and Education has recognized the importance of parental involvement in fostering relationships between parents and educators and preventing special education disputes. Accordingly, Education developed a performance measure for parental involvement and requires states to collect and report the results of this measure annually. Its measure is defined as the percentage of parents with a child receiving special education services who report that schools facilitated parental involvement as a means of improving services and results for children with disabilities, but states collect and analyze this information in different ways. According to Education officials, although IDEA does not specifically require Education to collect parental involvement data, parental involvement is such a critical factor in ensuring children needing special education services are provided such services that they believe it is important for states to collect and report such data. In 2002, the National Center for Special Education Accountability Monitoring (NCSEAM)—a national technical assistance center funded by Education—developed and validated a scale for states to use to measure parental involvement because of the lack of survey instruments designed to obtain parents’ perceptions of schools’ facilitation of their involvement. To date, over half of states and territories use the NCSEAM scale to collect and report data for this measure. Education officials told us they believed states gather data that are meaningful and useful for their own efforts. However, these officials said that Education cannot determine which states provide high-quality parental involvement data, nor does it use these data to monitor and oversee states’ performance in this area, because states have considerable latitude to determine the methodologies they use to collect the data and these methodologies consequently vary across states. As a result, Education is unable to assess the performance of individual states or compare states’ performance on this measure. The lack of comparable parental involvement data from states can be attributed to a variety of factors, according to PACER Center (Parent Advocacy Coalition for Educational Rights), which recently operated the National Parent Technical Assistance Center and conducted annual analyses of states’ parent involvement data for Education. PACER officials stated they found significant variability among states in their survey instruments, sampling and analysis methods, and the performance targets states set for parental involvement. In previous analysis, PACER reported that in 2011, 34 states used a version of the parental involvement surveys developed by NCSEAM, 10 states used their own state-developed instrument, 10 states adapted questions from the NCSEAM or other parent surveys to develop their own surveys, and 3 states used a combination of surveys. According to PACER officials, states’ use of different survey instruments results in parents responding to questions that may represent varying types of parental involvement. They also noted that states varied in how they analyze survey results for the purposes of reporting on Education’s measure.
For example, in some states only half of the questions require positive responses for the survey to be scored positive overall for parental involvement on Education’s measure; in others, a much higher percentage of questions require positive responses to be scored positive overall for parental involvement. Ultimately, PACER officials suggested that the meaningfulness of parent involvement data depends on the ability to use it to make valid comparisons across states, and this requires that Education establish and require states to adopt consistent data collection and analysis methods. Others have also noted that the lack of consistency in data collection compromises the meaningfulness of the data. For example, a subject matter specialist noted that recent parental involvement data show a wide range of state performance on this measure—from below 20 percent to above 90 percent—raising important questions about validity that may undermine the public’s confidence in the data. The specialist noted that the lack of comparability in state data and the recognition among states that Education does not use the data for oversight may discourage states from improving their parental involvement measures and practices. Education officials said they explored the option of revising the parental involvement measure when some states raised issues about the burden of collecting the data but ultimately decided not to change it after encountering significant resistance from parent and advocate groups. Specifically, officials said they informally proposed that states report information about how they address and measure parental involvement in their state without requiring states to use a quantitative measure or targets. Education presented this proposal to a range of stakeholder groups, including state officials, Parent Training and Information Centers, advocates, and parents, among others. Education officials said parents and advocates were strongly opposed to the proposal to eliminate the current measure, suggesting instead that the department require that all states take a standard approach to collecting parent involvement data. For example, one parent advocacy organization stated that comparability of parental involvement data across states is critical and that Education should require a consistent approach to data collection for all states to ensure that the status of parental involvement for all families of children receiving special education services is reflected in their results. However, one organization representing states commented on the burdens of collecting data for the measure, pointing to the costs to states of mailing out parent surveys and noting that most surveys are not returned. On the other hand, conducting parent surveys does not necessarily entail high costs, according to one subject matter specialist who provided comments to Education. She noted that Florida took a number of steps to lower survey costs without compromising data quality, including moving to a web-based survey, with printed survey forms available to parents on request, and suggested that alternatives to costly survey mailings exist and should be considered. Also, PACER officials have stated that requiring a consistent approach to collecting parental involvement data may decrease some of the data collection burden associated with Education’s measure because states would not need to develop their own approaches.
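As a hypothetical illustration of the scoring variability noted at the start of this discussion, the sketch below scores one identical set of parent survey responses under two different state scoring rules; the responses and thresholds are invented.

```python
# Hypothetical illustration: identical survey responses scored under two
# different state rules yield very different "parental involvement" results.

def percent_positive(surveys: list[list[int]], threshold: float) -> float:
    """A survey counts as positive overall if at least `threshold` of its
    items received a positive (1) response; returns the share of parents
    whose surveys scored positive."""
    positive = sum(1 for items in surveys if sum(items) / len(items) >= threshold)
    return positive / len(surveys)

# Each inner list is one parent's item-level responses (1 = positive).
responses = [[1, 1, 1, 0], [1, 1, 0, 0], [1, 0, 0, 0], [1, 1, 1, 1]]

print(percent_positive(responses, 0.50))  # 0.75 -- half of items must be positive
print(percent_positive(responses, 0.90))  # 0.25 -- a much stricter scoring rule
```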
Education officials told us they question the usefulness of comparable parental involvement data across states for oversight and pointed to data on the overrepresentation of racial and ethnic groups in special education, among other IDEA measures, that are also not comparable across states. However, in 2013 we found these data do not provide a consistent picture of overrepresentation. Specifically, we found that the flexibility to define how states measure overrepresentation resulted in inconsistent definitions and data collection methods across states, and we recommended that Education adopt a standard approach to measurement for all states. In responding to this recommendation, Education said that it did not have all the information necessary to determine whether it is appropriate to develop a single standard for overrepresentation. Education began soliciting public comments from stakeholders in June 2014 to assist the department in considering the development of such a standard. Leading performance measurement practices state that organizations that have progressed toward results-oriented management use performance information as a basis for decision making and that the full benefit of collecting such information is realized only when managers actually use it to manage. Uses of performance information to improve results include monitoring, resource allocation, and identifying and sharing effective practices, among others. The usefulness of performance information also depends, in part, on the extent to which the data are collected using consistent procedures and definitions. Without comparable parental involvement data across states, Education lacks important performance information, which limits its ability to oversee states' dispute resolution activities, including monitoring and identifying problems with parental involvement in states and recommending improvement activities for states to take; recognizing and incentivizing high performance; assessing states' needs for technical assistance on parental involvement and making appropriate resource decisions; and identifying and helping to share promising parental involvement practices among states. Both IDEA and Education recognize the importance of parental involvement in the education of children with disabilities. Having parents who are appropriately informed and involved in decision-making regarding the education of students with disabilities can lead to the resolution of disputes in a more collaborative manner without the use of formal dispute resolution methods and may result in greater trust between parents and school districts and earlier, less adversarial dispute resolution. In addition, resolving disagreements before they escalate and become adversarial is in the collective best interest of parents, students, and districts. It is also important for Education to hold states accountable for timely dispute resolution to protect the educational interests of children with disabilities. In particular, it is important that Education have an effective measure of hearing decision timeliness for monitoring states' dispute resolution performance. However, Education's measure does not provide clear, complete information about the duration of this process, information which is useful for ensuring effective program monitoring and targeted technical assistance.
While Education tracks the number of hearing decisions made within 45 days, without information on the amount of time added to decision timelines by extensions, Education is limited in its ability to monitor states in this area, which could negatively affect children and their families by, for example, delaying the provision of appropriate special education services. Additionally, Education views parental involvement as a critical factor in ensuring children needing special education services are provided such services and for this reason collects parental involvement data from states. However, without making the data more comparable across states, Education may be prevented from rigorously evaluating states' performance in this area and may be limited in its ability to identify promising practices and effectively target assistance to states in their efforts to resolve disputes at an early stage. Additionally, unless Education uses the data it collects, it will not reap their potential benefits in improving performance for the benefit of students and parents. Based on our review, we recommend that the Secretary of Education direct the Office of Special Education Programs to take the following two actions: 1. To increase transparency regarding the timeliness of due process hearing decisions for Congress and better target its monitoring and technical assistance to states, revise its performance measure to collect information from states on the amount of time that extensions add to due process hearing decisions. 2. To assist its oversight of dispute resolution, take steps to improve the comparability of parental involvement data while minimizing the burden to states, and use the data for better management decision making. Steps to consider could include establishing and requiring that states follow standard data collection and analysis procedures. We provided a draft of this report to Education for review and comment. Education's comments are reproduced in appendix I. Education also provided technical comments, which we incorporated into our report where appropriate. Education neither agreed nor disagreed with our recommendations but proposed alternative actions. However, Education's proposed actions will not effectively address the weaknesses we identified in Education's performance measures, and we continue to believe our recommendations are valid. In its comments, Education recognized the importance of promptly and fairly resolving special education disputes between parents and school districts and agreed that additional information on extensions could be useful in targeting its monitoring and technical assistance activities in states with large numbers of hearings issued within extended timelines. However, Education stated that collecting data from all states and territories on the amount of time that extensions add to hearing timelines would not necessarily improve its capacity to ensure that states and territories are properly implementing IDEA's dispute resolution procedures. Instead, Education proposed that it conduct follow-up monitoring with any state that reports 10 or more fully adjudicated hearings in a given year where at least 75 percent of the decisions are issued with extended timelines.
While this approach might be useful for Education's targeting of monitoring and assistance to states—particularly if monitoring includes collecting information about the duration of extensions, why parties request extensions, and the effects of extended timelines on children's education—we believe Education's proposal alone will not correct the potentially misleading picture its timeliness measure creates of the amount of time that hearing decisions actually take. As noted in the report, some stakeholders pointed out that extensions could cause some children not to receive appropriate educational services in a timely manner. Thus, we continue to see advantages in addressing the core weakness of its measure by collecting information from all states on the amount of time that extensions add to hearing decision timelines. Further, Education noted that only 12 states and territories had 10 or more fully adjudicated hearings in 2011-12 and stated that it is not appropriate or efficient to burden all states in collecting these data. However, requiring states with fewer hearings to report this information is unlikely to create significant administrative burden for them, as they would be providing information on a small number of decisions. Without reliable performance information, the public lacks a clear picture of the time required to reach due process hearing decisions and the potential impact on affected children. Regarding our recommendation that it improve the comparability of parental involvement data it collects and use the data for better management decision making, Education stated it does not believe there is a need to improve the comparability of states' parental involvement data. More specifically, Education said that these data are designed to measure state performance against targets that each state sets, based on state-specific needs and circumstances. To improve the quality of parental involvement data, Education said it will work with states through its technical assistance centers to help build their capacity to collect high-quality data. We commend Education's proposal to assist states in this way; however, its approach will continue to require states to report parental involvement data that Education cannot use to assist with oversight and manage for results related to dispute resolution. For example, absent comparable performance information across states, Education could not monitor states with weak performance on parental involvement or identify and assist states in sharing promising parental involvement practices that may help prevent disputes from developing. One approach Education could consider to improve the comparability of parental involvement data would be to establish and require that states follow standard data collection and analysis procedures in reporting the existing measure. Education stated that standardizing the collection of parental involvement data among states would result in increased administrative burden on some states. However, in our report we note that Education's former national technical assistance center on parental involvement suggested that consistent data collection may, in fact, decrease administrative burden because states would not need to develop their own approaches. Until Education collects data that it can use to effectively manage this effort, it will likely be limited in its ability to enhance collaboration between parents and educators, which facilitates resolving disputes earlier through less formal and costly means.
As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees, the Secretary of Education, and other interested parties. In addition, this report will be available at no charge on GAO's website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (617) 788-0580 or nowickij@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs can be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II. Jacqueline M. Nowicki, (617) 788-0580 or nowickij@gao.gov. In addition to the contact named above, Betty Ward-Zukerman, Assistant Director; Edward F. Bodine, Analyst-in-Charge; Grace E. Cho; and John S. Townes made significant contributions to this report. Also contributing to this report were James E. Bennett, Deborah K. Bland, David M. Chrisinger, Lara L. Laufer, Benjamin T. Licht, Ying Long, Cady S. Panetta, James M. Rebbe, Carl M. Ramirez, and Walter K. Vance.
States receiving IDEA funds must ensure that a free appropriate public education is made available to all children with disabilities, and IDEA has long incorporated formal methods to resolve disputes between parents and school districts. The 2004 reauthorization of IDEA expanded the availability of alternative dispute resolution by broadening the use of voluntary mediation and requiring resolution meetings prior to due process hearings. GAO was asked to examine the use of dispute resolution methods since 2004. In this report GAO (1) examines recent trends in dispute resolution methods, (2) reports stakeholders' views on alternative methods, and (3) assesses Education's related performance measures for states. GAO analyzed federal dispute resolution data from 2004 to 2012, conducted a national survey, compared Education's performance measures to leading practices, and interviewed Education officials and stakeholders selected for their knowledge of dispute resolution. From 2004 through 2012, the number of due process hearings—a formal dispute resolution method and a key indicator of serious disputes between parents and school districts under the Individuals with Disabilities Education Act (IDEA)—substantially decreased nationwide as a result of steep declines in New York, Puerto Rico, and the District of Columbia. Officials in these locations largely attributed these declines to greater use of mediation and resolution meetings—methods that IDEA requires states to implement. Despite the declines, officials in these locations said that higher rates of hearings persisted because of disputes over private school placements or special education services. GAO did not find noteworthy trends in the use of other IDEA dispute resolution methods, including state complaints, mediation, and resolution meetings. States and territories reported on GAO's survey that they used mediation, resolution meetings, and other methods they voluntarily implemented to facilitate early resolution of disputes and to avoid potentially adversarial due process hearings. States, territories, and other stakeholders generally reported on GAO's survey or in interviews that alternative methods are important to resolving disputes earlier. Some stakeholders cited the potential of these methods to improve communication and trust between parents and educators. Some state officials said that a lack of public awareness about the methods they have voluntarily implemented was a challenge to expanding their use, but they were addressing this with various kinds of outreach, such as disseminating information through parent organizations. The Department of Education (Education) uses several measures to assess states' performance on dispute resolution but lacks complete information on timeliness and comparable data on parental involvement. Education requires all states to report the number of due process hearing decisions that were made within 45 days or were extended; however, it does not direct states to report the total amount of time that extensions add to due process hearing decisions. Similarly, Education collects data from states on parental involvement—a key to dispute prevention—but does not require consistent collection and reporting, so the data are not comparable nationwide. Leading performance measurement practices state that successful performance measures should be clearly stated and provide unambiguous information.
Without more transparent timeliness data and comparable parental involvement data, Education cannot effectively target its oversight of states' dispute resolution activities. GAO recommends that Education improve measures for overseeing states' dispute resolution performance, including more transparent data on due process hearing decisions and comparable parental involvement data. Education neither agreed nor disagreed with the recommendations and proposed alternative actions. GAO does not believe these proposals will address the weaknesses in Education's performance measures and maintains that the recommendations are valid.
To address the need for improved funding, the Pension Protection Act of 2006 (PPA) included new provisions designed to compel multiemployer plans in poor financial shape to take action to improve their long-term financial condition. The law established two categories of troubled plans—endangered status (commonly referred to as the “yellow zone,” which includes an additional subcategory of “seriously endangered”) and a more serious critical status (commonly referred to as the “red zone”). PPA further requires plans in both categories to develop strategies that include contribution increases, benefit reductions, or both, designed to improve their financial condition. These strategies must generally be adopted through the collective bargaining process, and plans are required to periodically report on progress made in implementing them. Because of the greater severity of critical status plans’ funding condition, such plans have an exception to ERISA’s anti-cutback rule in that they may reduce or eliminate certain so-called “adjustable benefits,” such as early retirement benefits, post-retirement death benefits, and disability benefits for participants not yet retired. For example, if an approved rehabilitation plan eliminates an early retirement benefit, appropriate notice is provided, and the reduction is agreed to in collective bargaining, then participants not yet retired would no longer be able to receive early retirement benefits.

PPA funding requirements took effect in 2008, just as the nation was entering a severe economic crisis. The dramatic decline in the value of stocks and other financial assets in 2008 and the accompanying recession broadly weakened multiemployer plans’ financial health. In response, Congress enacted the Worker, Retiree, and Employer Recovery Act of 2008 (WRERA) and, later, the Preservation of Access to Care for Medicare Beneficiaries and Pension Relief Act of 2010 (PRA) to provide funding relief to help plans navigate the difficult economic environment. For example, WRERA relief measures allowed multiemployer plans to temporarily freeze their funding status and extended the timeframe for plans’ funding improvement or rehabilitation plans from 10 to 13 years. Generally, PRA allows a plan that meets certain solvency requirements to amortize investment losses from the 2008 market collapse over 29 years rather than 15 years, and to recognize such losses in the actuarial value of assets over 10 years instead of 5, so that the negative effects of the market decline are spread out over a longer period.
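To see why a longer amortization schedule provides relief, consider a minimal illustration using a level-payment amortization of a one-time investment loss L at an assumed interest rate i; the 7.5 percent rate and $100 million loss below are assumptions chosen for illustration, not figures from PRA:

\[
A = L \cdot \frac{i}{1 - (1+i)^{-n}}
\]

With L = $100 million and i = 7.5 percent, the annual amortization charge A is about $11.3 million when spread over n = 15 years but about $8.6 million when spread over n = 29 years, roughly a one-quarter reduction in the annual funding charge.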
Overall, since 2009, the funding status of multiemployer plans has improved, but a sizeable number of plans are still critical or endangered. According to plan-reported data, while the funding status of plans has not returned to 2008 levels, the percentage of plans in critical status declined from 34 percent in 2009 to 24 percent in 2011. The percentage of plans in endangered status declined to a greater extent, from 34 percent in 2009 to 16 percent in 2011. However, despite these improvements, 40 percent of plans have not emerged from critical or endangered status.

In addition to the difficulties many multiemployer plans face, the challenges that PBGC faces have led us to designate its insurance programs as a “high-risk” federal program. As we noted earlier this year, because of long-term challenges related to PBGC’s funding structure, the agency’s financial future is uncertain. We noted that weaknesses in its revenue streams continue to undermine the agency’s long-term financial stability.

According to a 2011 survey of 107 critical status plans conducted by the Segal Company, the large majority of critical status plans have developed rehabilitation plans that both increase required employer contributions and reduce participant benefits in an effort to improve plans’ financial positions. Plan officials explained that these changes can have a range of effects and, in some cases, may severely affect employers and participants. While most critical status plans expect to recover from their current funding difficulties, about one-quarter do not and instead seek to delay eventual insolvency.

The 2011 survey showed that the large majority of critical status plans surveyed developed rehabilitation plans that included a combination of contribution increases and benefit reductions to be implemented in the coming years. Of the plans surveyed, 81 proposed increases in employer contributions and reductions to participant benefits, while 14 proposed contribution increases only and 7 proposed benefit reductions only. The magnitude of contribution increases and benefit reductions varied widely among plans. As figure 1 illustrates, the rehabilitation plans of 7 critical status plans proposed no contribution increases, while those of 28 plans proposed first-year increases of 20 percent or more. It is important to note that these data tell only part of the story because some rehabilitation plans call for additional contribution increases in subsequent years.

The vast majority of multiemployer plans surveyed developed rehabilitation plans that reduced benefit accruals, adjustable benefits, or both in an effort to improve the financial condition of the plan. Thirty-two of the 107 multiemployer plans surveyed proposed, in their rehabilitation plans, to reduce accrual rates, and of these, the large majority proposed to cut accruals by more than 20 percent. Fifteen plans proposed to cut accruals by 40 percent or more. These figures do not reflect all of the changes plans made, because some plans reduced accrual rates before developing their rehabilitation plans. Furthermore, a majority of plans—88 out of 107—proposed to reduce one or more adjustable benefits. Typically, these reductions will apply to both active and vested inactive participants, but some plans applied them to only one participant group.

While the data are informative, they do not get to the heart of the issue—what impact will these changes have on employers, participants, and plans themselves? As might be expected, the impacts on employers and participants will vary among plans. In some cases, employers and participants will be able to bear these changes without undue hardship. In other cases, the impacts are expected to be significant. For example, plan officials said employers outside the plan generally do not offer comparable pension or health insurance benefits, and increases in contributions put contributing employers at a significant competitive disadvantage. Similarly, an official of a long-distance trucking firm said high contribution rates have greatly affected the firm’s cost structure and damaged its competitive position. In other cases, plans may have been unable to increase employer contribution rates as much as needed.
For example, our review of one rehabilitation plan revealed that a 15 percent contribution increase resulted from a difficult balance between, among other factors, adequately funding the plan and avoiding excessive strain on contributing employers. According to the plan administrator, plan trustees determined that many employers were in financial distress and that a significant increase in contributions would likely lead to business failures or numerous withdrawals. Even so, five employers withdrew from the plan after the rehabilitation plan was adopted.

Similarly, the reduction or elimination of adjustable benefits was significant and controversial for participants in some cases. Officials of several plans stated that the reduction or elimination of early retirement benefits would be particularly difficult for some workers in physically demanding occupations. At the same time, some plans also eliminated or imposed limitations on disability retirement, so workers who develop physical limitations will have to either continue to work or retire on substantially reduced benefits.

Importantly, while most plans expected to emerge from critical status eventually, a significant number did not and instead projected eventual insolvency. According to the Segal survey, of 107 critical status plans, 67 expect to emerge from critical status within the statutory timeframes of 10 to 13 years, and 12 others expect to do so in an extended rehabilitation period (see figure 2). However, 28 of the plans had determined that no realistic combination of contribution increases and benefit reductions would enable them to emerge from critical status, and that their best approach is to forestall insolvency for as long as possible. Among these plans, the average number of years to expected insolvency was 12, with some expecting insolvency in less than 5 years and others not for more than 30 years. The majority of these plans expected insolvency in 15 or fewer years.

Our contacts with individual plans provide insight into the stark choices faced by these plans. Four of the eight critical status plans we contacted expected to eventually become insolvent, and officials explained that their analyses concluded that no feasible combination of contribution increases or benefit reductions could return them to a healthy level of funding. Several indicated that efforts to do so would likely accelerate the demise of the plan. For example, plan documents noted that the actuary of one plan determined the plan would be able to emerge from critical status only if contribution rates were increased by 24 percent annually for each of the next 10 years—a total increase of more than 850 percent.
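That figure reflects simple compounding, shown here as a check on the arithmetic:

\[
(1.24)^{10} \approx 8.6
\]

After ten consecutive 24 percent increases, contribution rates would stand at roughly 8.6 times their current level, which is consistent with the cited total if the "more than 850 percent" figure describes the cumulative level of rates relative to today.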
The trustees of this plan determined such a proposal would be rejected by both employers and workers, and would likely lead to negotiated withdrawals by employers. This, in turn, could result in insolvency of the plan, possibly as early as 2019. Instead, this plan opted for measures that officials believed were most likely to result in continued participation in the plan, which nonetheless are projected to forestall insolvency until about 2029. Similarly, according to officials of another plan, plan trustees concluded that the contribution increases necessary to avoid insolvency were more than employers in that geographic area could bear. In addition, the plan considered the impact of funding the necessary contribution increases through reductions to base pay. The plan found this infeasible because of the rising cost of living facing employees and their families. Consequently, the plan trustees adopted a rehabilitation plan forestalling insolvency until about 2025.

In recent years, the total amount of financial assistance PBGC has provided to insolvent plans has increased markedly. From fiscal year 2006 to fiscal year 2012, the number of plans needing PBGC’s help increased from 33 to 49. For fiscal year 2012 alone, PBGC provided $95 million in total financial assistance to help 49 insolvent plans provide benefits to about 51,000 retirees. Loans make up the majority of the financial assistance that PBGC has provided to insolvent multiemployer plans. Based on available data from fiscal year 2011, loans totaled $85.5 million and accounted for nearly 75 percent of total financial assistance. However, the loans are not likely to be repaid because most plans never return to solvency. To date, only one plan has ever repaid a loan.

PBGC monitors the financial condition of multiemployer plans to identify plans that are at risk of becoming insolvent—and thus possibly requiring financial assistance. Based on this monitoring, PBGC maintains a contingency list of plans likely to make an insolvency claim and classifies plans according to their risk of insolvency. PBGC also assesses the potential effect that insolvencies among the plans on the contingency list would have on the multiemployer insurance fund. Table 1 outlines the various classifications and definitions based on risk and shows the liability associated with such plans.

Both the number of plans placed on the contingency list and the amount of potential financial assistance have increased steadily over time, with the greatest increases recorded in recent years. According to PBGC data, the number of plans where insolvency is classified as “probable”—plans that are already insolvent or are projected to become insolvent generally within 10 years—increased from 90 plans in fiscal year 2008 to 148 plans in fiscal year 2012. Similarly, the number of plans where insolvency is classified as “reasonably possible”—plans that are projected to become insolvent generally between 10 and 20 years in the future—increased from 1 in fiscal year 2008 to 13 in fiscal year 2012.

Although the number of multiemployer plans on the contingency list has risen sharply, the present value of PBGC’s potential liability for those plans has increased by an even greater factor. For example, the present value of PBGC’s liability associated with “probable” plans increased from $1.8 billion in fiscal year 2008 to $7.0 billion in fiscal year 2012 (see fig. 3). By contrast, for fiscal year 2012, PBGC’s multiemployer insurance fund had only $1.8 billion in total assets, resulting in a net liability of $5.2 billion, as reported in PBGC’s 2012 annual report. Although PBGC’s cash flow is currently positive—because premiums and investment returns on the multiemployer insurance fund assets exceed benefit payments and other assistance—PBGC expects plan insolvencies to more than double by 2017, placing greater demands on the insurance fund and further weakening PBGC’s overall financial position. PBGC expects the liabilities associated with current and future plan insolvencies that are likely to occur in the next 10 years to exhaust the insurance fund by about 2023.
Further, insolvency may be hastened by the projected insolvencies of two very large multiemployer plans whose financial condition has greatly deteriorated in recent years. According to PBGC officials, the two large plans for which insolvency is “reasonably possible” have projected insolvency between 10 and 20 years in the future. Importantly, PBGC’s projection of program insolvency by 2023 does not account for the impact of these two plans because their projected insolvency is more than 10 years in the future. PBGC estimates that, for fiscal year 2012, the liability from these plans accounted for about $26 billion of the $27 billion in liability of plans in the “reasonably possible” category. Taken together, the retirees and beneficiaries of these two plans would represent about a sixfold increase in the number of people receiving guarantee payments in 2012. PBGC estimates that the insolvency of either of these two large plans would exhaust the insurance fund in 2 to 3 years.

Generally, retirees who are participants in insolvent plans receive reduced benefits under PBGC’s statutory guarantee. When a multiemployer plan becomes insolvent and relies on PBGC loans to make benefit payments to plan retirees, retirees will most likely see a reduction in their monthly benefits. PBGC calculates the maximum benefit guarantee based on a participant’s benefit accrual rate and years of credited service (see figure 4). For example, if a retiree has earned 30 years of credited service, the maximum coverage under the guarantee is about $1,073 per month, yielding an annual benefit of $12,870.

Generally, retirees receiving the highest benefits experience the steepest cuts when their plans become insolvent and their benefits are limited by the pension guarantees. According to PBGC, the average monthly benefit received across all multiemployer plans in 2009 was $821. However, according to a PBGC analysis of benefit distributions among retirees of an undisclosed large plan, benefits vary widely across retirees. About half of this plan’s retirees would experience reductions of 15 percent or more in their benefits under the guarantee. Additionally, according to PBGC, one out of five retirees of this plan would experience reductions of 50 percent or more in their benefits under the guarantee. Ultimately, regardless of how long a retiree has worked and the amount of monthly benefits earned, any reduction in benefits—no matter the amount—may have significant effects on retirees’ living standards.
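As a concrete illustration of the guarantee computation, the sketch below assumes the statutory multiemployer formula of 100 percent of the first $11 of the monthly benefit accrual rate plus 75 percent of the next $33, per year of credited service; the function name and example inputs are ours, and the code is an illustrative sketch rather than PBGC's own calculation.

```python
def max_monthly_guarantee(accrual_rate: float, years_of_service: float) -> float:
    """Sketch of the PBGC multiemployer guarantee: 100 percent of the
    first $11 of the monthly benefit accrual rate plus 75 percent of the
    next $33, per year of credited service (assumed statutory formula;
    accrual amounts above $44 are not covered)."""
    covered = min(accrual_rate, 11.0) + 0.75 * max(0.0, min(accrual_rate, 44.0) - 11.0)
    return covered * years_of_service

# 30 years of service at an accrual rate of $44 or more:
# (11 + 0.75 * 33) * 30 = 35.75 * 30 = 1,072.50 a month, about 12,870 a year.
print(max_monthly_guarantee(44.0, 30))  # 1072.5
print(max_monthly_guarantee(44.0, 35))  # 1251.25
```

The second example shows that the $1,251 guarantee in the scenario below is consistent with, for instance, 35 years of service at the maximum covered accrual rate.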
In the event that the multiemployer insurance fund is exhausted, participants relying on the guarantee would receive a small fraction of their already-reduced benefit. Because PBGC does not have statutory authority to raise revenue from any other source, officials said that, once the fund is depleted, the agency would have to rely solely on annual insurance premium receipts from multiemployer plans (which totaled $92 million for fiscal year 2012). The precise effect that the insolvency of the insurance fund would have on retirees receiving the guaranteed benefit depends on a number of factors—primarily the number of guaranteed benefit recipients and PBGC’s annual premium income at that time. However, the impact would likely be severe. For example, if the fund were to be drained by the insolvency of a very large and troubled plan, we estimate the benefits paid by PBGC would be reduced to less than 10 percent of the guarantee level. In this scenario, a retiree who once received a monthly benefit of $2,000 and whose benefit was reduced to $1,251 under the guarantee would see monthly income further reduced to less than $125, or less than $1,500 per year. Additional plan insolvencies would further depress already drastically reduced income levels.

Despite unfavorable economic conditions, most multiemployer plans are currently in adequate financial condition and may remain so for many years. However, a substantial number of plans, including some very large plans, are facing very severe financial difficulties. Many of these plans reported that no realistic combination of contribution increases or allowable benefit reductions—the options available under current law to address their financial condition—will enable them to emerge from critical status. While the multiemployer system was designed to have employers serve as principal guarantors against plan insolvency, PBGC remains the guarantor of last resort. However, given their current financial challenges, neither the troubled multiemployer plans nor PBGC has the flexibility or financial resources to mitigate the effects of anticipated insolvencies. Should a critical mass of plan insolvencies drain the multiemployer insurance fund, PBGC will not be able to pay current and future retirees more than a very small fraction of the benefit they were promised. Consequently, a substantial loss of income in old age looms as a real possibility for the hundreds of thousands of workers and retirees depending on these plans.

In a matter of weeks, we will be releasing a report that goes into greater detail about the issues I have discussed in this testimony and includes possible actions Congress could take to prevent a catastrophic loss of retirement income for hundreds of thousands of retirees who have spent years working, often in dangerous occupations, in some of the nation’s most vital industries. This concludes my prepared statement. I would be happy to answer any questions the committee may have.

Charles Jeszeck, 202-512-7215. In addition to the above, Michael Hartnett, Sharon Hermes, Kun-Fang Lee, David Lehrer, Sheila McCoy, and Frank Todisco made key contributions to this testimony.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Multiemployer pension plans—created by collective bargaining agreements including more than one employer—cover more than 10 million workers and retirees and are insured by the Pension Benefit Guaranty Corporation (PBGC). As a result of investment market declines, employers withdrawing from plans, and demographic challenges in recent years, many multiemployer plans have had large funding shortfalls and face an uncertain future. Also, both PBGC's single-employer and multiemployer insurance programs have been on GAO's list of high-risk federal programs for a number of years.

This testimony provides information on (1) recent actions that multiemployer plans in the worst financial condition have taken to improve their funding levels; and (2) the extent to which plans have relied on PBGC assistance since 2009, and the financial condition of PBGC's multiemployer plan insurance program. GAO analyzed government and industry data; interviewed representatives of selected pension plans and a wide range of industry experts and stakeholders; and reviewed relevant federal laws, regulations, and documentation from plans. GAO is not making recommendations in this testimony. GAO will soon release a separate report on multiemployer pension issues.

The most severely distressed multiemployer plans have taken significant steps to address their funding problems and, while most plans expected improved financial health, some did not. A survey conducted by a large actuarial and consulting firm serving multiemployer plans suggests that the majority of the most severely underfunded plans—those designated as being in critical status—developed plans to increase employer contributions or reduce certain participant benefits. In some cases, these measures will have significant effects on employers and participants. For example, one plan representative stated that contribution increases had damaged some firms' competitive position in the industry. Similarly, reductions or limitations on certain benefits—such as disability benefits—may create hardships for some older workers, such as those with physically demanding jobs. Most of the 107 surveyed plans expected to emerge from critical status, but about 26 percent did not and instead sought to delay eventual insolvency.

PBGC's financial assistance to multiemployer plans continues to increase, and plan insolvencies threaten PBGC's multiemployer insurance fund. As a result of current and anticipated financial assistance, the present value of PBGC's liability for plans that are insolvent or expected to become insolvent within 10 years increased from $1.8 billion to $7.0 billion between fiscal years 2008 and 2012. Yet PBGC's multiemployer insurance fund had only $1.8 billion in total assets in 2012. PBGC officials said that financial assistance to these plans would likely exhaust the fund in or about 2023. If the fund is exhausted, many retirees will see their pension benefits reduced to a small fraction of their original value because only a reduced stream of insurance premium payments will be available to pay benefits.
Consistent with the premise that physicians play a central role in the generation of most health care expenditures, some health care purchasers employ physician profiling to promote efficiency. We selected 10 health care purchasers that profiled physicians in their networks—that is, compared physicians’ performance to an efficiency standard to identify those who practiced inefficiently. To measure efficiency, the purchasers we spoke with generally compared actual spending for physicians’ patients to the expected spending for those same patients, given their clinical and demographic characteristics. Most purchasers said they also evaluated physicians on quality. The purchasers linked their efficiency profiling results and other measures to a range of physician-focused strategies to encourage the efficient provision of care. Some of the purchasers said their profiling efforts produced savings.

The 10 health care purchasers we examined used two basic profiling approaches to identify physicians whose medical practices were inefficient. One approach focused on the costs associated with treating a specific episode of illness—such as a stroke or heart attack. The other approach focused on the costs, within a specific period, associated with the patients in a physician’s practice. Both approaches used information from medical claims data to measure resource use and account for differences in patients’ health status. In addition, both approaches assessed physicians (or physician groups) based on the costs associated with services that they may not have provided directly, such as costs associated with a hospitalization or services provided by a different physician.

Although the methods used by purchasers to predict patient spending varied, all used patient demographics and diagnoses. The methods generally computed efficiency measures as the ratio of actual to expected spending for patients of similar health status. In addition, all of the purchasers we interviewed profiled specialists, and all but one also profiled primary care physicians. Several purchasers said they would profile only physicians who treated an adequate number of cases, since such analyses typically require a minimum sample size to be valid.
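Written out, the measure most purchasers described takes the form of an observed-to-expected ratio; the notation below is a generic rendering of that idea, not any one purchaser's proprietary formula:

\[
O_j = \frac{\sum_{i \in P_j} \text{actual spending}_i}{\sum_{i \in P_j} \text{expected spending}_i}
\]

where P_j is the set of patients attributed to physician j and expected spending is the risk-adjusted prediction given each patient's clinical and demographic characteristics. A ratio near 1 indicates spending in line with that of similar patients; a ratio well above 1 flags potentially inefficient practice.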
The health care purchasers we examined directly tied the results of their profiling methods to incentives that encourage physicians in their networks to practice efficiently. The incentives varied widely in design, application, and severity of consequences. Purchasers used incentives that included educating physicians to encourage more efficient care; designating in their physician directories those physicians who met efficiency and quality standards; dividing physicians into tiers based on efficiency and giving enrollees financial incentives to see physicians in particular tiers; providing bonuses or imposing penalties based on efficiency and quality; and excluding inefficient physicians from the network.

Evidence from our interviews with the health care purchasers suggests that physician profiling programs may have the potential to generate savings for health care purchasers. Three of the 10 purchasers reported that their profiling programs produced savings and provided us with estimates of savings attributable to their physician-focused efficiency efforts. For example, one of those purchasers reported that growth in spending fell from 12 percent to about 1 percent in the first year after it restructured its network as part of its efficiency program, and an actuarial firm hired by the purchaser estimated that about three-quarters of the reduction in expenditure growth was most likely a result of the efficiency program. Three other purchasers suggested their programs might have achieved savings but did not provide savings estimates, while four said they had not attempted to measure savings at the time of our interviews.

Having considered the efforts of other health care purchasers in profiling physicians for efficiency, we conducted our own profiling analysis of physician practices in Medicare and found individual physicians who were likely to practice medicine inefficiently in each of the 12 metropolitan areas studied. We focused our analysis on generalists—physicians who described their specialty as general practice, internal medicine, or family practice—and did not include specialists. We selected areas that were diverse geographically and in terms of Medicare spending per beneficiary.

Under our methodology, we computed the percentage of overly expensive patients in each physician’s Medicare practice. To identify overly expensive patients, we grouped the Medicare beneficiaries in the 12 locations according to their health status, using diagnosis and demographic information. Patients whose total Medicare expenditures—for services provided by all health providers, not just physicians—far exceeded those of other patients in the same health status grouping were classified as overly expensive. Once these patients were identified and linked to the physicians who treated them, we were able to determine which physicians treated a disproportionate share of these patients compared with their generalist peers in the same location. We classified these physicians as outliers—that is, physicians whose proportions of overly expensive patients would occur by chance less than 1 time in 100. We concluded that these outlier physicians were likely to be practicing medicine inefficiently.
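A minimal sketch of how such an outlier test can be implemented appears below. The binomial model, the significance threshold, and the example numbers are illustrative assumptions for exposition; they are not the exact specification used in our analysis.

```python
# Illustrative outlier test: flag a physician whose share of "overly
# expensive" patients would occur by chance less than 1 time in 100,
# assuming each patient is flagged independently at the area-wide rate.
from scipy.stats import binom

def is_outlier(n_patients: int, n_expensive: int, area_rate: float,
               alpha: float = 0.01) -> bool:
    """Return True if observing at least n_expensive flagged patients
    out of n_patients is improbable (p < alpha) under a binomial model
    with the area-wide flag rate."""
    # binom.sf(k - 1, n, p) gives P(X >= k) for X ~ Binomial(n, p).
    p_value = binom.sf(n_expensive - 1, n_patients, area_rate)
    return p_value < alpha

# Example: 12 of a physician's 40 patients are flagged where about
# 10 percent of patients area-wide are overly expensive.
print(is_outlier(40, 12, 0.10))  # True: P(X >= 12) is far below 0.01
```

In practice, small patient panels make such tests unstable, which is one reason purchasers restrict profiling to physicians with an adequate number of cases.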
Based on 2003 Medicare claims data, our analysis found outlier generalist physicians in all 12 metropolitan areas we studied. In two of the areas, outlier generalists accounted for more than 10 percent of the area’s generalist physician population. In the remaining areas, the proportion of outlier generalists ranged from 2 percent to about 6 percent of the area’s generalist population.

Medicare’s data-rich environment is conducive to identifying physicians who are likely to practice medicine inefficiently. Fundamental to this effort is the ability to make statistical comparisons that enable health care purchasers to identify physicians practicing outside of established standards. CMS has the tools to make statistically valid comparisons, including comprehensive medical claims information, sufficient numbers of physicians in most areas to construct adequate sample sizes, and methods to adjust for differences in patient health status. Among the resources available to CMS are the following:

Comprehensive source of medical claims information. CMS maintains a centralized repository, or database, of all Medicare claims that provides a comprehensive source of information on patients’ Medicare-covered medical encounters. Using claims from the central database, each of which includes the beneficiary’s unique identification number, CMS can identify and link patients to the various types of services they received and to the physicians who treated them.

Data samples large enough to ensure meaningful comparisons across physicians. The feasibility of using efficiency measures to compare physicians’ performance depends, in part, on two factors: the availability of enough data on each physician to compute an efficiency measure and numbers of physicians large enough to provide meaningful comparisons. In 2005, Medicare’s 33.6 million fee-for-service enrollees were served by about 618,800 physicians. These figures suggest that CMS has enough clinical and expenditure data to compute efficiency measures for most physicians billing Medicare.

Methods to account for differences in patient health status. Because sicker patients are expected to use more health care resources than healthier patients, the health status of patients must be taken into account to make meaningful comparisons among physicians. Medicare has significant experience with risk adjustment. Specifically, CMS has used increasingly sophisticated risk adjustment methodologies over the past decade to set payment rates for beneficiaries enrolled in managed care plans.

To conduct profiling analyses, CMS would likely make methodological decisions similar to those made by the health care purchasers we interviewed. For example, the health care purchasers we spoke with made choices about whether to profile individual physicians or group practices; which risk adjustment tool was best suited for a purchaser’s physician and enrollee population; whether to measure costs associated with episodes of care or the costs, within a specific time period, associated with the patients in a physician’s practice; and what criteria to use to identify inefficient practice patterns.

Our experience in examining what health care purchasers other than Medicare are doing to improve physician efficiency and in analyzing Medicare claims has given us some insights into the potential of physician profiling to improve Medicare program efficiency. A primary virtue of profiling is that, coupled with incentives to encourage efficiency, it can create a system that operates at the individual physician level. In this way, profiling can address a principal criticism of the SGR system, which operates only at the aggregate physician level. Although savings from physician profiling alone would clearly not be sufficient to correct Medicare’s long-term fiscal imbalance, they could be an important part of a package of reforms aimed at future program sustainability.

Mr. Chairman, this concludes my prepared remarks. I will be pleased to answer any questions you or the subcommittee members may have.

For future contacts regarding this testimony, please contact A. Bruce Steinwald at (202) 512-7101 or at steinwalda@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Other individuals who made key contributions include James Cosgrove and Phyllis Thorburn, Assistant Directors; Todd Anderson; Alex Dworkowitz; Hannah Fein; Gregory Giusto; Richard Lipinski; and Eric Wedum.

This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO.
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Medicare's current system of spending targets, used to moderate spending growth for physician services and annually update physician fees, is problematic. This spending target system—called the sustainable growth rate (SGR) system—adjusts physician fees based on the extent to which actual spending aligns with specified targets. In recent years, because spending has exceeded the targets, the system has called for fee cuts. Since 2003, the cuts have been averted through administrative or legislative action, thus postponing the budgetary consequences of excess spending. Under these circumstances, policymakers are seeking reforms that can help moderate spending growth while ensuring that beneficiaries have appropriate access to care.

For today's hearing, the Subcommittee on Health, House Committee on Energy and Commerce, which is exploring options for improving how Medicare pays physicians, asked GAO to share the preliminary results of its ongoing study related to this topic. GAO's statement addresses (1) approaches taken by other health care purchasers to address physicians' inefficient practice patterns, (2) GAO's efforts to estimate the prevalence of inefficient physicians in Medicare, and (3) the methodological tools available to identify inefficient practice patterns programwide. GAO ensured the reliability of the claims data used in this report by performing appropriate electronic data checks and by interviewing agency officials who were knowledgeable about the data.

Consistent with the premise that physicians play a central role in the generation of health care expenditures, some health care purchasers examine the practice patterns of physicians in their networks to promote efficiency. GAO selected 10 health care purchasers for review because they assess physicians' performance against an efficiency standard. To measure efficiency, the purchasers GAO spoke with generally compared actual spending for physicians' patients to the expected spending for those same patients, given their clinical and demographic characteristics. Most purchasers said they also evaluated physicians on quality. The purchasers linked their efficiency analysis results and other measures to a range of strategies—from steering patients toward the most efficient providers to excluding a physician from the purchaser's provider network because of poor performance. Some of the purchasers said these efforts produced savings.

Having considered the efforts of other health care purchasers in evaluating physicians for efficiency, GAO conducted its own analysis of physician practices in Medicare. GAO used the term efficiency to mean providing and ordering a level of services that is sufficient to meet patients' health care needs but not excessive, given a patient's health status. GAO focused the analysis on generalists—physicians who described their specialty as general practice, internal medicine, or family practice—and selected metropolitan areas that were diverse geographically and in terms of Medicare spending per beneficiary. GAO found that individual physicians who were likely to practice medicine inefficiently were present in each of the 12 metropolitan areas studied. The Centers for Medicare & Medicaid Services (CMS), the agency that administers Medicare, also has the tools to identify physicians who are likely to practice medicine inefficiently.
Specifically, CMS has at its disposal comprehensive medical claims information, sufficient numbers of physicians in most areas to construct adequate sample sizes, and methods to adjust for differences in beneficiary health status. A primary virtue of examining physician practices for efficiency is that the information can be coupled with incentives that operate at the individual physician level, in contrast with the SGR system, which operates at the aggregate physician level. Efforts to improve physician efficiency would not, by themselves, be sufficient to correct Medicare's long-term fiscal imbalance, but such efforts could be an important part of a package of reforms aimed at future program sustainability.